The following code was generated by GenerateUnsafeProjection. I pasted the generated code into a Java source file, but it has a compilation error.
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.expressions.UnsafeRow;

public class MyUnsafeProjectionGenerator {
    public java.lang.Object generate(Object[] references) {
        return new SpecificUnsafeProjection(references);
    }
}

class SpecificUnsafeProjection extends org.apache.spark.sql.catalyst.expressions.UnsafeProjection {
    private Object[] references;
    // One UnsafeRowWriter for a 3-column output row (initial variable-length buffer size 0).
    private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] mutableStateArray_0 =
        new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[1];

    public SpecificUnsafeProjection(Object[] references) {
        this.references = references;
        mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(3, 0);
    }

    public void initialize(int partitionIndex) {
    }

    // scala.Function1 needs this bridge method.
    public java.lang.Object apply(java.lang.Object row) { // compilation error occurs here
        return apply((InternalRow) row);
    }

    public UnsafeRow apply(InternalRow i) {
        mutableStateArray_0[0].reset();
        mutableStateArray_0[0].zeroOutNullBytes();

        // Copy input ordinal 0 to output ordinal 0.
        boolean isNull_0 = i.isNullAt(0);
        int value_0 = isNull_0 ? -1 : (i.getInt(0));
        if (isNull_0) {
            mutableStateArray_0[0].setNullAt(0);
        } else {
            mutableStateArray_0[0].write(0, value_0);
        }

        // Copy input ordinal 1 to output ordinal 1.
        boolean isNull_1 = i.isNullAt(1);
        int value_1 = isNull_1 ? -1 : (i.getInt(1));
        if (isNull_1) {
            mutableStateArray_0[0].setNullAt(1);
        } else {
            mutableStateArray_0[0].write(1, value_1);
        }

        // Copy input ordinal 3 to output ordinal 2.
        boolean isNull_2 = i.isNullAt(3);
        int value_2 = isNull_2 ? -1 : (i.getInt(3));
        if (isNull_2) {
            mutableStateArray_0[0].setNullAt(2);
        } else {
            mutableStateArray_0[0].write(2, value_2);
        }

        return (mutableStateArray_0[0].getRow());
    }
}
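For reference, this is roughly how I expect the projection to be driven once it compiles. The driver below is a hypothetical sketch, not part of the generated code; the 4-column GenericInternalRow input and the cast to UnsafeProjection are my assumptions:

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
import org.apache.spark.sql.catalyst.expressions.UnsafeProjection;
import org.apache.spark.sql.catalyst.expressions.UnsafeRow;

public class ProjectionDriver {
    public static void main(String[] args) {
        Object generated = new MyUnsafeProjectionGenerator().generate(new Object[0]);
        UnsafeProjection projection = (UnsafeProjection) generated;
        // Input row with 4 int columns; the projection reads ordinals 0, 1 and 3.
        InternalRow input = new GenericInternalRow(new Object[] {1, 2, 99, 3});
        UnsafeRow output = projection.apply(input);
        System.out.println(output.getInt(2)); // expected: 3 (input ordinal 3 -> output ordinal 2)
    }
}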
There are two apply methods in the generated class.
public UnsafeRow apply(InternalRow i) {
is fine: it implements the abstract method defined in the parent class UnsafeProjection. Since UnsafeProjection's parent class Projection extends InternalRow => InternalRow, which is the Function1 type, the generated code must also provide a second apply implementation:
public java.lang.Object apply(java.lang.Object row) {
    return apply((InternalRow) row);
}
This apply method causes a compilation error, complaining:
apply(Object) in org.apache.spark.sql.SpecificUnsafeProjection clashes with apply(T1) in scala.Function1; both have the same erasure, yet neither overrides the other
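The clash itself has nothing to do with Spark; it can be reproduced with plain javac. Below is a minimal standalone sketch, where the made-up F1 interface stands in for scala.Function1:

// Stand-in for scala.Function1: apply(T1) erases to apply(Object).
interface F1<T1, R> {
    R apply(T1 v1);
}

// Mirrors Projection / UnsafeProjection fixing the type parameters.
abstract class Proj implements F1<String, String> {
}

class SpecificProj extends Proj {
    // Specialized override of F1.apply: this alone compiles, and javac
    // silently generates the Object apply(Object) bridge method itself.
    public String apply(String s) {
        return s;
    }

    // Hand-written bridge: javac rejects it with
    //   name clash: apply(Object) in SpecificProj and apply(T1) in F1
    //   have the same erasure, yet neither overrides the other
    public Object apply(Object s) {
        return apply((String) s);
    }
}

My guess is that this is why the generated code contains the explicit bridge in the first place: Spark compiles it with Janino, which ignores generics and does not generate bridge methods, while javac generates the bridge itself and forbids a hand-written one.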
It doesn't work if I change the apply method's signature from
public java.lang.Object apply(java.lang.Object row) {
to
public InternalRow apply(InternalRow row) {
because it conflicts with the other apply method:
public UnsafeRow apply(InternalRow i) {
Can anybody help solve this problem?