I'm trying to use Kryo to read data from an avro file into an RDD. My code compiles fine, but at runtime I get a ClassCastException. Here is what my code does:
SparkConf conf = new SparkConf()...
conf.set("spark.serializer", KryoSerializer.class.getCanonicalName());
conf.set("spark.kryo.registrator", MyKryoRegistrator.class.getName());
JavaSparkContext sc = new JavaSparkContext(conf);
MyKryoRegistrator registers a serializer for MyCustomClass:
public void registerClasses(Kryo kryo) {
    kryo.register(MyCustomClass.class, new MyCustomClassSerializer());
}
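The serializer itself is not shown in the question. As a rough sketch only, such a Kryo serializer could look like the following; the no-arg constructor and the setSomeCustomField accessor are assumptions (getSomeCustomField is the accessor used later in the question), not the actual class:

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

// Sketch only: the field accessors and the no-arg constructor are assumed, not taken from the real class.
public class MyCustomClassSerializer extends Serializer<MyCustomClass> {
    @Override
    public void write(Kryo kryo, Output output, MyCustomClass object) {
        // Write each field explicitly; a single string field is assumed here.
        output.writeString(object.getSomeCustomField());
    }

    @Override
    public MyCustomClass read(Kryo kryo, Input input, Class<MyCustomClass> type) {
        MyCustomClass object = new MyCustomClass();
        object.setSomeCustomField(input.readString());
        return object;
    }
}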
Then I read my data file:
JavaPairRDD<MyCustomClass, NullWritable> records =
        sc.newAPIHadoopFile("file:/path/to/datafile.avro",
            AvroKeyInputFormat.class, MyCustomClass.class, NullWritable.class,
            sc.hadoopConfiguration());
Tuple2<MyCustomClass, NullWritable> first = records.first();
This seems to work fine, but using a debugger I can see that while the RDD has a kClassTag of my.package.containing.MyCustomClass, the variable first contains a Tuple2<AvroKey, NullWritable>, not a Tuple2<MyCustomClass, NullWritable>! And indeed, when the following line executes:
System.out.println("Got a result, custom field is: " + first._1.getSomeCustomField());
I get a ClassCastException.
Am I doing something wrong? And even so, shouldn't I be getting a compilation error rather than a runtime error?
Answer 0 (score 0):
************* EDIT **************
I managed to load custom objects from an avro file, and I created a GitHub repository with the code. However, if the avro lib cannot load the data into the custom class, it returns GenericData$Record objects instead. In that case the Spark Java API does not check the assignment to the custom class, which is why you only get a ClassCastException when you try to access the AvroKey's datum. This violates the data safety guarantee.
************* EDIT **************
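To make that failure mode concrete, here is a minimal sketch, not taken from the repository, of the kind of runtime check that exposes the problem; it reuses the variable first and the getSomeCustomField accessor from the question, and AvroKey is org.apache.avro.mapred.AvroKey:

// Sketch only: at runtime first._1 is actually an AvroKey, even though it is
// statically typed as MyCustomClass, so the unchecked assignment succeeds silently.
Object key = first._1;                          // no exception yet
if (key instanceof AvroKey) {
    Object datum = ((AvroKey<?>) key).datum();
    System.out.println("Avro fell back to: " + datum.getClass()); // typically GenericData$Record
} else {
    System.out.println(((MyCustomClass) key).getSomeCustomField());
}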
For anyone else trying to do this, here is a hack that gets around the problem, though it is not the right solution:
I created a class that reads GenericData.Record from avro files:
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.mapreduce.AvroKeyRecordReader;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GenericRecordFileInputFormat extends FileInputFormat<GenericData.Record, NullWritable> {
    private static final Logger LOG = LoggerFactory.getLogger(GenericRecordFileInputFormat.class);

    /**
     * {@inheritDoc}
     */
    @Override
    public RecordReader<GenericData.Record, NullWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        Schema readerSchema = AvroJob.getInputKeySchema(context.getConfiguration());
        if (null == readerSchema) {
            LOG.warn("Reader schema was not set. Use AvroJob.setInputKeySchema() if desired.");
            LOG.info("Using a reader schema equal to the writer schema.");
        }
        return new GenericDataRecordReader(readerSchema);
    }

    /**
     * Delegates to AvroKeyRecordReader but unwraps the AvroKey, so the key type
     * exposed to Spark is GenericData.Record rather than AvroKey.
     */
    public static class GenericDataRecordReader extends RecordReader<GenericData.Record, NullWritable> {
        AvroKeyRecordReader<GenericData.Record> avroReader;

        public GenericDataRecordReader(Schema readerSchema) {
            super();
            avroReader = new AvroKeyRecordReader<>(readerSchema);
        }

        @Override
        public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
            avroReader.initialize(inputSplit, taskAttemptContext);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            return avroReader.nextKeyValue();
        }

        @Override
        public GenericData.Record getCurrentKey() throws IOException, InterruptedException {
            AvroKey<GenericData.Record> currentKey = avroReader.getCurrentKey();
            return currentKey.datum();
        }

        @Override
        public NullWritable getCurrentValue() throws IOException, InterruptedException {
            return avroReader.getCurrentValue();
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            return avroReader.getProgress();
        }

        @Override
        public void close() throws IOException {
            avroReader.close();
        }
    }
}
Then I load the records:
JavaRDD<GenericData.Record> records = sc.newAPIHadoopFile("file:/path/to/datafile.avro",
        GenericRecordFileInputFormat.class, GenericData.Record.class, NullWritable.class,
        sc.hadoopConfiguration()).keys();
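GenericRecordFileInputFormat logs a warning when no reader schema has been set. If a schema is available, it can be attached to the configuration passed in the call above; a rough sketch, where the .avsc path and the use of a Job wrapper are assumptions (uses org.apache.avro.Schema, org.apache.avro.mapreduce.AvroJob, org.apache.hadoop.mapreduce.Job and java.io.File):

// Sketch only: load a reader schema and attach it to the Hadoop configuration before reading.
Job job = Job.getInstance(sc.hadoopConfiguration());
Schema readerSchema = new Schema.Parser().parse(new File("/path/to/schema.avsc")); // hypothetical schema file
AvroJob.setInputKeySchema(job, readerSchema);
// ...then pass job.getConfiguration() to newAPIHadoopFile instead of sc.hadoopConfiguration().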
Then I convert the records to my custom class using a constructor that accepts a GenericData.Record.
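A rough sketch of that conversion, assuming the GenericData.Record-accepting constructor mentioned above (its body is not shown here):

// Sketch only: relies on the GenericData.Record-accepting constructor mentioned above.
JavaRDD<MyCustomClass> custom = records.map(record -> new MyCustomClass(record));
System.out.println("Got a result, custom field is: " + custom.first().getSomeCustomField());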
Again, not pretty, but it works.