Error in MapReduce job that takes Avro files and outputs sequence files

Date: 2015-04-10 20:20:55

Tags: hadoop serialization mapreduce avro

I have a MapReduce job that takes Avro files of type T and is supposed to output <Text, Text> pairs as a sequence file. The job is map-only; here is the code for the mapper and the driver:

Mapper:

import java.io.IOException;

import org.apache.avro.mapred.AvroValue;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AvroReaderMapper extends Mapper<LongWritable, AvroValue<ContentPackage>, Text, Text> {

    @Override
    public void map(LongWritable key, AvroValue<ContentPackage> record, Context context)
            throws IOException, InterruptedException {

        // some processing

    }
}

Driver:

import org.apache.avro.mapreduce.AvroJob;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SeqFileGenerator extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new SeqFileGenerator(), args);
        System.exit(res);
    }

    @Override
    public int run(String[] arg0) throws Exception {

        // Job configuration
        Configuration conf = new Configuration();
        Job job = new Job(getConf());
        job.setJarByClass(SeqFileGenerator.class);
        job.setJobName("Sequence File Generator");

        // 1 - set the input and output path
        FileInputFormat.setInputPaths(job, new Path("in"));
        FileOutputFormat.setOutputPath(job, new Path("out"));

        // 2 - set the mapper and reducer class
        job.setMapperClass(AvroReaderMapper.class);

        // 3 - set the input/output format
        AvroJob.setInputValueSchema(job, ContentPackage.SCHEMA$);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        // 4 - run the job
        job.waitForCompletion(true);

        return 0;
    }

}

When I run it, I get the following error message:

java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.avro.mapred.AvroValue
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)

How can I fix this?

1 answer:

Answer 0 (score: 0)

You have to set the correct input format for the job in the driver class. It defaults to TextInputFormat.

Try adding the following line in the driver class:

job.setInputFormatClass(AvroKeyInputFormat.class);
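
As a rough sketch of how the rest of the code would then have to follow (this is not from the original answer, and it assumes the org.apache.avro.mapreduce API together with the asker's generated ContentPackage class): with AvroKeyInputFormat the driver would also put the schema on the key side, e.g. AvroJob.setInputKeySchema(job, ContentPackage.SCHEMA$) instead of setInputValueSchema, and the mapper's input types become AvroKey<ContentPackage> and NullWritable rather than LongWritable and AvroValue<ContentPackage>:

import java.io.IOException;

import org.apache.avro.mapred.AvroKey;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch only: mapper signature that matches AvroKeyInputFormat, which delivers
// AvroKey<ContentPackage> keys and NullWritable values.
public class AvroReaderMapper extends Mapper<AvroKey<ContentPackage>, NullWritable, Text, Text> {

    @Override
    protected void map(AvroKey<ContentPackage> key, NullWritable value, Context context)
            throws IOException, InterruptedException {
        ContentPackage record = key.datum(); // unwrap the Avro record
        // some processing, then emit a <Text, Text> pair, e.g.:
        // context.write(new Text("someKey"), new Text(record.toString()));
    }
}

Either way, the input format and the mapper's input types have to agree: with the default TextInputFormat the framework hands the mapper Text objects, which is exactly the ClassCastException shown above.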