Amazon Hadoop 2.4 + Avro 1.77: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

Asked: 2015-05-08 03:08:36

Tags: java hadoop amazon-s3 avro amazon-emr

I am trying to run the following code on EMR, and it throws the exception above. Does anyone know what might be going wrong? I am using avro-tools-1.77 to compile my schema.

After some research, I have started to suspect it is an Avro problem that could be fixed either by editing the dependencies with Maven or by switching the Amazon Hadoop version to some earlier release. However, I have never used Maven, and changing the Hadoop version would break my other code.
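As a diagnostic sketch (not part of the original question), the reflection snippet below reports whether a named type is loaded as an interface or as a class, and from where. On Hadoop 2 `TaskAttemptContext` should report "interface"; if a bundled Hadoop 1 copy shadows it on the classpath, it reports "class". The class name `ClassShapeCheck` is my own invention for illustration.

```java
public class ClassShapeCheck {
    /** Reports whether the named type loads as an interface or a class,
     *  and from which jar/location, to spot Hadoop 1 vs Hadoop 2 mixups. */
    static String describe(String name) {
        try {
            Class<?> c = Class.forName(name);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            String where = (src == null) ? "bootstrap/JDK" : src.getLocation().toString();
            return (c.isInterface() ? "interface" : "class") + " loaded from " + where;
        } catch (ClassNotFoundException e) {
            return "not found on this classpath";
        }
    }

    public static void main(String[] args) {
        // Run this on the cluster: "class" here would confirm a Hadoop 1 copy
        // (e.g. one bundled inside avro-tools) is shadowing the real Hadoop 2 interface.
        System.out.println(describe("org.apache.hadoop.mapreduce.TaskAttemptContext"));
    }
}
```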

public class MapReduceIndexing extends Configured implements Tool{
static int number_of_documents;
static DynamoStorage ds = new DynamoStorage();  

public static class IndexMapper extends Mapper<AvroKey<DocumentSchema>, NullWritable, Text, IndexValue>{
    public void map(AvroKey<DocumentSchema> key, NullWritable value, Context context) throws IOException, InterruptedException {

        System.out.println("inside map start");

        //some mapper code e.g.
        for(String word : all_words.keySet()){
            context.write(new Text(word), iv);              
        }
        System.out.println("inside map end");
    }
}


public static class IndexReducer extends Reducer<Text, IndexValue, AvroKey<CharSequence>, AvroValue<Integer>> {

    @Override
    public void reduce(Text key, Iterable<IndexValue> iterable_values, Context context) throws IOException, InterruptedException {
        System.out.println("inside reduce start");
        //some reducer code
        System.out.println("inside reduce end");
    }
}


public int run(String[] args) throws Exception {        
    // Use the Configuration injected by ToolRunner rather than creating a fresh one.
    Job job = Job.getInstance(getConf(), "indexing");
    job.setJarByClass(MapReduceIndexing.class);
    job.setJobName("Making inverted index");

    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setInputFormatClass(AvroKeyInputFormat.class);
    job.setMapperClass(IndexMapper.class);
    AvroJob.setInputKeySchema(job, DocumentSchema.getClassSchema());
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IndexValue.class);

    job.setOutputFormatClass(AvroKeyValueOutputFormat.class);
    job.setReducerClass(IndexReducer.class);
    AvroJob.setOutputKeySchema(job, Schema.create(Schema.Type.STRING));
    AvroJob.setOutputValueSchema(job, Schema.create(Schema.Type.INT));

    return (job.waitForCompletion(true) ? 0 : 1);
}


public static void main(String[] args) throws Exception {
    //setting input and output directories

    AWSCredentials credentials = new BasicAWSCredentials("access key", "secret key");
    AmazonS3 s3 = new AmazonS3Client(credentials);      
    ObjectListing object_listing = s3.listObjects(new ListObjectsRequest().withBucketName(args[2]));
    number_of_documents = object_listing.getObjectSummaries().size();

    int res = ToolRunner.run(new MapReduceIndexing(), args);
    System.exit(res);
}}

1 Answer:

Answer 0 (score: 0)

Check whether avro-tools is on your compile classpath. It bundles org.apache.hadoop.mapreduce.TaskAttemptContext, which may conflict with the version in your jar and/or on the cluster. If you need avro-tools for some reason, you must either download a build compiled against your Hadoop version (Cloudera has this in its repository, but I'm not sure where to find one for EMR) or compile avro-tools yourself.
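For the Maven route the answer alludes to, a common fix for this exact error is to depend on `avro` plus `avro-mapred` with the `hadoop2` classifier instead of putting avro-tools on the classpath, so the Avro MapReduce support is compiled against the Hadoop 2 API. A minimal sketch of the pom.xml fragment, assuming Avro 1.7.7:

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.7.7</version>
  </dependency>
  <!-- The hadoop2 classifier selects the build compiled against the
       Hadoop 2 mapreduce API, where TaskAttemptContext is an interface. -->
  <dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-mapred</artifactId>
    <version>1.7.7</version>
    <classifier>hadoop2</classifier>
  </dependency>
</dependencies>
```

Without the classifier, avro-mapred defaults to a Hadoop 1 build, which triggers the "Found interface ... but class was expected" error on a Hadoop 2 cluster.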