Error - Hadoop word count program in MapReduce

Time: 2015-03-08 21:51:17

Tags: java apache hadoop

I am new to Hadoop, so please forgive me if this seems like a silly question.

I am running my MapReduce program and get the following error:

java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)

Any help is appreciated.

public class WordCount {

// Mapper Class
public static class MapperClass extends Mapper<Object, Text, Text, IntWritable>{


    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Mapper method defined
    public void mapperMethod(Object key, Text lineContent, Context context) {
        try {
            StringTokenizer strToken = new StringTokenizer(lineContent.toString());
            // Iterating through the line
            while (strToken.hasMoreTokens()) {
                word.set(strToken.nextToken());
                try {
                    context.write(word, one);
                } catch (Exception e) {
                    System.err.println(new Date() + "  ---> Cannot write data to hadoop in Mapper.");
                    e.printStackTrace();
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
// Reducer Class
public static class ReducerClass extends Reducer<Text, IntWritable, Text, IntWritable>{

    private IntWritable result = new IntWritable();

    // Reducer method
    public void reduce(Text key, Iterable<IntWritable> values, Context context) {
        try {
            int sum = 0;
            for (IntWritable itr : values) {
                sum += itr.get();
            }
            result.set(sum);
            try {
                context.write(key, result);
            } catch (Exception e) {
                System.err.println(new Date() + " ---> Error while sending data to Hadoop in Reducer");
                e.printStackTrace();
            }
        } catch (Exception err) {
            err.printStackTrace();
        }
    }
}


public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    try {
        Configuration conf = new Configuration();
        String[] arguments = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (arguments.length != 2) {
            System.err.println("Enter both an input and an output location.");
            System.exit(1);
        }
        Job job = new Job(conf, "Simple Word Count");

        job.setJarByClass(WordCount.class);
        job.setMapperClass(MapperClass.class);
        job.setReducerClass(ReducerClass.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(arguments[0]));
        FileOutputFormat.setOutputPath(job, new Path(arguments[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

}

1 answer:

Answer 0 (score: 0)

You need to override the map method in your Mapper class rather than define a new method of your own. Because your program has no map override, the framework falls back to the default identity mapper, so your job effectively becomes a reduce-only job. The identity mapper passes its input straight through as LongWritable, Text pairs, but you declared Text and IntWritable as the map output types, hence the type mismatch. (Equivalently, in the new org.apache.hadoop.mapreduce API you are using, rename mapperMethod to map and mark it with @Override.)

Hope this explains it.
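The failure mode is easy to reproduce outside Hadoop: a framework only invokes a hook method by its declared name, so a subclass method with a different name is silently ignored and the base-class default runs instead. A minimal plain-Java sketch (not Hadoop itself; the Handler/runFramework names are made up for illustration):

```java
// Analogy for the bug: the "framework" calls map() by name, so a subclass
// method with any other name never runs and the identity default is used.
class Handler {
    public String map(String input) {
        return input; // default: identity, like Hadoop's identity mapper
    }
}

class BrokenHandler extends Handler {
    // Wrong name: this does NOT override map(), so it is never called.
    public String mapperMethod(String input) {
        return input.toUpperCase();
    }
}

class FixedHandler extends Handler {
    @Override // the compiler now verifies this really overrides map()
    public String map(String input) {
        return input.toUpperCase();
    }
}

public class OverrideDemo {
    static String runFramework(Handler h, String input) {
        return h.map(input); // the framework only knows about map()
    }

    public static void main(String[] args) {
        System.out.println(runFramework(new BrokenHandler(), "hello")); // hello (identity!)
        System.out.println(runFramework(new FixedHandler(), "hello"));  // HELLO
    }
}
```

This is why annotating the intended hook with @Override is worth the habit: with it, the misnamed method in BrokenHandler would have been a compile-time error instead of a silent fallback.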

public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}
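For sanity-checking expected output on small inputs, the tokenize-and-count logic that the mapper (emit (word, 1)) and reducer (sum per key) implement together can be sketched in plain Java with no Hadoop dependency (WordCountCore is a made-up name for this standalone sketch):

```java
import java.util.HashMap;
import java.util.StringTokenizer;

public class WordCountCore {
    // Map phase (emit (word, 1)) and reduce phase (sum counts per key),
    // collapsed into a single in-memory pass over one input string.
    static HashMap<String, Integer> countWords(String text) {
        HashMap<String, Integer> counts = new HashMap<>();
        StringTokenizer tokenizer = new StringTokenizer(text);
        while (tokenizer.hasMoreTokens()) {
            // merge() adds 1 to the existing count, or inserts 1 if absent
            counts.merge(tokenizer.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        HashMap<String, Integer> counts = countWords("to be or not to be");
        System.out.println(counts.get("to"));  // 2
        System.out.println(counts.get("be"));  // 2
        System.out.println(counts.get("not")); // 1
    }
}
```

Running the real job on a one-line file should produce the same per-word totals, which makes this a quick way to verify a cluster run.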