MapReduce Reducer's KeyOut Type

Asked: 2019-04-17 02:09:52

Tags: hadoop mapreduce

I wrote a Map and Reduce program where the Reducer's output key and value types differ from its input (i.e., the Mapper's output) types. I made the appropriate changes in the Driver class. Here is the exception I get when running it:

INFO mapreduce.Job: Task Id : attempt_1550670375771_4211_m_000003_2, Status : FAILED
Error: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.FloatWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1084)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:721)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at com.hirw.maxcloseprice.MyHadoopMapper.map(MyHadoopMapper.java:20)
    at com.hirw.maxcloseprice.MyHadoopMapper.map(MyHadoopMapper.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

19/04/16 22:24:50 INFO mapreduce.Job: map 100% reduce 100%
19/04/16 22:24:50 INFO mapreduce.Job: Job job_1550670375771_4211 failed with state FAILED due to: Task failed task_1550670375771_4211_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

It works when the Reducer's KeyOut and ValueOut types are the same as the Mapper's, but fails when they differ.

My Mapper class:

public class MyHadoopMapper extends Mapper<LongWritable, Text, Text, FloatWritable> {

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

    String[] recordItems = value.toString().split(",");

    String stock = recordItems[1];
    Float stockValue = Float.parseFloat(recordItems[6]);

    context.write(new Text(stock), new FloatWritable(stockValue));
}

}

The Reducer class:

public class MyHadoopReducer extends Reducer<Text, FloatWritable, Text, Text> {

@Override
public void reduce(Text key, Iterable<FloatWritable> values, Context context
        ) throws IOException, InterruptedException {

    Float maxVal = Float.MIN_VALUE;
    for (FloatWritable stockValue : values) {
        maxVal = stockValue.get() > maxVal ? stockValue.get() : maxVal;
    }

    context.write(key, new Text(String.valueOf(maxVal)));
}

}

And the Driver class:

public class MyHadoopDriver {

public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    // TODO Auto-generated method stub

    Job hadoopJob = new Job();
    hadoopJob.setJarByClass(MyHadoopDriver.class);
    hadoopJob.setJobName("MyStockPrice");

    FileInputFormat.addInputPath(hadoopJob, new Path("/user/hirw/input/stocks"));
    FileOutputFormat.setOutputPath(hadoopJob, new Path("stocksData"));

    hadoopJob.setInputFormatClass(TextInputFormat.class);
    hadoopJob.setOutputFormatClass(TextOutputFormat.class);

    hadoopJob.setMapperClass(MyHadoopMapper.class);
    hadoopJob.setReducerClass(MyHadoopReducer.class);

    hadoopJob.setCombinerClass(MyHadoopReducer.class);

    hadoopJob.setOutputKeyClass(Text.class);
    hadoopJob.setOutputValueClass(Text.class);

    System.exit(hadoopJob.waitForCompletion(true) ? 0: 1);
}

}

2 Answers:

Answer 0 (score: 0)

By default, the mapper's output types are taken from the job's output classes, which you set to Text via setOutputKeyClass/setOutputValueClass, while your mapper writes FloatWritable values. That is what the exception is telling you. You need to specify the mapper's output value type explicitly:

hadoopJob.setMapOutputValueClass(FloatWritable.class);
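As a sketch, the relevant driver lines would then look like this (using the asker's hadoopJob variable; the setMapOutputKeyClass call is redundant here because the map key type already matches the job's output key class, but it is shown for completeness):

    // Map output types differ from the final output types, so declare them explicitly.
    hadoopJob.setMapOutputKeyClass(Text.class);            // mapper emits Text keys
    hadoopJob.setMapOutputValueClass(FloatWritable.class); // mapper emits FloatWritable values

    // Final (reducer) output types, unchanged from the original driver.
    hadoopJob.setOutputKeyClass(Text.class);
    hadoopJob.setOutputValueClass(Text.class);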

Answer 1 (score: 0)

Remove the combiner, or write a new one whose output key and value types match the map output types (Text and FloatWritable). A combiner's output is fed back into the shuffle as map output, so your reducer, which emits Text values, cannot double as the combiner.
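For illustration, a minimal sketch of such a combiner (the class name MyHadoopCombiner is hypothetical, not from the original post): its output types match the map output, so it can safely run between map and reduce.

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyHadoopCombiner extends Reducer<Text, FloatWritable, Text, FloatWritable> {

    @Override
    public void reduce(Text key, Iterable<FloatWritable> values, Context context)
            throws IOException, InterruptedException {
        // Same max logic as the reducer, but emitting FloatWritable so the
        // combiner output matches the map output and the reducer input.
        float maxVal = Float.NEGATIVE_INFINITY;
        for (FloatWritable stockValue : values) {
            maxVal = Math.max(maxVal, stockValue.get());
        }
        context.write(key, new FloatWritable(maxVal));
    }
}

The driver would then use hadoopJob.setCombinerClass(MyHadoopCombiner.class); the reducer itself can keep emitting Text, since only the combiner's output types must match the map output.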