Wrong reduce output - MapReduce job

Posted: 2015-11-15 15:26:49

Tags: java hadoop mapreduce reduce mapper

First of all, I am a beginner in Java, but I have to finish this MapReduce job task as soon as possible.

I tried to modify the word-count algorithm, because my problem is very similar.

My input is a text file that contains one column of data; a sample is shown below.


The MapReduce job has to take the first string of each line as the key (e.g. Date:2008-10-23Hour:03User:001) and the number 1 or 0 as the value. The reducer's job is simply to sum the values that share the same key (1 + 1 + 0 + 1 + 0 ...), and that's all. The problem is that the final values in the result are too large, and I cannot work out why.
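For example, for the key Date:2008-10-23Hour:02User:000 in the sample below, the reducer should output 1 + 0 + 1 + 1 + 0 + 1 + 1 + 0 + 1 + 1 + 1 = 8.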

Here is the input:

Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 0
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 0
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 0
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:02User:000 1
Date:2008-10-23Hour:03User:000 0
Date:2008-10-23Hour:03User:000 1
Date:2008-10-23Hour:03User:000 1
Date:2008-10-23Hour:03User:000 0
Date:2008-10-23Hour:03User:000 1
Date:2008-10-23Hour:03User:000 1
Date:2008-10-23Hour:03User:000 0
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 0
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 0
Date:2008-10-23Hour:04User:000 1
Date:2008-10-23Hour:04User:000 0
Date:2008-10-23Hour:04User:000 1

These are the wrong outputs:

Date:2008-10-23Hour:02User:000 16
Date:2008-10-23Hour:03User:000 6
Date:2008-10-23Hour:04User:000 14

The correct output should be:

Date:2008-10-23Hour:02User:000 8
Date:2008-10-23Hour:03User:000 3
Date:2008-10-23Hour:04User:000 7

The wrong results are exactly double the correct ones.

Moreover, I print the sum and the key with each value (0 or 1) during the computation so I can check them. Here is the code:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Reducer.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapReduce {

  public static class KeyValueMapper
       extends Mapper<Object, Object, Text, IntWritable> {

    private IntWritable ValueDistanceFunction = new IntWritable();
    private Text DateHourUser = new Text();

    public void map(Object key, Object value, Context context
               ) throws IOException, InterruptedException {

      BufferedReader sc = new BufferedReader(new FileReader("/home/ubuntu/workspace/FileGeneration/Input/Input"));
      String line;
      while ((line = sc.readLine()) != null) {
        StringTokenizer read = new StringTokenizer(line, " ");
        while (read.hasMoreTokens()) {
          DateHourUser.set(read.nextToken());
          ValueDistanceFunction.set(Integer.parseInt(read.nextToken()));
          context.write(DateHourUser, ValueDistanceFunction);
          // I print the results only to check them
          System.out.println(DateHourUser);
          System.out.println(ValueDistanceFunction);
        }
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context
               ) throws IOException, InterruptedException {

      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
        System.out.println(sum);
      }
      result.set(sum);
      context.write(key, result);
    }
  }



  public static void main(String[] args) throws Exception {

    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "KeyValue");

    job.setJarByClass(MapReduce.class);
    job.setMapperClass(KeyValueMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path("/home/ubuntu/workspace/FileGeneration/Input"));
    FileOutputFormat.setOutputPath(job, new Path("/home/ubuntu/workspace/FileGeneration/Output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Many thanks in advance.

1 Answer:

Answer 0 (score: 1)

The problem is in your Mapper code. Why are you reading the input yourself inside the mapper?

These lines are the problem:

BufferedReader sc = new BufferedReader(new FileReader("/home/ubuntu/workspace/FileGeneration/Input/Input"));
String line;
while ((line = sc.readLine()) != null) {
    StringTokenizer read = new StringTokenizer(line, " ");
    while (read.hasMoreTokens()) {

You have already specified the input in the driver class:

FileInputFormat.addInputPath(job, new Path("/home/ubuntu/workspace/FileGeneration/Input"));

There is no need to read the input again in the Mapper. The framework reads the file and passes each line to the Mapper; the line is contained in value. Because map() is called once for every line the framework delivers, re-reading the whole file inside it emits each record multiple times, which is what inflates your sums.
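For reference, with the default TextInputFormat the key handed to map() is the line's byte offset and the value is the line itself. A minimal sketch of the more strongly typed declaration (the Object parameters in your code also work at runtime; this form just makes the contract explicit, and it needs an import of org.apache.hadoop.io.LongWritable):

public static class KeyValueMapper
     extends Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  public void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
      // key   = byte offset of this line within the input split
      // value = one complete line of the input file
  }
}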

Your Mapper code should look like this:

 public void map(Object key, Object value, Context context
            ) throws IOException, InterruptedException {

     StringTokenizer itr = new StringTokenizer(value.toString());
     while (itr.hasMoreTokens()) {
         DateHourUser.set(itr.nextToken());
         ValueDistanceFunction.set(Integer.parseInt(itr.nextToken()));
         context.write(DateHourUser, ValueDistanceFunction);
         // I print the results only to check them
         System.out.println(DateHourUser);
         System.out.println(ValueDistanceFunction);
     }
 }
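As a side note (not needed for the fix), the same parsing can be written with String.split instead of a StringTokenizer. A minimal sketch, assuming the key and the 0/1 value are separated by whitespace and reusing the DateHourUser and ValueDistanceFunction fields from your mapper:

 String[] parts = value.toString().trim().split("\\s+");
 if (parts.length == 2) {
     DateHourUser.set(parts[0]);
     ValueDistanceFunction.set(Integer.parseInt(parts[1]));
     context.write(DateHourUser, ValueDistanceFunction);
 }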

Edit: I took your data and ran the program, and I got the following results. I don't see any problem with the data or the code:

E:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin>hadoop fs -cat /user/mballur/Output/part-r-00000
Date:2008-10-23Hour:02User:000  8
Date:2008-10-23Hour:03User:000  4
Date:2008-10-23Hour:04User:000  7

There is no problem with the program. (Note that for the Hour:03 key the input you posted contains four 1s, so 4, not 3, is the expected sum.)
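One practical note when re-running the job: FileOutputFormat fails with a FileAlreadyExistsException if the output directory already exists, so it has to be removed between runs, for example (path taken from the question's driver):

hadoop fs -rm -r /home/ubuntu/workspace/FileGeneration/Output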