A MapReduce counting example

Posted: 2011-05-28 20:40:37

Tags: java hadoop mapreduce

My question is about MapReduce programming in Java.

Suppose I have the standard WordCount.java example. I want the map function to collect some information and pass it to the reduce function in the form <slaveNode_id, some_info_collected>,

so that I can know which slave node collected which data. Any ideas how?

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();

      public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
          word.set(tokenizer.nextToken());
          output.collect(word, one);
        }
      }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
      public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
      }
    }

    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(WordCount.class);
      conf.setJobName("wordcount");

      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);

      conf.setMapperClass(Map.class);
      conf.setCombinerClass(Reduce.class);
      conf.setReducerClass(Reduce.class);

      conf.setInputFormat(TextInputFormat.class);
      conf.setOutputFormat(TextOutputFormat.class);

      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));

      JobClient.runJob(conf);
    }
}

Thanks!

2 answers:

Answer 0 (score: 5)

What you are asking is for your application (your map-reduce thingy) to know about the infrastructure it runs on.

In general, the answer is that your application doesn't need this information. Each call to a Mapper and each call to a Reducer can be executed on a different node, or on the same node. The beauty of MapReduce is that the end result is identical either way, so for your application: it doesn't matter.

As a consequence, the API has no features to support this request of yours.

Have fun learning Hadoop :)


P.S. The only way I can think of (which is nasty to say the least) is to include a system call of some kind in your Mapper and ask the underlying OS for its name/properties/etc. This kind of construct would make your application very non-portable; i.e. it would not run on Hadoop on Windows or on Amazon.
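A minimal sketch of that "ask the OS" approach (the class and helper names here are hypothetical, not part of any Hadoop API): Java can look up the local hostname via java.net.InetAddress, and a Mapper could call such a helper from map() and emit the result as its key.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class NodeName {
    // Hypothetical helper: asks the JVM/OS for the local hostname.
    // A Mapper could call this from map() and emit the result as part of its key.
    static String localNodeName() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(localNodeName());
    }
}
```

Note that this ties the output to whatever the OS reports as the hostname, which is exactly the portability problem described above.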

Answer 1 (score: 1)

WordCount is the wrong example for this. You simply want to group all the information together, which inverts what WordCount does.

Basically, you just emit the slaveNode_id as an IntWritable (if that is possible) and the information as Text:

  public static class Map extends MapReduceBase implements Mapper<LongWritable, Text,IntWritable, Text> {
    private Text word = new Text();

  public void map(LongWritable key, Text value, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      // you have to split your data here: ID and value
      IntWritable id = new IntWritable(YOUR_ID_HERE);

      output.collect(id, word);
    }
  }
}

The reducer works the same way:

 public static class Reduce extends MapReduceBase implements Reducer<IntWritable, Text, IntWritable, Text> {
  public void reduce(IntWritable key, Iterator<Text> values, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {

      // now you have all the values for a slaveID as key; do whatever you like with them
      // (Iterator has no for-each support, so iterate explicitly)
      while (values.hasNext()) {
          output.collect(key, values.next());
      }
  }
}
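Since the key/value types changed from <Text, IntWritable> to <IntWritable, Text>, the driver from the question also needs adjusting. A sketch of the changed main() body (old org.apache.hadoop.mapred API, same structure as the question's driver; a configuration fragment for illustration, not a tested program):

```java
// Driver changes assumed for the IntWritable/Text types above.
JobConf conf = new JobConf(WordCount.class);
conf.setJobName("nodeinfo");

// Output types must match what the Mapper and Reducer now emit:
conf.setOutputKeyClass(IntWritable.class);
conf.setOutputValueClass(Text.class);

conf.setMapperClass(Map.class);
// No combiner here: this Reduce just forwards values instead of summing,
// so running it as a combiner would add nothing.
conf.setReducerClass(Reduce.class);

conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);

FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

JobClient.runJob(conf);
```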