Hadoop - aggregate by prefix

Time: 2015-12-05 04:29:07

Tags: java hadoop mapreduce hadoop2

I have words with prefixes. For example:

city|new york
city|London
travel|yes
...
city|new york

I want to count the occurrences of city|new york, city|London, and so on (that part is the classic word count). However, the reducer output should be a key-value pair like city:{"new york": 2, "london": 1}. In other words, for each city prefix I want to aggregate all the strings and their counts.

public void reduce(Text key, Iterable<IntWritable> values,
               Context context
               ) throws IOException, InterruptedException {
  int sum = 0;
  for (IntWritable val : values) {
    sum += val.get();
  }
  result.set(sum);
  // Instead of just result count, I need something like {"city":{"new york" :2, "london":1}}
  context.write(key, result);
}

Any ideas?

2 answers:

Answer 0: (score: 1)

You can achieve this using the reducer's cleanup() method (assuming you have only one reducer). It is called once, at the end of the reduce task.

I will illustrate this for the "city" data.

Here is the code:

package com.hadooptests;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class Cities {

    public static class CityMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private Text outKey = new Text();
        private IntWritable outValue = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context
        ) throws IOException, InterruptedException {
              // Emit the whole record (e.g. "city|new york") with a count of 1.
              outKey.set(value);
              context.write(outKey, outValue);
        }
    }

    public static class CityReducer
            extends Reducer<Text,IntWritable,Text,Text> {

        // Accumulates the total count per city across all reduce() calls.
        private HashMap<String, Integer> cityCount = new HashMap<String, Integer>();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {

            String keyStr = key.toString();
            if (keyStr.toLowerCase().startsWith("city|")) {
                // Sum all counts for this key (safe even if a combiner
                // has already partially aggregated the 1s) ...
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                // ... then fold the sum into the running per-city total.
                String city = keyStr.split("\\|")[1];
                Integer count = cityCount.get(city);
                cityCount.put(city, count == null ? sum : count + sum);
            }
        }

        @Override
        protected void cleanup(Context context)
                throws IOException, InterruptedException {
            // Called once at the end of the reduce task: compose the
            // accumulated per-city counts into a single JSON-like record.
            StringBuilder output = new StringBuilder("{\"city\":{");
            boolean first = true;
            for (Map.Entry<String, Integer> entry : cityCount.entrySet()) {
                if (!first) {
                    output.append(", ");
                }
                output.append("\"").append(entry.getKey()).append("\":").append(entry.getValue());
                first = false;
            }
            output.append("}}");
            context.write(new Text(output.toString()), new Text(""));
        }
    }


    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "KeyValue");

        job.setJarByClass(Cities.class);
        job.setMapperClass(CityMapper.class);
        job.setReducerClass(CityReducer.class);

        // The map output types (Text, IntWritable) differ from the reduce
        // output types (Text, Text), so both pairs must be set explicitly.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path("/in/in.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/out/"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);

    }
}

Mapper:

  1. It simply emits a count for every key it encounters. For example, if it encounters the record "city|new york", it outputs the (key, value) pair ("city|new york", 1).

Reducer:

    1. For each record, it checks whether the key starts with "city|". It splits the key on the pipe ("|") and stores the count per city in a HashMap.
    2. The reducer also overrides the cleanup() method, which is called once the reduce task ends. There, the contents of the HashMap are composed into the desired output.
    3. In cleanup(), the key written out is the contents of the HashMap and the value is an empty string.
    4. For example, I took the following data as input:

      city|new york
      city|London
      city|new york
      city|new york
      city|Paris
      city|Paris
      

      and I got the following output:

      {"city":{"London":1, "new york":3, "Paris":2}}
      

Answer 1: (score: 1)

It's simple.

  1. From the mapper, emit "city" as the output key and the whole record as the output value.

  2. You will then get all the city records as a single group in the reducer, and the travel records as another group.

  3. Inside the reducer, use a HashMap to count the individual city and travel instances at the finer grain of the values themselves. A sketch of this approach follows below.
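
This answer gives no code, so here is a minimal sketch of the approach, assuming the same pipe-delimited input as above. The class names (PrefixGrouping, PrefixMapper, PrefixReducer) and the command-line input/output paths are illustrative assumptions, not from the answer:

package com.hadooptests;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PrefixGrouping {

    public static class PrefixMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        private Text outKey = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Key on the prefix ("city", "travel", ...) so each prefix
            // forms one reduce group; pass the whole record as the value.
            String[] tokens = value.toString().split("\\|");
            if (tokens.length == 2) {
                outKey.set(tokens[0]);
                context.write(outKey, value);
            }
        }
    }

    public static class PrefixReducer
            extends Reducer<Text, Text, Text, Text> {

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Count each suffix within this prefix group with a HashMap.
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (Text val : values) {
                String suffix = val.toString().split("\\|")[1];
                Integer count = counts.get(suffix);
                counts.put(suffix, count == null ? 1 : count + 1);
            }

            // Compose the JSON-like output, e.g. {"new york":2, "London":1}.
            StringBuilder sb = new StringBuilder("{");
            for (Map.Entry<String, Integer> entry : counts.entrySet()) {
                if (sb.length() > 1) {
                    sb.append(", ");
                }
                sb.append("\"").append(entry.getKey()).append("\":").append(entry.getValue());
            }
            sb.append("}");
            context.write(key, new Text(sb.toString()));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "PrefixGrouping");

        job.setJarByClass(PrefixGrouping.class);
        job.setMapperClass(PrefixMapper.class);
        job.setReducerClass(PrefixReducer.class);

        // Map and reduce outputs are both (Text, Text) here.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Unlike the cleanup() approach in answer 0, this variant does not depend on having a single reducer: all records sharing a prefix arrive in the same reduce group, and each group is aggregated independently.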