Hadoop reduce: previous values concatenated onto the current value

Asked: 2012-03-28 22:26:28

Tags: hadoop reduce

I have this reduce function:

protected void reduce(Text key, Iterable<SortedMapWritable> values, Context context) throws IOException, InterruptedException {
    StringBuilder strOutput = new StringBuilder();
    double sum = 0, i = 0;
    DoubleWritable val = null;

    SortedMapWritable tmp = values.iterator().next();
    strOutput.append("[");
    Set<WritableComparable> keys = tmp.keySet();
    for (WritableComparable mapKey : keys) {                    
        val = (DoubleWritable)tmp.get(mapKey);
        sum += val.get();
        if(i > 0)
            strOutput.append(",");
        strOutput.append(val.get());
        i++;
    }
    strOutput.append("]");

    context.write(new Text(key.toString()), new Text(strOutput.toString()));
    context.write(new Text(key.toString() + "Med"), new Text(Double.toString(sum/i)));
}

For the SortedMapWritable I am using <LongWritable, DoubleWritable> entries, as can be seen in this map function:

protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    final Context ctx = context;
    Configuration conf = new Configuration();
    FileSystem hdfs = FileSystem.get(conf); 
    Path srcPath = new Path(hdfs.getWorkingDirectory() + "/" + value);  
    Path dstPath = new Path("/tmp/");       

    hdfs.copyToLocalFile(srcPath, dstPath);

    final StringBuilder errbuf = new StringBuilder();
    final Pcap pcap = Pcap.openOffline(dstPath.toString() + "/" +value, errbuf);
    if (pcap == null) {
        throw new InterruptedException("Impossible create PCAP file");
    }

    final HashMap<Integer,JxtaSocketFlow> dataFlows = new HashMap<Integer,JxtaSocketFlow>();
    final HashMap<Integer,JxtaSocketFlow> ackFlows = new HashMap<Integer,JxtaSocketFlow>();

    generateHalfSocketFlows(errbuf, pcap, dataFlows, ackFlows);
    final Text jxtaPayloadKey = new Text("JXTA_Payload");
    final Text jxtaRelyRtt = new Text("JXTA_Reliability_RTT");

    SortedMapWritable payOutput = new SortedMapWritable();
    SortedMapWritable rttOutput = new SortedMapWritable();

    for (Integer dataFlowKey : dataFlows.keySet()) {
        JxtaSocketFlow dataFlow = dataFlows.get(dataFlowKey);
        JxtaSocketStatistics stats = dataFlow.getJxtaSocketStatistics();

        payOutput.put(new LongWritable(stats.getEndTime()), new DoubleWritable((stats.getPayload())/1024));         
        HashMap<Integer,Long> rtts = stats.getRtts();
        for (Integer num : rtts.keySet()) {
            LongWritable key = new LongWritable(stats.getEndTime() + num);                                                      
            rttOutput.put(key, new DoubleWritable(rtts.get(num)));
        }
    }

    try{
        ctx.write(jxtaPayloadKey, payOutput);
        ctx.write(jxtaRelyRtt, rttOutput);
    }catch(IOException e){
        e.printStackTrace();
    }catch(InterruptedException e){
        e.printStackTrace();
    }
}

In the reduce function, for each key, the value arrives already concatenated with the previous values.

For example, the keys and values should correctly come out as:

key1 -> {a,b,c}
key2 -> {d,e,f}

But instead the values are:

key1 -> {a,b,c}
key2 -> {a,b,c,d,e,f}

Does anyone know why this happens? How can I avoid it?

2 Answers:

Answer 0 (score: 3)

There is a bug in Hadoop, https://issues.apache.org/jira/browse/HADOOP-5454, that may explain the problem you are running into.

In the code below, row.clear() is needed to prevent the values of one iteration from being appended onto those of the next.

import java.io.IOException;

import org.apache.hadoop.io.SortedMapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import com.google.common.base.Joiner;
import lombok.extern.log4j.Log4j;

// Output types are only illustrative; this reducer just logs each value.
@Log4j
public class StackOverFlowReducer extends Reducer<Text, SortedMapWritable, Text, Text>
{
    @Override
    public void reduce(Text key, Iterable<SortedMapWritable> values, Context context) throws IOException, InterruptedException
    {
        for (SortedMapWritable row : values)
        {
            log.info(String.format("New Map : %s", Joiner.on(",").join(row.entrySet())));
            row.clear(); // https://issues.apache.org/jira/browse/HADOOP-5454
        }
    }
}

I only tested the workaround with a single key. I hope it helps.
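
For reference, here is an untested sketch of how the same clear() workaround could be folded into the reduce() from the question. It assumes you actually want to process every value for a key (the original only reads the first one), and it needs java.util.Map and org.apache.hadoop.io.Writable in addition to the question's imports; the important part is the row.clear() call after each value has been consumed:

protected void reduce(Text key, Iterable<SortedMapWritable> values, Context context) throws IOException, InterruptedException {
    StringBuilder strOutput = new StringBuilder("[");
    double sum = 0;
    long count = 0;

    for (SortedMapWritable row : values) {
        for (Map.Entry<WritableComparable, Writable> entry : row.entrySet()) {
            double val = ((DoubleWritable) entry.getValue()).get();
            sum += val;
            if (count > 0)
                strOutput.append(",");
            strOutput.append(val);
            count++;
        }
        // HADOOP-5454 workaround: clear the reused instance so the next
        // deserialization does not append to the entries already seen here.
        row.clear();
    }
    strOutput.append("]");

    context.write(new Text(key.toString()), new Text(strOutput.toString()));
    context.write(new Text(key.toString() + "Med"), new Text(Double.toString(sum / count)));
}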

Answer 1 (score: 0)

This problem is not related to the code you posted for the reducer, although I would suggest refactoring its variable names to make it easier to follow.

All we can deduce is that your Mapper is emitting those repeated values for each current key, and that appears to be the cause of your duplication.
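
One quick way to check that deduction is to run the job with the reduce phase disabled and look at the raw map output, before any reduce-side object reuse comes into play. Below is a minimal, untested driver sketch; MapOnlyDebugDriver, PcapMapper and the path arguments are placeholders, not names from the question's project:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SortedMapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyDebugDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "map-only debug run");
        job.setJarByClass(MapOnlyDebugDriver.class);
        job.setMapperClass(PcapMapper.class);              // placeholder for the question's mapper class
        job.setNumReduceTasks(0);                          // skip the reduce phase entirely
        job.setOutputKeyClass(Text.class);                 // with no reducers, the map output is written as-is
        job.setOutputValueClass(SortedMapWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If the map output already contains the duplicated entries, the mapper really is the culprit; if it looks correct, the reduce-side reuse described in HADOOP-5454 above is the more likely explanation.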