Sending multiple values to the reducer - MapReduce

Time: 2013-01-25 05:41:10

Tags: java hadoop mapreduce

I have written some code that works like a SQL GROUP BY.

The dataset I am working with looks like this:


250788681419,20090906,200937,200909,619,SUNDAY,WEEKEND,ON-NET,MORNING,OUTGOING,VOICE,25078,PAY_AS_YOU_GO_PER_SECOND_PSB,SUCCESSFUL-RELEASEDBYSERVICE,17,0,1,21.25,635-10-112-30455


public class MyMap extends Mapper<LongWritable, Text, Text, DoubleWritable> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        String line = value.toString();
        String[] attribute = line.split(",");
        double rs = Double.parseDouble(attribute[17]);

        String comb = attribute[5].concat(attribute[8].concat(attribute[10]));

        context.write(new Text(comb), new DoubleWritable(rs));
    }
}
public class MyReduce extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {

    protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
            throws IOException, InterruptedException {

        double sum = 0;
        for (DoubleWritable val : values) {
            sum += val.get();
        }
        context.write(key, new DoubleWritable(sum));
    }
}

In the mapper, the value of the 17th field is sent to the reducer so that it can be summed. Now I also want to sum the 14th field. How do I send that to the reducer as well?

1 Answer:

Answer 0 (score: 2):

If your data types are the same, then creating an ArrayWritable subclass should work for this. The class would look something like:

public class DblArrayWritable extends ArrayWritable 
{ 
    public DblArrayWritable() 
    { 
        super(DoubleWritable.class); 
    }
}

Your mapper class would then look like:

public class MyMap extends Mapper<LongWritable, Text, Text, DblArrayWritable> 
{
  public void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException 
  {
    String line = value.toString();
    String[] attribute = line.split(",");

    // Wrap both fields so they can travel together in one ArrayWritable
    DoubleWritable[] values = new DoubleWritable[2];
    values[0] = new DoubleWritable(Double.parseDouble(attribute[14]));
    values[1] = new DoubleWritable(Double.parseDouble(attribute[17]));

    String comb = attribute[5].concat(attribute[8].concat(attribute[10]));

    DblArrayWritable outValue = new DblArrayWritable();
    outValue.set(values);
    context.write(new Text(comb), outValue);
  }
}

In your reducer you should now be able to iterate over the values of the DblArrayWritable.
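
For example, something along these lines should do it (the class name and the Text output here are just one way to emit both sums, not part of the original code):

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Reducer;

public class MyArrayReduce extends Reducer<Text, DblArrayWritable, Text, Text> 
{
    @Override
    protected void reduce(Text key, Iterable<DblArrayWritable> values, Context context)
            throws IOException, InterruptedException 
    {
        double sum14 = 0;   // running total of attribute[14]
        double sum17 = 0;   // running total of attribute[17]

        for (DblArrayWritable arr : values) 
        {
            // get() returns the wrapped Writable[]; index 0 holds field 14, index 1 holds field 17
            Writable[] pair = arr.get();
            sum14 += ((DoubleWritable) pair[0]).get();
            sum17 += ((DoubleWritable) pair[1]).get();
        }

        // Emit both sums for the group key
        context.write(key, new Text(sum14 + "," + sum17));
    }
}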

Based on your sample data, though, the fields look like they may be of different types. You might be able to implement an ObjectArrayWritable class that would do the trick, but I'm not sure about that and I can't see much support for it. If it did work, the class would be:

public class ObjArrayWritable extends ArrayWritable 
{ 
    public ObjArrayWritable() 
    { 
        super(Object.class); 
    }
}

You could also handle this by simply concatenating the values, passing them to the reducer as Text, and then splitting them apart again there.
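
A rough sketch of that approach, assuming a comma as the separator and the same field positions as above (the class names ConcatMap and ConcatReduce are only illustrative):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// (Each public class would live in its own source file; imports are shown once for brevity.)

public class ConcatMap extends Mapper<LongWritable, Text, Text, Text> 
{
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException 
    {
        String[] attribute = value.toString().split(",");
        String comb = attribute[5] + attribute[8] + attribute[10];

        // Pack both numeric fields into a single Text value, separated by a comma
        context.write(new Text(comb), new Text(attribute[14] + "," + attribute[17]));
    }
}

public class ConcatReduce extends Reducer<Text, Text, Text, Text> 
{
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException 
    {
        double sum14 = 0, sum17 = 0;
        for (Text v : values) 
        {
            // Split the packed value back into its two fields
            String[] parts = v.toString().split(",");
            sum14 += Double.parseDouble(parts[0]);
            sum17 += Double.parseDouble(parts[1]);
        }
        context.write(key, new Text(sum14 + "," + sum17));
    }
}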

Another option is to implement your own Writable class. Here is an example of how that could work:

public static class PairWritable implements Writable 
{
    private Double myDouble;
    private String myString;

    // Override the Hadoop serialization/Writable interface methods
    @Override
    public void readFields(DataInput in) throws IOException {
            myDouble = in.readDouble();
            myString = in.readUTF();
    }

    @Override
    public void write(DataOutput out) throws IOException {
            out.writeDouble(myDouble);
            out.writeUTF(myString);
    }

    // End of Writable implementation

    // Getter and setter methods for the myDouble and myString fields
    public void set(Double d, String s) {
        myDouble = d;
        myString = s;
    }

    public Double getDouble() {
        return myDouble;
    }
    public String getString() {
        return myString;
    }

}
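
And a quick sketch of a reducer that consumes this PairWritable, assuming field 17 rides in myDouble and field 14 is carried as a string in myString (the class name and output format are only illustrative):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class PairSumReduce extends Reducer<Text, PairWritable, Text, Text> 
{
    @Override
    protected void reduce(Text key, Iterable<PairWritable> values, Context context)
            throws IOException, InterruptedException 
    {
        double sum17 = 0;   // total of the double field (attribute[17])
        double sum14 = 0;   // total of the field carried as a string (attribute[14])

        for (PairWritable p : values) 
        {
            sum17 += p.getDouble();
            sum14 += Double.parseDouble(p.getString());
        }

        // Emit both totals for the group key
        context.write(key, new Text(sum17 + "," + sum14));
    }
}

On the map side you would just call set(Double.parseDouble(attribute[17]), attribute[14]) on a PairWritable and write it out, and remember to set the map output value class to PairWritable in the job configuration.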