Implementing a custom Writable in Hadoop?

Date: 2013-01-10 00:54:55

Tags: serialization hadoop mapreduce

I have defined a custom Writable class in Hadoop, but Hadoop throws the following error when running my program.

java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
at org.apache.hadoop.io.SortedMapWritable.readFields(SortedMapWritable.java:180)
at EquivalenceClsAggValue.readFields(EquivalenceClsAggValue.java:82)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1282)
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1222)
at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:1301)
at Mondrian$Combine.reduce(Mondrian.java:119)
at Mondrian$Combine.reduce(Mondrian.java:1)
at org.apache.hadoop.mapred.Task$OldCombinerRunner.combine(Task.java:1442)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1436)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1298)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

Caused by: java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:332)....

EquivalenceClsAggValue is the name of my Writable class, and here it is:

public class EquivalenceClsAggValue implements WritableComparable<EquivalenceClsAggValue> {

    public ArrayList<SortedMapWritable> aggValues;

    public EquivalenceClsAggValue() {
        aggValues = new ArrayList<SortedMapWritable>();
    }

    @Override
    public void readFields(DataInput arg0) throws IOException {
        int size = arg0.readInt();

        for (int i = 0; i < size; i++) {
            SortedMapWritable tmp = new SortedMapWritable();
            tmp.readFields(arg0);
            aggValues.add(tmp);
        }
    }

    @Override
    public void write(DataOutput arg0) throws IOException {
        //write the size first
        arg0.write(aggValues.size());

        //write each element
        for (SortedMapWritable s : aggValues) {
            s.write(arg0);
        }
    }
}

I would like to know what the root cause of this problem is.

1 Answer:

Answer 0 (score: 5):

There is a bug in your write(DataOutput) method:

@Override
public void write(DataOutput arg0) throws IOException {
  //write the size first
  // arg0.write(aggValues.size()); // here you're writing an int as a byte

  // try this instead:
  arg0.writeInt(aggValues.size()); // actually write int as an int

  //write each element as before
  for (SortedMapWritable s : aggValues) {
    s.write(arg0);
  }
}

See the API docs for DataOutput.write(int) vs DataOutput.writeInt(int).

I would also amend how you create the SortedMapWritable tmp local variable in readFields, to use ReflectionUtils.newInstance():

@Override
public void readFields(DataInput arg0) throws IOException {

  int size = arg0.readInt();

  for (int i=0;i<size;i++){
    SortedMapWritable tmp = ReflectionUtils.newInstance(
        SortedMapWritable.class, getConf());
    tmp.readFields(arg0);
    aggValues.add(tmp);
  }       
}

Note that to use this, you will also need to amend your class signature to extend Configured (which implements Configurable), so that Hadoop will inject a Configuration object when the instance is first created:

public class EquivalenceClsAggValue 
          extends Configured 
          implements WritableComparable<EquivalenceClsAggValue> {
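
Putting both fixes together, a minimal corrected sketch of the full class could look like this. This is my sketch, not the answerer's code: the compareTo body is a placeholder since the question never showed the original, and the aggValues.clear() call is an extra safeguard because Hadoop reuses Writable instances between records, so stale entries would otherwise accumulate.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.io.SortedMapWritable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.util.ReflectionUtils;

public class EquivalenceClsAggValue extends Configured
    implements WritableComparable<EquivalenceClsAggValue> {

  public ArrayList<SortedMapWritable> aggValues =
      new ArrayList<SortedMapWritable>();

  @Override
  public void readFields(DataInput in) throws IOException {
    aggValues.clear(); // Hadoop reuses Writable instances between records
    int size = in.readInt();
    for (int i = 0; i < size; i++) {
      SortedMapWritable tmp = ReflectionUtils.newInstance(
          SortedMapWritable.class, getConf());
      tmp.readFields(in);
      aggValues.add(tmp);
    }
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(aggValues.size()); // full 4-byte int, matching readInt()
    for (SortedMapWritable s : aggValues) {
      s.write(out);
    }
  }

  @Override
  public int compareTo(EquivalenceClsAggValue other) {
    // placeholder ordering; the original compareTo was not shown
    return Integer.valueOf(aggValues.size())
        .compareTo(Integer.valueOf(other.aggValues.size()));
  }
}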