I have a simple map-reduce program in which my map and reduce primitives look like this:
map(K,V)=(Text,OutputAggregator)
reduce(Text,OutputAggregator)=(Text,Text)
The important point is that my map function emits an object of type OutputAggregator, which is my own class implementing the Writable interface. However, my reduce fails with the exception below. More specifically, the readFields() function throws the exception. Any clue as to the cause? I am using Hadoop 0.18.3.
10/09/19 04:04:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
10/09/19 04:04:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.JobClient: Running job: job_local_0001
10/09/19 04:04:59 INFO mapred.MapTask: numReduceTasks: 1
10/09/19 04:04:59 INFO mapred.MapTask: io.sort.mb = 100
10/09/19 04:04:59 INFO mapred.MapTask: data buffer = 79691776/99614720
10/09/19 04:04:59 INFO mapred.MapTask: record buffer = 262144/327680
Length = 10
10
10/09/19 04:04:59 INFO mapred.MapTask: Starting flush of map output
10/09/19 04:04:59 INFO mapred.MapTask: bufstart = 0; bufend = 231; bufvoid = 99614720
10/09/19 04:04:59 INFO mapred.MapTask: kvstart = 0; kvend = 10; length = 327680
gl_books
10/09/19 04:04:59 WARN mapred.LocalJobRunner: job_local_0001
java.lang.NullPointerException
at org.myorg.OutputAggregator.readFields(OutputAggregator.java:46)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:751)
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:691)
at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:770)
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:117)
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:1)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask.java:904)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:785)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:157)
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113)
at org.myorg.xxxParallelizer.main(xxxParallelizer.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
Answer 0 (score: 3)
When posting a question about custom code: post the relevant code fragment. So the contents of line 46 and the few lines before and after it would really help... :)
That said, this may help:
When writing your own Writable class, the pitfall is that Hadoop reuses the actual instance of that class over and over again. You do not get a shiny new instance between calls to readFields.
So at the start of the readFields method you must assume the object you are in is full of "junk" that has to be cleared before you continue.
My suggestion is to implement a clear() method that completely wipes the current instance and resets it to the state it was in right after creation, once the constructor finished. And of course you call that method as the very first thing in readFields, for both the key and the value.
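For illustration, here is a minimal sketch of that pattern. The fields (count, parts) are hypothetical stand-ins, since the real OutputAggregator code was not posted:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Writable;

public class OutputAggregator implements Writable {
    // Hypothetical state; substitute whatever your aggregator actually holds.
    private int count;
    private List<String> parts = new ArrayList<String>();

    // Reset the instance to its freshly-constructed state.
    public void clear() {
        count = 0;
        parts.clear();
    }

    public void readFields(DataInput in) throws IOException {
        clear(); // wipe leftovers from the previous record Hadoop deserialized into this instance
        count = in.readInt();
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            parts.add(in.readUTF());
        }
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(count);
        out.writeInt(parts.size());
        for (String p : parts) {
            out.writeUTF(p);
        }
    }
}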
HTH
Answer 1 (score: 1)
In addition to Niels Basjes' answer: just initialize your member variables in the empty constructor (which you must provide, otherwise Hadoop cannot instantiate your object), for example:
public OutputAggregator() {
    this.member = new IntWritable();
    ...
}
assuming this.member is of type IntWritable.
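Putting both answers together, a hedged sketch of what the relevant parts of such a class could look like (member is just a placeholder name, not taken from the original code):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class OutputAggregator implements Writable {
    private IntWritable member;

    public OutputAggregator() {
        this.member = new IntWritable(); // never leave it null
    }

    public void readFields(DataInput in) throws IOException {
        member.readFields(in); // safe: member is guaranteed non-null here
    }

    public void write(DataOutput out) throws IOException {
        member.write(out);
    }
}

If member were still null when Hadoop called readFields, dereferencing it there would produce exactly the kind of NullPointerException shown in the stack trace above.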