Error when running a MapReduce program

Time: 2011-08-18 07:18:53

Tags: hadoop

The following error occurs when running a Map-Reduce program.

The program sorts the output using TotalOrderPartitioner.
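
For context, a total-order sort with the old mapred API is usually wired up roughly as in the sketch below. This is not the poster's actual code; the Text key type, the input format, the partition-file path, and the sampler parameters are all assumptions:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;
    import org.apache.hadoop.mapred.lib.InputSampler;
    import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

    public class TotalSortJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(TotalSortJob.class);
            conf.setJobName("total-order-sort");
            // Number of reducers comes from -D mapred.reduce.tasks=N.

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            conf.setInputFormat(KeyValueTextInputFormat.class); // assumed input format
            conf.setOutputKeyClass(Text.class);                 // assumed key type
            conf.setOutputValueClass(Text.class);

            // Route keys to reducers by global sort order instead of by hash.
            conf.setPartitionerClass(TotalOrderPartitioner.class);

            // Sample the input and write numReduceTasks - 1 split keys to the
            // partition file that TotalOrderPartitioner reads at task startup.
            Path partitionFile = new Path("/tmp/_partitions.lst"); // hypothetical path
            TotalOrderPartitioner.setPartitionFile(conf, partitionFile);
            InputSampler.Sampler<Text, Text> sampler =
                    new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
            InputSampler.writePartitionFile(conf, sampler);

            // Ship the partition file to every task node.
            DistributedCache.addCacheFile(
                    new URI(partitionFile + "#_partitions.lst"), conf);
            DistributedCache.createSymlink(conf);

            JobClient.runJob(conf);
        }
    }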

I have a 2-node cluster.
When I run the program with -D mapred.reduce.tasks=2 it works fine,
but it fails with the error below when run with the -D mapred.reduce.tasks=3 option.


java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:448)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
        ... 6 more
Caused by: java.lang.IllegalArgumentException: Can't read partitions file
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:91)
        ... 11 more
Caused by: java.io.IOException: Split points are out of order
        at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:78)
        ... 11 more

Please let me know what's wrong here.

Thanks
R

3 Answers:

Answer 0 (score: 2)

The maximum number of reducers you can specify equals the number of nodes in the cluster. Since there are 2 nodes here, the number of reducers cannot be set higher than 2.

Answer 1 (score: 1)

It sounds like there aren't enough keys in your partition file. The docs state that TotalOrderPartitioner requires at least N-1 keys in your partition SequenceFile, where N is the number of reducers.
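
A quick way to verify this against a job is to count the keys in the partition file. A minimal sketch, assuming the file was written with Text keys and its path is passed as the first argument:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class CountPartitionKeys {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // The partition file is a SequenceFile of (key, NullWritable) pairs.
            SequenceFile.Reader reader =
                    new SequenceFile.Reader(fs, new Path(args[0]), conf);
            Text key = new Text();
            int count = 0;
            while (reader.next(key, NullWritable.get())) {
                count++;
            }
            reader.close();

            // N reducers need at least N - 1 split keys.
            System.out.println(count + " split keys -> supports up to "
                    + (count + 1) + " reducers");
        }
    }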

Answer 2 (score: 0)

I ran into this problem too. Checking the source code shows it is caused by sampling: increasing the number of reducers can leave identical elements among the split points, which is what throws this error. It depends on the data. Type hadoop fs -text _partition to view the generated partition file; if your task failed, there will be duplicate elements in it.
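
To confirm this without eyeballing the hadoop fs -text output, the strictly-increasing check that TotalOrderPartitioner performs can be replayed against the partition file. A sketch under the same assumption of Text keys:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class CheckSplitPoints {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            SequenceFile.Reader reader =
                    new SequenceFile.Reader(fs, new Path(args[0]), conf);

            Text prev = null;
            Text key = new Text();
            while (reader.next(key, NullWritable.get())) {
                // Split points must be strictly increasing; a duplicate or
                // smaller key is what triggers "Split points are out of order".
                if (prev != null && prev.compareTo(key) >= 0) {
                    System.out.println("Bad split point: " + prev + " >= " + key);
                }
                if (prev == null) {
                    prev = new Text();
                }
                prev.set(key);
            }
            reader.close();
        }
    }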