Hadoop error "error in shuffle in fetcher": Exceeded MAX_FAILED_UNIQUE_FETCHES

Date: 2014-06-05 17:07:11

Tags: hadoop mapreduce

I am new to Hadoop. I have set up a Kerberos-secured Hadoop cluster (a master and one slave) on VirtualBox. I am trying to run the 'pi' job from the Hadoop examples, but it aborts with the error Exceeded MAX_FAILED_UNIQUE_FETCHES. I searched for this error, but none of the solutions given on the internet seem to apply to my case; maybe I am missing something obvious. I even tried removing the slave from the etc/hadoop/slaves file to see whether the job would run on the master alone, but it fails with the same error. The log is below. I am running this on 64-bit Ubuntu 14.04 virtual machines. Any help is appreciated.

montauk@montauk-vmaster:/usr/local/hadoop$ sudo -u yarn bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar pi 2 10
Number of Maps  = 2
Samples per Map = 10
OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/05 12:04:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/06/05 12:04:49 INFO client.RMProxy: Connecting to ResourceManager at /192.168.0.29:8040
14/06/05 12:04:50 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 17 for yarn on 192.168.0.29:54310
14/06/05 12:04:50 INFO security.TokenCache: Got dt for hdfs://192.168.0.29:54310; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:50 INFO input.FileInputFormat: Total input paths to process : 2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: number of splits:2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1401975262053_0007
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:53 INFO impl.YarnClientImpl: Submitted application application_1401975262053_0007
14/06/05 12:04:53 INFO mapreduce.Job: The url to track the job: http://montauk-vmaster:8088/proxy/application_1401975262053_0007/
14/06/05 12:04:53 INFO mapreduce.Job: Running job: job_1401975262053_0007
14/06/05 12:05:29 INFO mapreduce.Job: Job job_1401975262053_0007 running in uber mode : false
14/06/05 12:05:29 INFO mapreduce.Job:  map 0% reduce 0%
14/06/05 12:06:04 INFO mapreduce.Job:  map 50% reduce 0%
14/06/05 12:06:06 INFO mapreduce.Job:  map 100% reduce 0%
14/06/05 12:06:34 INFO mapreduce.Job:  map 100% reduce 100%
14/06/05 12:06:34 INFO mapreduce.Job: Task Id : attempt_1401975262053_0007_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:323)
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:245)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:347)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)

3 Answers:

Answer 0 (score: 4)

I ran into the same problem as you when installing CDH 5.1.0 from a tarball with Kerberos security enabled. The solutions Google turned up blamed insufficient memory, but I don't think that was my case, since my input was very small (52 KB).

After several days of digging, I found the root cause at this link.

To summarize, the solution from that link is:

  1. Add the following property to yarn-site.xml, even though it is the default in yarn-default.xml:

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

  2. Remove the yarn.nodemanager.local-dirs property so that the default under /tmp is used, then run:

    mkdir -p /tmp/hadoop-yarn/nm-local-dir
    chown yarn:yarn /tmp/hadoop-yarn/nm-local-dir

  3. The problem boils down to this:

    Once the yarn.nodemanager.local-dirs property is set, the yarn.nodemanager.aux-services.mapreduce_shuffle.class property from yarn-default.xml no longer takes effect.

    I have not yet found the root cause of that.

Answer 1 (score: 0)

I had the same problem. My MapReduce job had no reducer, and I solved it with job.setNumReduceTasks(0);

Answer 2 (score: 0)

  1. Change the property in yarn-site.xml and create the directory:

    yarn.nodemanager.local-dirs = /tmp

    mkdir -p /tmp/hadoop-yarn/nm-local-dir
    chown yarn:yarn /tmp/hadoop-yarn/nm-local-dir

  2. Tune the resource properties in mapred-site.xml:

    mapreduce.reduce.shuffle.input.buffer.percent = 0.50
    mapreduce.reduce.shuffle.memory.limit.percent = 0.2
    mapreduce.reduce.shuffle.parallelcopies = 4

  3. Restart the ResourceManager and NodeManager on their respective nodes.