hadoop-streaming: reducers stuck in pending state, not starting?

Asked: 2011-10-31 16:33:25

Tags: python hadoop mapreduce hadoop-streaming

I have a map-reduce job that was running fine until I started seeing failed map tasks like these:

attempt_201110302152_0003_m_000010_0    task_201110302152_0003_m_000010 worker1 FAILED  
Task attempt_201110302152_0003_m_000010_0 failed to report status for 602 seconds. Killing!
-------
Task attempt_201110302152_0003_m_000010_0 failed to report status for 607 seconds. Killing!
attempt_201110302152_0003_m_000010_1    task_201110302152_0003_m_000010 master  FAILED  
java.lang.RuntimeException: java.io.IOException: Spill failed
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:261)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.mapred.Child.main(Child.java:255)
Caused by: java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1029)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:592)
    at org.apache.hadoop.streaming.PipeMapRed$MROutputThread.run(PipeMapRed.java:381)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/spill11.out
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:381)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
    at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:121)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1392)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$1800(MapTask.java:853)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1344)

Now the reducers never start executing. Previously the reducers would begin copying data even while map tasks were still running, but now all I see is this:

11/10/31 03:35:12 INFO streaming.StreamJob:  map 95%  reduce 0%
11/10/31 03:44:01 INFO streaming.StreamJob:  map 96%  reduce 0%
11/10/31 03:51:56 INFO streaming.StreamJob:  map 97%  reduce 0%
11/10/31 03:55:41 INFO streaming.StreamJob:  map 98%  reduce 0%
11/10/31 04:04:18 INFO streaming.StreamJob:  map 99%  reduce 0%
11/10/31 04:20:32 INFO streaming.StreamJob:  map 100%  reduce 0%

I'm new to Hadoop MapReduce and don't know what could cause code that ran successfully before to start failing like this.

Please help.

Thanks.

2 Answers:

Answer 0 (score: 1)

You should look at mapred.task.timeout. If you have a lot of data and only a few machines to process it, your tasks may be timing out. You can set this value to 0 to disable the timeout entirely.
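With Hadoop Streaming, the timeout can be passed as a generic `-D` option on the job command line. A sketch (the jar path, input/output paths, and script names are placeholders, not taken from the question):

```shell
# Disable the per-task timeout (0 = never time out).
# Jar location and paths below are examples; adjust to your cluster.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -D mapred.task.timeout=0 \
    -input /user/me/input \
    -output /user/me/output \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py -file reducer.py
```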

Alternatively, if you can call context.progress() (or some equivalent) to signal that something is happening, the job won't time out.
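In a Streaming job there is no context.progress() to call directly; the documented equivalent is writing `reporter:status:<message>` lines to stderr from the mapper script. A minimal Python sketch (the word-count-style mapper body and the 10,000-record reporting interval are illustrative assumptions):

```python
import sys

def report_progress(message):
    # Hadoop Streaming interprets stderr lines of the form
    # "reporter:status:<message>" as task status updates,
    # which also reset the task's timeout counter.
    sys.stderr.write("reporter:status:%s\n" % message)
    sys.stderr.flush()

def mapper(lines):
    # Hypothetical word-count-style mapper, for illustration only:
    # emit each input line as a key with a count of 1, and report
    # progress every 10,000 records so long-running maps aren't killed.
    for i, line in enumerate(lines):
        if i % 10000 == 0:
            report_progress("processed %d records" % i)
        print("%s\t1" % line.rstrip("\n"))

if __name__ == "__main__":
    mapper(sys.stdin)
```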

Answer 1 (score: 0)

I ran into the same problem, and I did two things to fix it:

The first was to compress the mapper output, using mapred.output.compress=true. While your mapper runs, its output gets spilled (written) to disk, and sometimes that output has to be sent to a reducer on another machine. Compressing the output reduces network traffic and also reduces the disk space needed on the machine running the mapper.
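On the command line this becomes `-D` options. Note that on Hadoop versions of this era, mapred.compress.map.output=true is the property that compresses the intermediate (spilled/shuffled) map output specifically, while mapred.output.compress affects the final job output; the flags and codec below are a sketch, not the answerer's exact invocation:

```shell
# Compress intermediate map output to shrink spill files and shuffle traffic.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -D mapred.compress.map.output=true \
    -D mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
    ... # remaining -input/-output/-mapper/-reducer options as usual
```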

The second thing I did was increase the ulimits for the hdfs and mapred users. I added these lines to /etc/security/limits.conf:

mapred      soft    nproc       16384
mapred      soft    nofile      16384
hdfs        soft    nproc       16384
hdfs        soft    nofile      16384
hbase       soft    nproc       16384
hbase       soft    nofile      16384

This post has a more thorough explanation: http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/