Hadoop nodes die (crash) after a while

Date: 2014-01-31 10:43:06

Tags: networking ubuntu hadoop cluster-computing

I have a Hadoop cluster of 16 nodes (Ubuntu 12.04 servers): 1 master and 15 slaves. They are connected over a private network, and the master also has a public IP (it belongs to both networks). When I run small jobs, i.e. small input and short processing time, everything works fine. But when I run larger jobs, e.g. with 7-8 GB of input data, my slave nodes start dying one after another.

From the web UI (http://master:50070/dfsnodelist.jsp?whatNodes=LIVE) I can see the "Last Contact" value start to increase, and from my cluster provider's web UI I can see that the node has crashed. Here is a screenshot of the node (I cannot scroll up):

[screenshot: console of the crashed node]

Another machine shows this error while HDFS is running but no job is:

BUG: soft lockup - CPU#7 stuck for 27s! [java:4072]

BUG: soft lockup - CPU#5 stuck for 41s! [java:3309]
ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
         res 40/00:02:00:08:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
ata2.00: status: { DRDY }

Here is another screenshot (which I cannot make any sense of):

[screenshot: console of a second node]

Here is the log of a crashed datanode (IP 192.168.0.9):

2014-02-01 15:17:34,874 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving blk_-2375077065158517857_1818 src: /192.168.0.7:53632 dest: /192.168.0.9:50010
2014-02-01 15:20:14,187 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for blk_-2375077065158517857_1818 java.io.EOFException: while trying to read 65557 bytes
2014-02-01 15:20:17,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_-2375077065158517857_1818 0 : Thread is interrupted.
2014-02-01 15:20:17,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for blk_-2375077065158517857_1818 terminating
2014-02-01 15:20:17,557 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-2375077065158517857_1818 received exception java.io.EOFException: while trying to read 65557 bytes
2014-02-01 15:20:17,560 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:296)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:340)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:404)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:582)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:404)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
    at java.lang.Thread.run(Thread.java:744)
2014-02-01 15:21:48,350 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.19:60853, bytes: 132096, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000018_0_1657459557_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-6962923875569811947_1279, duration: 276262265702
2014-02-01 15:21:56,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.19:60849, bytes: 792576, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000013_0_1311506552_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_4630218397829850426_1316, duration: 289841363522
2014-02-01 15:23:46,614 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call getProtocolVersion(org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol, 3) from 192.168.0.19:48460: output error
2014-02-01 15:23:46,617 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020 caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1756)
    at org.apache.hadoop.ipc.Server.access$2000(Server.java:97)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:780)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:844)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1472)
2014-02-01 15:24:26,800 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.9:36391, bytes: 10821100, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000084_0_-2100756773_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_496206494030330170_1187, duration: 439385255122
2014-02-01 15:27:11,871 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.20:32913, bytes: 462336, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000004_0_-1095467656_1, offset: 19968, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-7029660283973842017_1326, duration: 205748392367
2014-02-01 15:27:57,144 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.9:36393, bytes: 10865080, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000033_0_-1409402881_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_-8749840347184507986_1447, duration: 649481124760
2014-02-01 15:28:47,945 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded blk_887028200097641216_1396
2014-02-01 15:30:17,505 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.0.9:50010, dest: /192.168.0.8:58304, bytes: 10743459, op: HDFS_READ, cliID: DFSClient_attempt_201402011511_0001_m_000202_0_1200991434_1, offset: 0, srvID: DS-271028747-192.168.0.9-50010-1391093674214, blockid: blk_887028200097641216_1396, duration: 69130787562
2014-02-01 15:32:05,208 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-7029660283973842017_1326 to 192.168.0.8:50010
2014-02-01 15:32:55,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.9:50010, storageID=DS-271028747-192.168.0.9-50010-1391093674214, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-34479901

Here is how my mapred-site.xml is set up:

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
</property>

<property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
</property>

<property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
</property>

Each node has 8 CPUs and 8 GB of RAM. I know I have set mapred.child.java.opts too high, but the same job used to run with these settings and this data. I have set slowstart to 1.0, so reducers only start after all mappers have finished.
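(For reference, the "slowstart" setting mentioned above is, assuming the standard Hadoop 1.x property name, configured in mapred-site.xml like this; a value of 1.0 means reducers are scheduled only once every map has completed:)

<property>
    <name>mapred.reduce.slowstart.completed.maps</name>
    <value>1.0</value>
</property>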

Pinging some of the nodes shows a small percentage of packet loss, and ssh connections freeze for a while, but I don't know whether that is related. I have added the following line to /etc/security/limits.conf on every node:

hadoop hard nofile 16384

But that did not help either.
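(As a side note, a minimal sketch of limits.conf entries with both the soft and the hard limit raised — raising only the hard limit can leave the effective soft limit at its default, so the process may never see the higher value:)

hadoop soft nofile 16384
hadoop hard nofile 16384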

Solution: It turned out to be a memory problem after all. I had too many tasks running at once and the machines crashed. After they crashed and I rebooted them, the Hadoop job would not run even though I had set the correct number of mappers. The solution was to remove the bad datanodes (by decommissioning them) and then include them again. That is what I did, and everything worked perfectly without losing any data:

How do I correctly remove nodes in Hadoop?
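(For anyone reproducing this on Hadoop 1.x, a rough sketch of the decommissioning approach, not taken verbatim from the linked answer: point the namenode at an exclude file in hdfs-site.xml, list the hostnames of the bad datanodes in that file, and run hadoop dfsadmin -refreshNodes; once the web UI shows them as decommissioned, empty the file and refresh again to re-include them. The file path below is only an example:)

<property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/hadoop/conf/excludes</value>
</property>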

And, of course, set a correct maximum number of map and reduce tasks per node.
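(One possible combination for an 8 GB / 8-core node, assuming nothing else heavy runs on it: 3 map slots plus 3 reduce slots at 1 GB of heap each is about 6 GB of task heap, leaving headroom for the DataNode, the TaskTracker and the OS. These numbers are an illustration, not necessarily what was ultimately used:)

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
</property>

<property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>3</value>
</property>

<property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>3</value>
</property>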

2 answers:

Answer 0 (score: 5):

Judging by your map settings you are running out of memory: you give each task 2 GB of heap and allow 4 maps (plus 4 reduces), which together can exceed the 8 GB of RAM on a node.

Try running the same job with a 1 GB xmx; that will definitely work.

If you want to use your cluster efficiently, set the xmx according to the block size of your files.

If your blocks are 128 MB, then 512 MB is enough.
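(A concrete sketch of that advice, keeping the 4 map + 4 reduce slots from the question: 8 tasks at 512 MB each is roughly 4 GB of task heap, which fits comfortably on an 8 GB node:)

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
</property>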

Answer 1 (score: 3):

Does the job have a combiner step between map and reduce? I ran into an issue where, for a memory-heavy task, a high-memory map and the combine step were running at the same time on the same node. With your configuration, if 2 maps and 2 combines are happening at once and you hold large objects in memory, you could use up the 8 GB of RAM. Just a thought.