Datanode not starting when running TeraSort

Date: 2015-03-13 01:42:38

Tags: hadoop mapreduce hdfs bigdata master-slave

I have 4 slaves (including the master). When I run TeraSort, I get the following error on one of my slaves. The DataNode was up before the run, but during the run one of my DataNodes goes down and the computation is completed by the remaining 3 slaves:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-5677299757617064640_1010 received exception java.io.IOException: Connection reset by peer

2015-03-12 16:42:06,835 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020):DataXceiver

java.io.IOException: Connection reset by peer (first error, same log, same run)

2015-03-12 16:42:09,809 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020): Exception writing block blk_2791945666924613489_1015 to mirror 192.168.0.112:50010

java.io.IOException: Broken pipe (second error)

2015-03-12 16:42:09,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_2791945666924613489_1015 received exception java.io.EOFException: while trying to read 65557 bytes (third error, same run)

I'm stuck here. Any help is appreciated!

TaskTracker log:

 WARN org.apache.hadoop.mapred.TaskTracker: Failed validating JVM
java.io.IOException: JvmValidate Failed. Ignoring request from task: attempt_201503121637_0001_m_000040_0, with JvmId: jvm_201503121637_0001_m_-2136609016
        at org.apache.hadoop.mapred.TaskTracker.validateJVM(TaskTracker.java:3278)
        at org.apache.hadoop.mapred.TaskTracker.statusUpdate(TaskTracker.java:3348)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2015-03-12 16:43:02,577 WARN org.apache.hadoop.mapred.DefaultTaskController: Exit code from task is : 143
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.DefaultTaskController: Output from DefaultTaskController's launchTask follows:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.TaskController:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201503121637_0001_m_1555953113 exited with exit code 143. Number of tasks it ran: 1
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201503121637_0001_m_000054_0 task's state:UNASSIGNED
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201503121637_0001_m_000048_0
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201503121637_0001_m_000054_0 which needs 1 slots
2015-03-12 16:43:02,600 INFO org.apache.hadoop.mapred.TaskTracker: TaskLauncher : Waiting for 1 to launch attempt_201503121637_0001_m_000054_0, currently we have 0 free slots
2015-03-12 16:43:03,618 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201503121637_0001_m_1496188144 given task: attempt_201503121637_0001_m_000051_0

2 Answers:

Answer 0 (score: 0):

The TaskTracker logs are more descriptive. Can you show what is in them?

Also, check whether the servers are running on the correct ports.

You can try this: copy the Hadoop core jar from a working datanode to the failing datanode, replacing the one there, and then restart the MapReduce services.
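
A rough sketch of what that could look like (the Hadoop 1.x layout, $HADOOP_HOME, the hduser account, and the hostname failing-node are assumptions; adjust them to your installation):

    # From the working datanode, copy the core jar over the one on the failing node
    scp $HADOOP_HOME/hadoop-core-*.jar hduser@failing-node:$HADOOP_HOME/
    # On the failing node, restart the daemons (Hadoop 1.x helper script)
    $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
    $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
    $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker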

One more thing to check: run netstat on a working datanode to see which port the TaskTracker server is listening on, then verify that the TaskTracker service on the failing node is running on that same port.

I believe the default TaskTracker port is 50060.
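
For example, one way to compare the two nodes (the netstat flags shown are the common Linux ones, and 50060 is only the usual default; check your mapred-site.xml if you have overridden it):

    # On a working datanode: which ports are the Hadoop daemons listening on?
    netstat -tlnp | grep java
    # Or look for the TaskTracker HTTP port specifically
    netstat -tln | grep 50060
    # Run the same commands on the failing node and compare the output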

Since the ports are fine: a "connection reset by peer" happens when a request made on the reduce side is not completed or its result is truncated. It can also occur when the right file cannot be found (possibly a permissions issue).

Answer 1 (score: 0):

I solved the problem. The issue was that I was SSHing into my slaves as root, and the communication between the JobTracker and the TaskTrackers is too frequent, hence the error (connection reset by peer). I set up passwordless SSH between the master and the slaves, and now it works fine. (You need to SSH as hduser, or as a user created in the hadoop group.)
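
For reference, a minimal sketch of setting up passwordless SSH as the Hadoop user (the hduser account and the slave1 hostname are placeholders; repeat the last steps for every slave):

    # On the master, as the Hadoop user
    su - hduser
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   # key pair with an empty passphrase
    ssh-copy-id hduser@slave1                  # copy the public key to each slave
    ssh hduser@slave1                          # verify login works without a password prompt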

Thanks, Sahitya, for your time and help! Much appreciated!

-Vinod