Hadoop does not list any slave nodes

Asked: 2016-05-24 21:04:53

Tags: hadoop distributed-computing slave

I have set up a simple Hadoop cluster by following this guide.

However, I cannot see my slave nodes at http://master:50070.

I followed the guide all the way through starting the YARN MapReduce job tracker, and running jps on both the master and the slave nodes lists everything as expected.

In hadoop-hadoopuser-datanode-slave-1.log I see these messages repeated over and over:

2016-05-25 13:26:11,884 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-05-25 13:26:11,886 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-05-25 13:26:13,028 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:14,029 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:15,031 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:16,032 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:17,033 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:18,034 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:19,035 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:20,036 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:21,037 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:22,038 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.1.32:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-05-25 13:26:22,040 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/10.0.1.32:54310
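The retries above mean the DataNode cannot reach the NameNode's RPC port. A quick sketch of how this could be checked from the slave (commands assume `getent` and `nc` are available; the address and port come from the log above):

```shell
# On the slave: check how "master" resolves and whether the
# NameNode RPC port (10.0.1.32:54310 per the log) is reachable.
getent hosts master   # should print 10.0.1.32, not a loopback address
nc -zv master 54310   # should report the connection as succeeded
```

If `master` resolves to a loopback address, or the port is only open on 127.0.0.1 on the master, the DataNode will keep retrying exactly as shown in the log.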

What am I missing?

1 Answer:

Answer 0 (score: 1)

Solved this by removing the 127.0.1.1 line from /etc/hosts on both the master and the slave nodes.
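For reference, a sketch of what /etc/hosts might look like after the fix. The master IP matches the log above; the slave hostname and IP are assumptions for illustration:

```shell
# /etc/hosts on each node (hypothetical slave address; adjust to your network)
#
# Removed entry -- on Debian/Ubuntu the machine's own hostname is often
# mapped to 127.0.1.1, which can cause the NameNode to bind its RPC port
# to the loopback interface instead of the LAN address:
# 127.0.1.1   master

127.0.0.1   localhost
10.0.1.32   master
10.0.1.33   slave-1
```

With the 127.0.1.1 mapping gone, the hostname resolves to the LAN address on every node, so the NameNode listens on 10.0.1.32:54310 and the DataNodes can register.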