I can start Hadoop successfully, but the DataNode [slave] cannot connect to the NameNode [master]:
2016-11-09 16:00:15,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.101:9000
2016-11-09 16:00:21,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-09 16:00:22,965 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Details of /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.101 master
192.168.1.102 slave1
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
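For reference, the value the running configuration actually resolves for fs.defaultFS can be queried with hdfs getconf (assuming the Hadoop binaries are on the PATH of both nodes):
$ hdfs getconf -confKey fs.defaultFS
Running this on the slave as well confirms the DataNode is pointing at hdfs://master:9000.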
and hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/volume/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/volume/datanode</value>
</property>
</configuration>
Answer 0: (score: 0)
1) Check whether the firewall is restricting the port:
sudo iptables -L
If it is, flush the offending rules.
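For example, iptables -F clears the rules (note: this flushes every chain, so re-add any rules you still need afterwards):
$ sudo iptables -F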
To open port 9000:
$ sudo iptables -A INPUT -p tcp -m tcp --dport 9000 -j ACCEPT
$ sudo /etc/init.d/iptables save
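If the host runs firewalld rather than the iptables service (typical on CentOS 7 and later; adjust to your distribution), a sketch of the equivalent would be:
$ sudo firewall-cmd --permanent --add-port=9000/tcp
$ sudo firewall-cmd --reload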
2) Check the NameNode logs; look in /var/log/hadoop.
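For example (the log file name varies by distribution and by the user Hadoop runs as, so the pattern below is an assumption), inspect the end of the NameNode log and verify which address port 9000 is actually bound to:
$ tail -n 100 /var/log/hadoop/hadoop-*-namenode-*.log
$ ss -tlnp | grep 9000
If port 9000 is bound to 127.0.0.1 instead of 192.168.1.101, the NameNode is only listening locally, and remote DataNodes will see exactly these retry errors.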