hdfs dfs -copyFromLocal: DataNode connection refused

Date: 2016-03-14 09:15:14

Tags: hadoop

I created a Hadoop cluster with two nodes:

h01: the host machine - Ubuntu Desktop 15.04

h02: a virtual machine running under VMware on the host - Ubuntu Server 14.04

The jps command shows the NameNode and SecondaryNameNode on h01 and the DataNode on h02, and the NameNode's web UI lists the DataNode, so the two have connected successfully.
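
For reference, a healthy layout for this setup would show something like the jps output below on each node (the process IDs here are illustrative):

    # On h01 (master)
    $ jps
    2817 NameNode
    3043 SecondaryNameNode
    3210 Jps

    # On h02 (slave)
    $ jps
    1923 DataNode
    2054 Jps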

The problem is that when I run the command:

hdfs dfs -copyFromLocal input /

it fails with the following error:

    16/03/14 14:29:55 INFO hdfs.DFSClient: Exception in createBlockOutputStream
    java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1610)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
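
The ConnectException means the client could not open a TCP connection to the DataNode's data-transfer port (50010 by default in Hadoop 2.x). Assuming the default port, reachability can be tested quickly:

    # On h01: can the DataNode's transfer port be reached?
    telnet h02 50010        # or: nc -zv h02 50010

    # On h02: is the DataNode listening, and on which address?
    sudo netstat -tlnp | grep 50010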

I am new to Hadoop; any help would be appreciated. Below are my configuration files:

File: /etc/hosts, machine: h01

127.0.0.1 localhost
127.0.1.1 hitesh-SVE15136CNB
192.168.93.128 h02
172.16.87.68 h01
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

File: /etc/hosts, machine: h02

127.0.0.1   localhost
127.0.1.1   ubuntu
172.16.87.68 h01
192.168.93.128 h02
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
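
Given the 127.0.1.1 entries in these files, it is worth checking what each hostname actually resolves to on both machines; on Ubuntu that loopback alias can make a daemon bind to or advertise the wrong address. A quick check:

    # Run on both h01 and h02
    getent hosts h01 h02    # what the resolver returns for each node
    hostname -i             # the address this machine reports for itself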

File: core-site.xml, machine: same on both

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop</value>
    </property>
</configuration>

File: hdfs-site.xml, machine: same on both

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

The masters file contains h01 and the slaves file contains h02. I have made sure passwordless SSH works between the two machines.
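
Passwordless SSH can be verified end to end by running a remote command non-interactively:

    # From h01: should print h02's Java processes without asking for a password
    # (assumes jps is on h02's PATH for non-interactive shells)
    ssh h02 jps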

EDIT:

I have found the problem. In the Datanodes tab of the NameNode UI, the correct DataNode shows up but with the wrong IP (it shows the NameNode's IP instead of the DataNode's). I tried installing the NameNode in another virtual machine, and that works. But I still cannot see what is wrong with the configuration above. Please help.
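
To see exactly which address the DataNode registered with, hdfs dfsadmin -report run on h01 prints each DataNode's hostname and IP. If the DataNode is registering with the wrong address, one possible mitigation, assuming Hadoop 2.x (check these keys against your version's hdfs-default.xml), is to have the cluster address DataNodes by hostname rather than by reported IP, in hdfs-site.xml on both machines:

    <!-- Sketch only: resolve DataNodes via their hostnames (and /etc/hosts)
         instead of the IP addresses they report when registering -->
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.use.datanode.hostname</name>
        <value>true</value>
    </property>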

1 Answer:

Answer 0 (score: 0)

See the URL below; it is useful:

https://wiki.apache.org/hadoop/ConnectionRefused
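
Broadly, the checks on that page come down to: the service is actually running, the hostname resolves to a non-loopback address, and nothing is blocking the port. On Ubuntu the firewall part can be checked with:

    # On h02: make sure a firewall is not rejecting connections to the DataNode
    sudo ufw status
    sudo iptables -L -n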