Error copying files to HDFS

Date: 2014-08-25 03:35:19

Tags: hadoop hdfs

The Hadoop cluster starts normally, and jps shows the DataNode and TaskTracker processes running. When I copy a file to HDFS, this is the error I get.

hduser@nn:~$ hadoop fs -put gettysburg.txt /user/hduser/getty/gettysburg.txt

Warning: $HADOOP_HOME is deprecated.
14/08/24 21:12:50 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:51 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:52 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:53 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:54 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:55 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:56 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:57 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:58 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:59 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to nn/10.10.1.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
hduser@nn:~$ 
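
The "Connection refused" on nn/10.10.1.1:54310 indicates that nothing is accepting connections on the NameNode RPC port. A quick way to confirm this (a sketch, assuming a Linux host where netstat is available; the port number comes from fs.default.name below):

hduser@nn:~$ netstat -tln | grep 54310    # no output means nothing is bound to the port
hduser@nn:~$ jps | grep NameNode          # no output means the NameNode process is not running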

I can ssh from the NN to the DNs and vice versa, and between the DNs.


I have changed /etc/hosts on the NN and all the DNs as shown below.

#127.0.0.1      localhost loghost localhost.project1.ch-geni-net.emulab.net
#10.10.1.1      NN-Lan NN-0 NN
#10.10.1.2      DN1-Lan DN1-0 DN1
#10.10.1.3      DN2-Lan DN2-0 DN2
#10.10.1.5      DN4-Lan DN4-0 DN4
#10.10.1.4      DN3-Lan DN3-0 DN3
10.10.1.1       nn
10.10.1.2       dn1
10.10.1.3       dn2
10.10.1.4       dn3
10.10.1.5       dn4
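
To verify that these entries actually take effect, name resolution can be checked from each node (a sketch; getent is standard on Linux):

hduser@nn:~$ getent hosts nn     # should print 10.10.1.1  nn
hduser@nn:~$ ping -c 1 dn1       # each short name should answer from its 10.10.1.x address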

My mapred-site.xml looks like this:

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://nn:54310</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class.</description>
</property>
</configuration>

The masters file under /usr/local/hadoop/conf:

hduser@nn:/usr/local/hadoop/conf$ vi masters 

#localhost
nn
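
For comparison, the conf/slaves file on the NN would normally list the datanodes, one per line (an assumed sketch for this topology; the post does not show this file):

#localhost
dn1
dn2
dn3
dn4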

hduser@dn1:~$ jps
9975 DataNode
10186 Jps
10070 TaskTracker
hduser@dn1:~$ 

hduser@nn:~$ jps
5979 JobTracker
5891 SecondaryNameNode
6159 Jps
hduser@nn:~$ 

What is the problem?

2 answers:

Answer 0 (score: 0):

Check the fs.default.name property in the core-site.xml file. The value should be hdfs://NN:port.
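
A minimal core-site.xml matching the values already shown in the question might look like this (a sketch, not the asker's actual file):

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://nn:54310</value>
</property>
</configuration>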

Answer 1 (score: 0):

Check the following (example commands are sketched after the list):

  1. core-site.xml - the HDFS URL it specifies - hdfs://ip:port
  2. Format the NameNode
  3. Check whether safe mode is on
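
A sketch of commands for steps 2 and 3 on a Hadoop 1.x cluster (warning: formatting the NameNode erases all HDFS metadata, so only do it on a fresh or disposable cluster):

hduser@nn:~$ hadoop namenode -format           # step 2: re-initializes the NameNode metadata
hduser@nn:~$ hadoop dfsadmin -safemode get     # step 3: reports whether safe mode is ON or OFF
hduser@nn:~$ hadoop dfsadmin -safemode leave   # forces the NameNode out of safe mode if needed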