Hadoop connection error with put / copyFromLocal

Date: 2012-10-26 16:02:34

Tags: hadoop connection localhost connect

I am following this tutorial to install Hadoop: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ and I am now stuck at the "Copy local example data to HDFS" step.
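For context, that step copies the tutorial's downloaded example text files into HDFS with a command roughly like the following (the /tmp/gutenberg and /user/hduser/gutenberg paths are the tutorial's example and will differ on other setups):

bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg

or, equivalently, with put:

bin/hadoop dfs -put /tmp/gutenberg /user/hduser/gutenberg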

The connection error I get:

12/10/26 17:29:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
12/10/26 17:29:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
12/10/26 17:29:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
12/10/26 17:29:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
12/10/26 17:29:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
12/10/26 17:29:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
12/10/26 17:29:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
12/10/26 17:29:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
12/10/26 17:29:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
12/10/26 17:29:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

This question is pretty much the same as this one: Errors while running hadoop

The thing is, I have already disabled IPv6 as described there and in the tutorial above, but it did not help. Is there something I am missing?
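For reference, the usual way to disable IPv6 on Ubuntu, and the approach the tutorial describes as far as I recall, is to add the following lines to /etc/sysctl.conf and reboot (a sketch; your own configuration may differ):

# disable IPv6 system-wide
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1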

EDIT:

I repeated the tutorial on a second machine with a fresh Ubuntu installation and compared it step by step. It turned out there were some errors in hduser's .bashrc configuration. After fixing that, it worked fine...
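For anyone hitting the same issue: the tutorial's ~/.bashrc for hduser boils down to a few exports along these lines (the Java and Hadoop paths are the tutorial's examples and depend on your installation):

# Hadoop installation directory
export HADOOP_HOME=/usr/local/hadoop

# JDK used by Hadoop (the path varies between machines)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# make the hadoop command available on the PATH
export PATH=$PATH:$HADOOP_HOME/bin

A typo or missing export here can be enough to keep the start scripts from bringing up the NameNode, which then produces exactly the Connection refused error above.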

3 Answers:

Answer 0 (score: 4)

I get exactly that error message if I try hadoop fs <anything> while the DataNode/NameNode are not running, so I would guess the same is happening to you.

Type jps in a terminal. If everything is running, the output should look something like this:

16022 DataNode
16524 Jps
15434 TaskTracker
15223 JobTracker
15810 NameNode
16229 SecondaryNameNode

I bet your DataNode or NameNode is not running. If anything is missing from the jps output, start it again (see the sketch below).
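If something is missing, the Hadoop 1.x scripts in $HADOOP_HOME/bin can start the daemons all at once or individually, for example (paths assume the tutorial's /usr/local/hadoop layout):

# start everything (HDFS + MapReduce daemons)
/usr/local/hadoop/bin/start-all.sh

# or start just the missing pieces
/usr/local/hadoop/bin/hadoop-daemon.sh start namenode
/usr/local/hadoop/bin/hadoop-daemon.sh start datanode

Afterwards, run jps again; if a daemon still does not appear, its log under $HADOOP_HOME/logs usually explains why.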

Answer 1 (score: 0)

After completing the entire configuration, run this command:

hadoop namenode -format

and start all services with this command:

start-all.sh

This will solve your problem.
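One caveat worth adding to this answer: hadoop namenode -format erases everything stored in HDFS, so it is really only appropriate on a fresh installation. After start-all.sh, it is also worth confirming that the daemons actually came up, for example:

# all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) should be listed
jps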

Answer 2 (score: 0)

  1. Go to etc/hadoop/core-site.xml and check the value of fs.default.name. It should be hdfs://localhost:54310 (see the sketch below this answer).
  2. After the whole configuration, issue this command:
     hadoop namenode -format
  3. Then start all the services with this command:
     start-all.sh

This will solve your problem.

Your NameNode may also be stuck in safe mode. Run bin/hadoop dfsadmin -safemode leave (or bin/hdfs dfsadmin -safemode leave), then repeat steps 2 and 3.
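For reference, a minimal core-site.xml matching the tutorial's port 54310 looks roughly like this (fs.default.name is the Hadoop 1.x property name; newer releases call it fs.defaultFS):

<configuration>
  <!-- URI of the NameNode that clients such as "hadoop fs" connect to -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>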