Error when running Hadoop

Date: 2011-12-14 09:01:28

Tags: hadoop localhost

haduser@user-laptop:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input

11/12/14 14:21:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).

11/12/14 14:21:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).

11/12/14 14:21:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).

11/12/14 14:21:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).

11/12/14 14:21:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).

11/12/14 14:21:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).

11/12/14 14:21:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).

11/12/14 14:21:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).

11/12/14 14:21:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).

11/12/14 14:21:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).

Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

When I try to copy files from /tmp/input to /user/haduser/input, I get the above error, even though the file /etc/hosts contains an entry for localhost. When I run the jps command, neither the TaskTracker nor the NameNode is listed.

What could be the problem? Could someone please help me with this?

4 answers:

Answer 0 (score: 9):

I had a similar problem - Hadoop was actually binding to IPv6. So I added "export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true" to $HADOOP_HOME/conf/hadoop-env.sh

Hadoop was binding to IPv6 even though I had disabled IPv6 on the system. Once I added that line to the env file, it started working fine.

Hope this helps someone.

Answer 1 (score: 3):

Try to ssh to your local system using the IP, in this case:

$ ssh 127.0.0.1

Once you are able to ssh successfully, run the following command to see the list of open ports:

~$ lsof -i

Look for a listening connector with the name localhost:<PORTNAME> (LISTEN)

Copy this <PORTNAME> and replace the existing port number in the value of the fs.default.name property in core-site.xml in the hadoop conf folder.

Save core-site.xml; this should resolve the problem.

Answer 2 (score: 1):

The NameNode (NN) maintains the namespace of HDFS, and it must be running for file system operations on HDFS to work. Check the logs for the reason the NN did not start. The TaskTracker is not required for operations on HDFS; the NN and DN alone are sufficient. Check the http://goo.gl/8ogSk and http://goo.gl/NIWoK tutorials on how to set up Hadoop on single and multiple nodes.

Answer 3 (score: 1):

All of the files in bin are executables. Just copy the command and paste it into your terminal. Make sure the address is correct, i.e. you have to replace "user" with something. That will do the trick.