Hadoop - Pseudo-Distributed Operation

Time: 2013-11-28 01:19:57

Tags: hadoop

I am trying to copy the file quangle.txt from my local filesystem into Hadoop with the following command:

testuser@ubuntu:~/Downloads/hadoop/bin$ ./hadoop fs -copyFromLocal Desktop/quangle.txt hdfs://localhost/testuser/quangle.txt

13/11/28 06:35:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/11/28 06:35:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
copyFromLocal: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused

I tried pinging 127.0.0.1 and got a response. Please advise.

6 Answers:

Answer 0 (score: 3):

Add the correct port to the file path after localhost:

hdfs://localhost:9000/testuser/quangle.txt
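
For example, assuming 9000 is the port configured for fs.default.name in core-site.xml (use whatever port your configuration actually specifies), the original command becomes:

./hadoop fs -copyFromLocal Desktop/quangle.txt hdfs://localhost:9000/testuser/quangle.txt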

Answer 1 (score: 2):

It looks like your name node isn't running - try running the jps command and check whether NameNode is listed among the running services (if it isn't, you may need to run ps axww | grep NameNode in case the NameNode was started by / under another user).

Does sudo netstat -atnp | grep 8020 produce any output?

If the name node refuses to start, copy the name node logs into your original question (or post a new question - but search for the error first to see whether anyone else has run into this problem).
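
A minimal check sequence along those lines (illustrative only, assuming a tarball install with logs under $HADOOP_HOME/logs):

jps                                                    # should list NameNode
sudo netstat -atnp | grep 8020                         # is anything listening on the NameNode port?
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # look for startup errors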

Answer 2 (score: 1):

Try running jps to see which Java processes are currently running.

Are all the Hadoop processes running, especially the NameNode?

If so, you should get output like this (with different process IDs):

10015 JobTracker
9670 TaskTracker
9485 DataNode
10380 Jps
9574 SecondaryNameNode
9843 NameNode
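
If NameNode is missing from that list, a common next step in pseudo-distributed mode (assuming HDFS has already been formatted and you are on Hadoop 1.x, where these scripts live under bin/) is to start the daemons:

bin/start-dfs.sh     # starts NameNode, SecondaryNameNode and DataNode
bin/start-mapred.sh  # starts JobTracker and TaskTracker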

Answer 3 (score: 0):

I think you can use hadoop fs -put ~/Desktop/quangle.txt /testuser; after the copy, you can look for the file in the /testuser directory with hadoop fs -ls /testuser.
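
Run from the shell, that would be (assuming the /testuser directory already exists in HDFS):

hadoop fs -put ~/Desktop/quangle.txt /testuser
hadoop fs -ls /testuser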

Answer 4 (score: 0):

Create the target directory first with the command hadoop fs -mkdir testuser (and likewise any others you need), then try again - it worked for me.
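
For example, creating the target directory first and then retrying the copy (paths as in the question):

hadoop fs -mkdir /testuser
hadoop fs -copyFromLocal ~/Desktop/quangle.txt /testuser/quangle.txt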

Answer 5 (score: 0):

There may be a problem with your pseudo-distributed mode setup. It should be configured in this order (a minimal command sketch follows the list):

  1. Fill in the configuration files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.
  2. Configure SSH
  3. Format the HDFS filesystem
  4. Start and stop the daemons
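
A minimal sketch of those steps on a single node (assuming Hadoop 2.x with HADOOP_HOME set; property names, ports and script locations may differ in your version):

# 1. In core-site.xml point HDFS at localhost, e.g.
#    <property><name>fs.defaultFS</name><value>hdfs://localhost:9000</value></property>
# 2. Set up passwordless SSH to localhost
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# 3. Format HDFS (only once; this erases existing HDFS data)
$HADOOP_HOME/bin/hdfs namenode -format
# 4. Start the daemons
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh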