Unable to access a directory created on HDFS after stopping and restarting all Hadoop daemons

Asked: 2013-07-11 06:24:57

Tags: hadoop hdfs

I am new to Hadoop and have run into a problem I cannot find a solution to. It is as follows:

**Created a directory on HDFS using the command below:**

    bin/hadoop fs -mkdir /user/abhijit/apple_poc

**Checked that the directory had been created:**

    bin/hadoop fs -ls

Output:

    drwxr-xr-x   - abhijit supergroup          0 2013-07-11 11:09 /user/abhijit/apple_poc

**Stopped all Hadoop daemons:**

    bin/stop-all.sh

**Restarted all the daemons:**

    bin/start-all.sh

**Checked again whether the directory created above still exists:**

    bin/hadoop fs -ls

Output:
    2013-07-11 11:37:57.304 java[3457:1903] Unable to load realm info from SCDynamicStore
    13/07/11 11:37:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:37:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused

Please clarify:

  1. I am really not sure what I am doing wrong, or whether there is something in the property files that needs to change.

  2. HDFS stores user data under /user/<username>/ by default; should I change this default directory to fix my problem?

  3. Every time, I have to format the namenode to get around this, but after formatting, the directory created above is lost (see the config sketch after this list).

  4. Please tell me what the underlying problem is. Any help is much appreciated.
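As context for questions 2 and 3: on Hadoop 1.x the namenode keeps its metadata under `hadoop.tmp.dir` (by default `/tmp/hadoop-${user.name}`, with `dfs.name.dir` beneath it), so if the OS cleans `/tmp`, for example on reboot, the namenode will not start until it is re-formatted, and everything previously in HDFS is lost. A minimal `conf/core-site.xml` sketch that points it at a persistent location instead (the path here is hypothetical):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <!-- hypothetical persistent path; any directory outside /tmp works -->
        <value>/home/abhijit/hadoop-tmp</value>
      </property>
    </configuration>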

Thanks, Abhijit

1 answer:

Answer 0 (score: 2)

This error can occur for several reasons. I have been playing with Hadoop for a while and have hit this problem many times, with a different cause each time:

  1. The master node is not running -> check the logs.
  2. The hosts file does not contain the correct IP [after setting a hostname, put its IP in the hosts file so that other nodes can reach it].
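A minimal diagnostic sketch for both checks, assuming a pseudo-distributed 1.x setup with logs in the default $HADOOP_HOME/logs directory (paths and the sample IP/hostname pair are illustrative):

    # 1. Is the master actually running? jps lists the local Java daemons;
    #    NameNode must appear, otherwise nothing is listening on port 9000.
    jps
    # If NameNode is missing, its log usually says why (for example a missing
    # or corrupt name directory after /tmp was cleaned):
    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

    # 2. Does the hostname resolve correctly? /etc/hosts should map the
    #    master's hostname to an address the other nodes can reach, e.g.:
    cat /etc/hosts
    #   127.0.0.1     localhost
    #   192.168.0.10  master    <- illustrative IP/hostname pair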