Hadoop command: hadoop fs -ls keeps retrying to connect to the server?

Time: 2014-03-19 07:01:16

Tags: hadoop hdfs hadoop2

When I run hadoop fs -ls, I get the following error message:

deepak@deepak:~$ hadoop fs -ls
14/03/19 12:18:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/03/19 12:18:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

Output of hadoop namenode -format:

deepak@deepak:~/programs/hadoop-1.2.0/bin$ hadoop namenode -format
14/03/19 14:11:22 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = deepak/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
14/03/19 14:11:22 INFO util.GSet: Computing capacity for map BlocksMap
14/03/19 14:11:22 INFO util.GSet: VM type       = 32-bit
14/03/19 14:11:22 INFO util.GSet: 2.0% max memory = 932184064
14/03/19 14:11:22 INFO util.GSet: capacity      = 2^22 = 4194304 entries
14/03/19 14:11:22 INFO util.GSet: recommended=4194304, actual=4194304
14/03/19 14:11:23 INFO namenode.FSNamesystem: fsOwner=deepak
14/03/19 14:11:23 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/19 14:11:23 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/19 14:11:23 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/19 14:11:23 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/19 14:11:23 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/03/19 14:11:23 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/03/19 14:11:23 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/03/19 14:11:24 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO common.Storage: Storage directory /tmp/hadoop-deepak/dfs/name has been successfully formatted.
14/03/19 14:11:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at deepak/127.0.1.1
************************************************************/

2 Answers:

Answer 0 (Score: 3):

The best way to resolve this issue is to:

Check whether the Hadoop daemons are running correctly, using the jps command.

Format the namenode:

bin/hadoop namenode -format
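
After reformatting, the daemons have to be (re)started before hadoop fs -ls can reach localhost:9000. A minimal sketch for this Hadoop 1.x pseudo-distributed setup; the process IDs below are only illustrative:

deepak@deepak:~/programs/hadoop-1.2.0/bin$ ./start-all.sh   # starts the HDFS and MapReduce daemons; start-dfs.sh is enough for HDFS only
deepak@deepak:~/programs/hadoop-1.2.0/bin$ jps              # NameNode must appear here, otherwise the client keeps retrying
2501 NameNode
2643 DataNode
2787 SecondaryNameNode
2876 JobTracker
3020 TaskTracker
3115 Jps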

More information here:

http://www.77-thoughts.com/hadoop-info-ipc-client-retrying-connect-to-server-localhost127-0-0-19000/

Also, you can set a different HDFS directory in core-site.xml (under $HADOOP_CONF_DIR); see the sketch below.
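
The log above shows the namenode data being written under /tmp/hadoop-deepak/dfs/name, which is the Hadoop 1.x default; many systems clear /tmp on reboot, which can leave the NameNode unable to start and the client stuck retrying. A sketch of core-site.xml for a pseudo-distributed setup; the path /home/deepak/hdfs-tmp is only an example:

<?xml version="1.0"?>
<!-- $HADOOP_CONF_DIR/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value> <!-- must match the address the client is retrying -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/deepak/hdfs-tmp</value> <!-- example path: keeps HDFS data out of /tmp -->
  </property>
</configuration>

Note that after changing hadoop.tmp.dir you need to run bin/hadoop namenode -format again and restart the daemons.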

Answer 1 (Score: 2):

Can you check your Namenode status? Run 'jps' on the namenode machine and see whether NameNode is listed. Most likely the Namenode has failed.
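
A quick way to check, assuming the default Hadoop 1.x log layout (the log file name below is inferred from the user and the host = deepak shown in the STARTUP_MSG above; adjust if your layout differs):

deepak@deepak:~$ jps   # NameNode should be listed; if it is missing, it failed to start or was never started
deepak@deepak:~$ tail -n 50 ~/programs/hadoop-1.2.0/logs/hadoop-deepak-namenode-deepak.log   # the last lines usually show why the NameNode exited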