I'm new to Hadoop. I start Hadoop with the following command ...
[gpadmin@BigData1-ahandler root]$ /usr/local/hadoop-0.20.1/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-namenode-BigData1-ahandler.out
localhost: starting datanode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-datanode-BigData1-ahandler.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-secondarynamenode-BigData1-ahandler.out
starting jobtracker, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-jobtracker-BigData1-ahandler.out
localhost: starting tasktracker, logging to /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-tasktracker-BigData1-ahandler.out
When I try to `-cat` the output from the directory below, I get the error "No node available". What does this error mean, and how can I fix it or start debugging it?
[gpadmin@BigData1-ahandler root]$ hadoop fs -cat output/d*/part-*
13/11/13 15:33:09 INFO hdfs.DFSClient: No node available for block: blk_-5883966349607013512_1099 file=/user/gpadmin/output/d15795/part-00000
13/11/13 15:33:09 INFO hdfs.DFSClient: Could not obtain block blk_-5883966349607013512_1099 from any node: java.io.IOException: No live nodes contain current block
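A reasonable first debugging step (a sketch, assuming the single-node setup and install path shown above) is to check which daemons are actually running and whether any datanodes have registered with the namenode:

```shell
# List the running Hadoop Java daemons; for the start-all.sh run above you
# should see NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker.
jps

# Ask the namenode how many datanodes are live. A report showing
# "Datanodes available: 0" would explain "No live nodes contain current block".
hadoop dfsadmin -report

# Inspect the datanode log for registration/connection errors. The .log file
# sits next to the .out file printed by start-all.sh above.
tail -n 50 /usr/local/hadoop-0.20.1/logs/hadoop-gpadmin-datanode-BigData1-ahandler.log
```

If `jps` shows no DataNode process, or the report shows zero live datanodes, the block-read error is expected: the namenode simply has nowhere to fetch the block from.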
Answer (score: 0)
This happens when you start the datanode before the namenode.

When the datanode starts before the namenode is up, the datanode service tries to check in with the namenode and fails with "namenode not found". Then, once the namenode does start, it has no datanodes checked in, so it cannot find the node holding the blocks you are trying to access.

You should look through the start-all.sh script and make sure the namenode is started before the datanode.
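One way to enforce that ordering by hand (a sketch using the per-daemon scripts that ship with Hadoop 0.20.x, assuming the same install path as above) is a clean restart that brings the namenode up first:

```shell
# Stop everything cleanly first.
/usr/local/hadoop-0.20.1/bin/stop-all.sh

# Start the namenode first, then the datanodes, so each datanode
# can register with a namenode that is already listening.
/usr/local/hadoop-0.20.1/bin/hadoop-daemon.sh start namenode
/usr/local/hadoop-0.20.1/bin/hadoop-daemons.sh start datanode

# Confirm the datanode has checked in before reading any files.
hadoop dfsadmin -report
```

Once `dfsadmin -report` shows at least one live datanode, the `hadoop fs -cat` command from the question should be able to locate the block.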