Hadoop copyFromLocal DataStreamer Exception

Posted: 2014-06-17 11:49:56

Tags: hadoop

I am using Hadoop 0.20.203 on a cluster whose nodes are numbered 0 through 24. Node 0 (cluster0) serves as the NameNode, and all of the other nodes currently serve as DataNodes.
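
For reference, under Hadoop 0.20.x a layout like this is normally declared in the conf files. Below is a minimal sketch, assuming the hostnames are cluster0 through cluster24 and that the NameNode listens on port 9000 (both assumptions, not confirmed by anything above):

conf/core-site.xml (identical on every node):
    <configuration>
      <property>
        <!-- the NameNode URI that clients and DataNodes connect to -->
        <name>fs.default.name</name>
        <value>hdfs://cluster0:9000</value>
      </property>
    </configuration>

conf/slaves (on the NameNode, one DataNode hostname per line):
    cluster1
    cluster2
    ...
    cluster24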

I am currently trying to run the WordCount example, but when I try to -copyFromLocal the input file into the DFS, the following messages are printed:

aqjune@cluster0:~>> $HADOOP_HOME/bin/hadoop dfs -copyFromLocal pg132.txt /user/aqjune/input/pg132.txt
14/06/17 19:54:01 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:01 INFO hdfs.DFSClient: Abandoning block blk_-7530678618792869516_1003
14/06/17 19:54:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:07 INFO hdfs.DFSClient: Abandoning block blk_-7462751912508683911_1003
14/06/17 19:54:13 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:13 INFO hdfs.DFSClient: Abandoning block blk_252255837066920011_1003
14/06/17 19:54:19 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
14/06/17 19:54:19 INFO hdfs.DFSClient: Abandoning block blk_4030900909035905642_1003
14/06/17 19:54:25 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3002)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

14/06/17 19:54:25 WARN hdfs.DFSClient: Error Recovery for block blk_4030900909035905642_1003 bad datanode[0] nodes == null
14/06/17 19:54:25 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/aqjune/input/pg132.txt" - Aborting...
copyFromLocal: Connection refused
14/06/17 19:54:25 ERROR hdfs.DFSClient: Exception closing file /user/aqjune/input/pg132.txt : java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:406)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:3028)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2983)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)

Afterwards, only an empty file was created:

aqjune@grey0:~/hadoop>> bin/hadoop dfs -lsr /
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:45 /user
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:45 /user/aqjune
drwxr-xr-x   - aqjune supergroup          0 2014-06-17 19:54 /user/aqjune/input
-rw-r--r--   1 aqjune supergroup          0 2014-06-17 19:54 /user/aqjune/input/pg132.txt
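
For context, the "Connection refused" is thrown while the DFS client opens a block-write connection to a DataNode, not to the NameNode; the NameNode itself answered, which is why the empty entry for pg132.txt exists at all. A quick way to check whether the DataNode processes are up and reachable (the cluster1 hostname and the default data-transfer port 50010 are assumptions):

    # on each DataNode: is a DataNode JVM running at all?
    jps

    # from the NameNode: how many live DataNodes does HDFS report?
    bin/hadoop dfsadmin -report

    # raw connectivity check against one DataNode's data-transfer port
    nc -z cluster1 50010

    # DataNode-side errors, if any, end up in its log file
    tail -n 50 logs/hadoop-aqjune-datanode-*.log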

I can't figure out the cause of this problem. Could I get some hints?

0 Answers:

No answers yet.