Why is my datanode running in the hadoop cluster, but I still can't put files into hdfs?

Date: 2015-07-21 14:21:57

Tags: ubuntu hadoop

When I run jps on the namenode:

stillily@localhost:~$ jps
3669 SecondaryNameNode
3830 ResourceManager
3447 NameNode
4362 Jps

When I run jps on the datanode:

stillily@localhost:~$ jps
3574 Jps
3417 NodeManager
3292 DataNode

But when I put a file into HDFS:

stillily@localhost:~$ hadoop fs  -put txt hdfs://hadoop:9000/txt
15/07/21 22:08:32 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at .......
put: File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
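
The message "There are 0 datanode(s) running" reflects the namenode's view, not the state of the datanode machine. One way to confirm what the namenode actually sees (a standard HDFS admin command, run from any node):

hdfs dfsadmin -report

If the report lists no live datanodes while jps on the datanode still shows a DataNode process, the datanode is running but has never managed to register with the namenode.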

I noticed that there is no "VERSION" file on the datanode machine; no matter how many times I run "hadoop namenode -format", a VERSION file is created only on the namenode side, never on the datanode.
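
For reference, with the default hadoop.tmp.dir the VERSION files live under /tmp/hadoop-<user> (the paths below assume Hadoop's defaults, not necessarily this cluster's config):

# Created on the namenode by "hadoop namenode -format":
cat /tmp/hadoop-stillily/dfs/name/current/VERSION
# Created on the datanode only after it successfully registers
# with the namenode; its clusterID must match the namenode's:
cat /tmp/hadoop-stillily/dfs/data/current/VERSION

So a missing VERSION file on the datanode points to failed registration with the namenode, not to a missing format step.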

BTW, this is on Ubuntu.

1 answer:

Answer 0 (score: 0)

Now I know the cause: the VM's IP had changed. I had updated /etc/hosts on the namenode, but not on the datanode.
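
A minimal sketch of that fix, assuming the cluster hostname "hadoop" from the put command above and a placeholder IP (the real address is whatever the VM now has):

# /etc/hosts -- the same entry must be present on the namenode AND every datanode
192.168.1.100   hadoop

Once the hosts file agrees on every machine, restarting HDFS (for example with stop-dfs.sh and then start-dfs.sh) lets the datanode re-register with the namenode, at which point its VERSION file finally gets created.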