I am trying to set up a multi-node cluster (Hadoop 1.0.4), and all the daemons come up. It is a 2-node cluster with one master and one slave, and I have configured only the slave as a datanode.
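For context, the conf files on the master look roughly like this (the hostnames are placeholders for my actual machine names):

conf/masters:
master

conf/slaves:
slave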
I can see all the daemons running: the NameNode, JobTracker, and SecondaryNameNode on the master, and the DataNode and TaskTracker on the slave machine.
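(I checked with jps on both machines; the output looks roughly like the following, process IDs omitted:

master: NameNode, SecondaryNameNode, JobTracker, Jps
slave: DataNode, TaskTracker, Jps)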
However, when I try to load data into HDFS with hadoop fs -put, I get the following error:
15/09/26 08:43:33 ERROR hdfs.DFSClient: Exception closing file /Hello : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /Hello could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
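For reference, the put command was of this form (the local file name here is just an example; /Hello is the HDFS path from the error):

bin/hadoop fs -put Hello /Hello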
I ran an fsck on / and got the following output:
FSCK started by hadoop from /172.31.18.149 for path / at Sat Sep 26 08:46:00 EDT 2015
Status: HEALTHY
Total size: 0 B
Total dirs: 5
Total files: 0 (Files currently being written: 1)
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 1
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 0
Number of racks: 0
Somehow the DataNode is not visible to the NameNode, but I cannot figure out why.
Any help is appreciated. Thanks!