All services running per jps on a multi-node cluster, but "hdfs dfsadmin -report" shows nothing

Date: 2017-09-01 06:52:41

Tags: hadoop

[root@master ~]# jps
10197 SecondaryNameNode
10805 Jps
10358 ResourceManager
9998 NameNode

[root@slave1 ~]# jps
5872 NodeManager
5767 DataNode
6186 Jps

[root@slave2 ~]# jps
5859 Jps
5421 DataNode 
5534 NodeManager

As you can see, all services are running when I run the jps command on the namenode and on the corresponding slave nodes "slave1" and "slave2".

However, when I run the hdfs dfsadmin -report command, this is all I get:

[root@master ~]# hdfs dfsadmin -report
17/09/01 12:11:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Configured Capacity: 0 (0 B)
  Present Capacity: 0 (0 B)
  DFS Remaining: 0 (0 B)
  DFS Used: 0 (0 B)
  DFS Used%: NaN%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  Missing blocks (with replication factor 1): 0

  -------------------------------------------------

This is where the problem lies. I know there are many posts on this particular topic; I have gone through them and have already disabled my firewall, formatted the cluster (taking the datanode cluster ID into account), and fixed the IP issue in VirtualBox where I was getting duplicate packets when pinging the slaves from the master.
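One common cause of this symptom (DataNode JVMs alive per jps, yet zero datanodes registered) is a clusterID mismatch between the namenode's and datanodes' VERSION files, which happens after reformatting the namenode without clearing the datanode data directories. A minimal check sketch; the two directory paths below are placeholders and must be replaced with the actual `dfs.namenode.name.dir` and `dfs.datanode.data.dir` values from hdfs-site.xml:

```shell
# Hypothetical paths -- substitute the directories from your hdfs-site.xml.
NN_VERSION=/opt/hadoop/dfs/name/current/VERSION     # on the master
DN_VERSION=/opt/hadoop/dfs/data/current/VERSION     # on each slave

# Extract the clusterID field from a VERSION file.
get_cluster_id() {
    grep '^clusterID=' "$1" 2>/dev/null | cut -d= -f2
}

# The two IDs must match; a datanode with a stale clusterID
# refuses to register with the namenode.
echo "namenode clusterID: $(get_cluster_id "$NN_VERSION")"
echo "datanode clusterID: $(get_cluster_id "$DN_VERSION")"
```

If the IDs differ, stopping the datanode, wiping its data directory, and restarting it lets it re-register with the namenode's current clusterID.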

It appears the datanodes are not being picked up. To add to it, I also get the following error when copying a file onto HDFS:

[root@master ~]# hdfs dfs -moveFromLocal /home/master/Downloads/citibike.tar /user/citibike
17/09/01 12:17:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/01 12:17:34 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
moveFromLocal: File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

The fsck command runs fine, but it is of no use either, since its output is just as empty as that of dfsadmin -report.
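Since the DataNode processes are alive but never show up in the namenode's report, the remaining step implied by the symptoms is to check, from each slave, that the namenode's RPC endpoint is actually reachable and to read the datanode logs for registration errors. A sketch; the host `master` and port `9000` are assumptions matching a typical `fs.defaultFS` of `hdfs://master:9000` (the real value can be read with `hdfs getconf -confKey fs.defaultFS`):

```shell
# Small helper: test whether a TCP port is reachable (uses bash's /dev/tcp).
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# On each slave, verify the namenode RPC endpoint is reachable.
# 'master' and 9000 are assumptions taken from a typical core-site.xml.
if port_open master 9000; then
    echo "namenode RPC port reachable"
else
    echo "cannot reach namenode RPC port -- check /etc/hosts and firewall"
fi

# Also inspect the datanode log on each slave for registration failures
# (the log path below is the usual default; adjust to your install):
#   tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```

A datanode that cannot open the RPC port (wrong hostname mapping, or a namenode bound to 127.0.0.1) stays alive in jps while the namenode keeps reporting zero datanodes, which matches the output above.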

0 Answers:

There are no answers yet.