This error keeps getting thrown by my Hadoop Nutch crawler. There is plenty of free space on all the nodes, and I am not sure how to proceed.
The full error is:
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /crawl/segments/20170211181653/crawl_parse/part-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1575)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
Edit: adding the output of hdfs dfsadmin -report:
Configured Capacity: 85316812800 (79.46 GB)
Present Capacity: 84047159296 (78.28 GB)
DFS Remaining: 83300806656 (77.58 GB)
DFS Used: 746352640 (711.78 MB)
DFS Used%: 0.89%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):
Name: 10.0.0.175:50010 (ip-10-0-0-175.ec2.internal)
Hostname: ip-10-0-0-175.ec2.internal
Decommission Status : Normal
Configured Capacity: 28438937600 (26.49 GB)
DFS Used: 222629888 (212.32 MB)
Non DFS Used: 422780928 (403.20 MB)
DFS Remaining: 27793526784 (25.88 GB)
DFS Used%: 0.78%
DFS Remaining%: 97.73%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Feb 21 20:31:36 UTC 2017
Name: 10.0.0.4:50010 (ip-10-0-0-4.ec2.internal)
Hostname: ip-10-0-0-4.ec2.internal
Decommission Status : Normal
Configured Capacity: 28438937600 (26.49 GB)
DFS Used: 248160256 (236.66 MB)
Non DFS Used: 423477248 (403.86 MB)
DFS Remaining: 27767300096 (25.86 GB)
DFS Used%: 0.87%
DFS Remaining%: 97.64%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Feb 21 20:31:38 UTC 2017
Name: 10.0.0.11:50010 (ip-10-0-0-11.ec2.internal)
Hostname: ip-10-0-0-11.ec2.internal
Decommission Status : Normal
Configured Capacity: 28438937600 (26.49 GB)
DFS Used: 275562496 (262.80 MB)
Non DFS Used: 423395328 (403.78 MB)
DFS Remaining: 27739979776 (25.83 GB)
DFS Used%: 0.97%
DFS Remaining%: 97.54%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Feb 21 20:31:36 UTC 2017
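In case it matters, the minReplication (=1) mentioned in the error should correspond to the HDFS replication settings. These can be read back from the running configuration (assuming the standard Hadoop 2.x configuration keys):

hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.namenode.replication.min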
Answer 0 (score: -1)
bin/stop-all.sh
The cache directory lives under var/lib/hadoop-hdfs/cache. This is the temporary directory you configure with the hadoop.tmp.dir property in core-site.xml; clear it out before reformatting.
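If you are not sure where hadoop.tmp.dir actually resolves to on your cluster, you can query the loaded configuration (a quick check, assuming the hdfs client script is on your PATH):

hdfs getconf -confKey hadoop.tmp.dir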
bin/hadoop namenode -format
bin/start-all.sh
This should resolve your problem.
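Putting those steps together, a minimal sketch of the full reset, assuming the Hadoop install lives in /usr/local/hadoop and hadoop.tmp.dir points at /var/lib/hadoop-hdfs/cache (both paths are assumptions; adjust to your setup). Note that formatting the namenode destroys all data in HDFS:

#!/bin/bash
# WARNING: this wipes HDFS metadata; only run it on a cluster whose data is disposable.
cd /usr/local/hadoop                  # assumed install location
bin/stop-all.sh                       # stop all HDFS and MapReduce daemons
rm -rf /var/lib/hadoop-hdfs/cache/*   # clear hadoop.tmp.dir contents (repeat on every node)
bin/hadoop namenode -format           # reformat the namenode metadata
bin/start-all.sh                      # bring the cluster back up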