I am running a single-node Hadoop setup. I have a MapReduce job that computes averages of certain monitored metrics over specific time windows, for example hourly averages. The job writes its output to a path inside HDFS, which is cleaned up before the job runs. It worked fine for a month. Yesterday, while running the job, I got an exception from the JobClient saying:
File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
The full stack trace is below:
..........
14/01/17 12:00:09 INFO mapred.JobClient: map 100% reduce 32%
14/01/17 12:00:12 INFO mapred.JobClient: map 100% reduce 74%
14/01/17 12:00:17 INFO mapred.JobClient: Task Id : attempt_201401141113_0007_r_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy2.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy2.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
From an initial Google search, this looks like a storage space problem. But I don't think that is it, because my entire input data should be under 600 MB and there is around 1.5 GB of free space on the node. I ran the hadoop dfsadmin -report command and it returned the following:
$hadoop dfsadmin -report
Configured Capacity: 11353194496 (10.57 GB)
Present Capacity: 2354425856 (2.19 GB)
DFS Remaining: 1633726464 (1.52 GB)
DFS Used: 720699392 (687.31 MB)
DFS Used%: 30.61%
Under replicated blocks: 49
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 192.168.1.149:50010
Decommission Status : Normal
Configured Capacity: 11353194496 (10.57 GB)
DFS Used: 720699392 (687.31 MB)
Non DFS Used: 8998768640 (8.38 GB)
DFS Remaining: 1633726464(1.52 GB)
DFS Used%: 6.35%
DFS Remaining%: 14.39%
Last contact: Fri Jan 17 04:36:55 GMT+05:30 2014
Please suggest a solution. It is probably a configuration issue, but I don't know much about Hadoop configuration. Please help.
Answer 0 (score: 0)
I think your problem may actually be a space issue after all. If each block is stored with one extra replica (a replication factor of 2), then 600 MB of input needs 1.2 GB on the cluster.
That would leave you only about 300 MB free, which may not be enough headroom for the job's output and intermediate data.
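If I am right about replication, you can confirm it by checking the replication factor an existing output file was actually written with; the path below is only an example, so point it at a file that really exists under /user/root/out1:

# Print the replication factor (%r) of an existing HDFS file.
# The path is illustrative; substitute one of your own output files.
hadoop fs -stat '%r' /user/root/out1/hi/130-r-00000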
My suggestion is to check whether this is really the problem by running with a smaller dataset, around 300 MB or less. If that does not fix it, try lowering the replication factor to 1 in conf/hdfs-site.xml (HDFS does not accept a replication factor of 0, so 1 is the minimum):

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
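Note that dfs.replication only applies to files written after the change; data already in HDFS keeps the replication factor it was created with. Assuming that holds in your version, you can lower the replication of existing files as well with setrep (the target path is illustrative):

# Recursively (-R) set the replication factor of existing files to 1,
# waiting (-w) until the replication change actually completes.
hadoop fs -setrep -w 1 -R /user/root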