HDFS DataXceiver error (invalid block)

Time: 2014-02-22 07:49:26

Tags: hadoop hdfs

I have a small Hadoop cluster: 8 nodes running vanilla Hadoop 1.0.2, with the NN and SNN on separate nodes. The nodes themselves have 20+ GB of RAM each. I repeatedly see DataXceiver errors in the DataNode logs. (Using it with Hive and Pig.)

I know this can be caused by a setting in hdfs-site.xml, which I have already raised to 4096, as shown below:

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

I have also raised the ulimit for the hdfs user. I still get the errors:
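One thing that may be worth double-checking (a sketch, assuming the DataNode runs as the hdfs user) is whether the raised limit is actually in effect for the *running* daemon, since a `ulimit` change in a shell or in `/etc/security/limits.conf` does not apply to a process that was started before the change:

```shell
# Limit in the current shell (may be a number or "unlimited")
ulimit -n

# Limits of the running DataNode process, which can differ if the
# daemon was started before the limit was raised. The pgrep pattern
# is an assumption based on the DataNode's main class name.
DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -n 1)
if [ -n "$DN_PID" ]; then
    grep 'open files' "/proc/$DN_PID/limits"
fi
```

If the two values disagree, restarting the DataNode after raising the limit would be the first thing to try.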

 2014-02-22 00:40:36,021 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.22.96.9:50010, storageID=DS-1389939194-10.22.96.9-50010-1345070063427, infoPort=50075,\
 ipcPort=50020):Got exception while serving blk_-1430839469926724904_1952628 to /10.22.96.9:
java.io.IOException: Block blk_-1430839469926724904_1952628 is not valid.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1072)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1035)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getVisibleLength(FSDataset.java:1045)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:94)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:189)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:99)
        at java.lang.Thread.run(Thread.java:662)

 2014-02-22 00:40:36,021 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.22.96.9:50010, storageID=DS-1389939194-10.22.96.9-50010-1345070063427, infoPort=50075\
, ipcPort=50020):DataXceiver
java.io.IOException: Block blk_-1430839469926724904_1952628 is not valid.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1072)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1035)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getVisibleLength(FSDataset.java:1045)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:94)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:189)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:99)
        at java.lang.Thread.run(Thread.java:662)

Any hints or explanations of this recurring event would be very helpful for understanding the problem better, as would pointers on how to debug it down to the root cause, or at least figure out whom to ask :)
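For what it's worth, one way to start debugging a "Block ... is not valid" error (a sketch; the block id below is simply the one from the log excerpt above, and it assumes the `hadoop` CLI is on the PATH and is run as an HDFS superuser) is to ask the NameNode which file the block belongs to and whether HDFS still considers that file healthy:

```shell
# Run fsck over the namespace, printing per-file block ids and their
# DataNode locations, then search for the block the DataNode
# complained about. If the block no longer appears, the NameNode has
# already moved on (e.g. the file was deleted or re-replicated) and a
# stale client/replica was asking for it.
hadoop fsck / -files -blocks -locations | grep 'blk_-1430839469926724904'
```

Comparing the fsck view with the DataNode's local block directories (`dfs.data.dir`) would then show whether the replica is genuinely missing on disk or merely unknown to the DataNode's in-memory block map.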

Thanks for your attention, -Atul

0 answers:

No answers