I'm running Hadoop 0.21.0 in a single-node cluster to process a single large (> 200 GB) file. To reduce the execution time, I tried several HDFS block sizes in turn (128, 256, and 512 MB; 1, 1.5, and 1.75 GB). However, with any block size >= 2 GB I hit the following exception.
Note: I'm using java-8-oracle.
2015-08-05 12:02:12,524 WARN org.apache.hadoop.mapred.Child: Exception running child : java.lang.IndexOutOfBoundsException
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:186)
at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:113)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1518)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1483)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1451)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1432)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:60)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:460)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.mapred.Child.main(Child.java:211)
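For reference, since the post doesn't show how the block sizes were set, here is a minimal sketch of one way to request a per-file block size through the HDFS Java API. The path, buffer size, and replication factor are placeholders of my own, not from the post; the five-argument FileSystem.create() overload shown here is a long-standing part of the public FileSystem API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Request a 1 GB block size for this file only;
        // 4096-byte write buffer, replication factor 1 (single node).
        long blockSize = 1024L * 1024L * 1024L;
        FSDataOutputStream out = fs.create(new Path("/input/bigfile.seq"),
                true, 4096, (short) 1, blockSize);
        out.close();
    }
}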
Answer (score: 2)
That appears to be expected for the Hadoop version you are using (0.21.0). The problem you've run into was fixed in later releases; see https://issues.apache.org/jira/browse/HDFS-96
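The 2 GB boundary itself is consistent with signed 32-bit arithmetic: 2 GB is 2^31 bytes, one more than Integer.MAX_VALUE, so any code path that still tracks an in-block offset or length as a Java int wraps to a negative value at exactly that size. Whether this is the precise mechanism behind HDFS-96 is an assumption on my part, but the arithmetic is easy to demonstrate:

public class TwoGbOverflow {
    public static void main(String[] args) {
        long twoGb = 2L * 1024 * 1024 * 1024;  // 2 GiB = 2^31 bytes
        System.out.println(twoGb);             // 2147483648
        System.out.println((int) twoGb);       // -2147483648: wraps negative
        // A negative offset or length handed to a read() is exactly the
        // kind of value that produces an IndexOutOfBoundsException like
        // the one in the stack trace above.
    }
}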