I am learning Hadoop, and this problem has puzzled me for a while. Basically, I am writing a SequenceFile to disk and then reading it back. However, every time I read it I get an EOFException. A deeper look reveals that the sequence file is being truncated prematurely while it is written: the truncation always happens after writing index 962, and the file always ends up with a fixed size of 45056 bytes.

I am using Java 8 and Hadoop 2.5.1 on a MacBook Pro. In fact, I tried the same code on another Linux machine under Java 7, and the same thing happened.

I can rule out the writer/reader not being closed properly. I tried the old-style try/catch with an explicit writer.close() as shown in the code, and also the newer try-with-resources approach. Neither works.

Any help would be highly appreciated.

Here is the code I am using:
public class SequenceFileDemo {

    private static final String[] DATA = { "One, two, buckle my shoe",
            "Three, four, shut the door",
            "Five, six, pick up sticks",
            "Seven, eight, lay them straight",
            "Nine, ten, a big fat hen" };

    public static void main(String[] args) throws Exception {
        String uri = "file:///Users/andy/Downloads/puzzling.seq";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);
        IntWritable key = new IntWritable();
        Text value = new Text();

        // API change
        try {
            SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    stream(fs.create(path)),
                    keyClass(IntWritable.class),
                    valueClass(Text.class));

            for (int i = 0; i < 1024; i++) {
                key.set(i);
                value.clear();
                value.set(DATA[i % DATA.length]);
                writer.append(key, value);
                if ((i - 1) % 100 == 0) writer.hflush();
                System.out.printf("[%s]\t%s\t%s\n", writer.getLength(), key, value);
            }
            writer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        try {
            SequenceFile.Reader reader = new SequenceFile.Reader(conf,
                    SequenceFile.Reader.file(path));
            Class<?> keyClass = reader.getKeyClass();
            Class<?> valueClass = reader.getValueClass();

            boolean isWritableSerialization = false;
            try {
                keyClass.asSubclass(WritableComparable.class);
                isWritableSerialization = true;
            } catch (ClassCastException e) {
            }

            if (isWritableSerialization) {
                WritableComparable<?> rKey = (WritableComparable<?>) ReflectionUtils.newInstance(keyClass, conf);
                Writable rValue = (Writable) ReflectionUtils.newInstance(valueClass, conf);
                while (reader.next(rKey, rValue)) {
                    System.out.printf("[%s] %d %s=%s\n", reader.syncSeen(), reader.getPosition(), rKey, rValue);
                }
            } else {
                // make sure io.serializations contains the serialization that was in use when the sequence file was written
            }
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Answer 0 (score: 1)

I think you are missing a writer.close() after the write loop. That would guarantee a final flush before you start reading.
Answer 1 (score: 1)

I actually found the error: you never close the stream you created in Writer.stream(fs.create(path)).

For some reason, the close does not propagate to the stream you just created there. I would have thought this is a bug, but I am too lazy to look it up in Jira right now.

One way to solve the problem is simply to use Writer.file(path) instead.

Obviously, you can also just close the created stream explicitly. Find my corrected example below:
Path path = new Path("file:///tmp/puzzling.seq");

try (FSDataOutputStream stream = fs.create(path)) {
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf, Writer.stream(stream),
            Writer.keyClass(IntWritable.class), Writer.valueClass(NullWritable.class))) {
        for (int i = 0; i < 1024; i++) {
            writer.append(new IntWritable(i), NullWritable.get());
        }
    }
}

try (SequenceFile.Reader reader = new SequenceFile.Reader(conf, Reader.file(path))) {
    Class<?> keyClass = reader.getKeyClass();
    Class<?> valueClass = reader.getValueClass();
    WritableComparable<?> rKey = (WritableComparable<?>) ReflectionUtils.newInstance(keyClass, conf);
    Writable rValue = (Writable) ReflectionUtils.newInstance(valueClass, conf);
    while (reader.next(rKey, rValue)) {
        System.out.printf("%s = %s\n", rKey, rValue);
    }
}
Answer 2 (score: 0)

Thanks Thomas.

It boils down to whether the writer "owns" the stream it created or not. When creating the writer, if we pass in the option Writer.file(path), the writer "owns" the underlying stream it creates internally and closes it when close() is called. However, if we pass in Writer.stream(aStream), the writer assumes someone else is responsible for that stream and will not close it when close() is called. In short, it's not a bug, I just didn't understand it well enough.
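This "owns vs. borrows" convention appears in many stream-wrapping APIs, not just Hadoop's. As a stdlib-only illustration (the class and method names below are hypothetical, chosen to mirror the SequenceFile.Writer behavior described above; no Hadoop dependency), a writer can record whether it was handed a stream it owns and only propagate close() in that case:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical writer mirroring the close() semantics described above:
// it closes the underlying stream only when it "owns" that stream.
class OwnershipAwareWriter implements AutoCloseable {
    private final OutputStream out;
    private final boolean ownsStream; // true => close() propagates to the stream

    private OwnershipAwareWriter(OutputStream out, boolean ownsStream) {
        this.out = out;
        this.ownsStream = ownsStream;
    }

    // Analogue of Writer.file(path): the writer owns the stream.
    static OwnershipAwareWriter owning(OutputStream out) {
        return new OwnershipAwareWriter(out, true);
    }

    // Analogue of Writer.stream(aStream): the caller keeps ownership.
    static OwnershipAwareWriter borrowing(OutputStream out) {
        return new OwnershipAwareWriter(out, false);
    }

    void append(byte[] record) throws IOException {
        out.write(record);
    }

    @Override
    public void close() throws IOException {
        out.flush();     // always flush buffered data
        if (ownsStream) {
            out.close(); // propagate close only when we own the stream
        }
    }
}

public class OwnershipDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream borrowed = new ByteArrayOutputStream();
        try (OwnershipAwareWriter w = OwnershipAwareWriter.borrowing(borrowed)) {
            w.append("hello".getBytes());
        }
        // The borrowing writer flushed but did not close the stream;
        // the caller is still responsible for closing it.
        borrowed.close();
        System.out.println(borrowed.size()); // 5
    }
}
```

With this convention, the caller of borrowing(...) must close the stream itself, exactly as in the corrected Hadoop example above, where the try-with-resources on fs.create(path) does that job.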