Reading a Hadoop SequenceFile in Flink

Date: 2019-05-24 21:17:55

Tags: apache-flink

How do I read a Hadoop SequenceFile in Flink? I ran into multiple problems with the approach below.

I have:

DataSource<String> source = env.readFile(new SequenceFileInputFormat(config), filePath);

public static class SequenceFileInputFormat extends FileInputFormat<String> {
    ...
    @Override
    public void setFilePath(String filePath) {
        org.apache.hadoop.conf.Configuration config = HadoopUtils.getHadoopConfiguration(configuration);
        logger.info("Initializing: " + filePath);
        org.apache.hadoop.fs.Path hadoopPath = new org.apache.hadoop.fs.Path(filePath);

        try {
            // Open the SequenceFile and instantiate its key/value Writable types via reflection.
            reader = new SequenceFile.Reader(hadoopPath.getFileSystem(config), hadoopPath, config);
            key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), config);
            value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), config);
        } catch (IOException e) {
            logger.error("Sequence file creation failed.", e);
        }
    }

}

One of the problems: Could not read the user code wrapper: SequenceFileInputFormat.

1 Answer:

Answer 0 (score: 1):

Once you have an InputFormat, you can call ExecutionEnvironment.createInput(<input format>) to create your DataSource.

For a SequenceFile, the data type will always be Tuple2<key, value>, so you have to use a map function to convert it to whatever type you actually want to read.
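For example, here is a minimal sketch using the readSequenceFile helper from flink-hadoop-compatibility, assuming (hypothetically) a file with LongWritable keys and Text values; the input path is a placeholder:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// Each record arrives as Tuple2<key, value>.
DataSet<Tuple2<LongWritable, Text>> records = env.createInput(
        HadoopInputs.readSequenceFile(LongWritable.class, Text.class, "/data/input")); // hypothetical path

// Map away the Writable wrappers to get the type you actually want.
DataSet<String> values = records.map(t -> t.f1.toString());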

I use this code to read a SequenceFile that contains Cascading Tuples...

// Set up a Hadoop Job whose input path points at the SequenceFile(s); 'directory' is your input location.
Job job = Job.getInstance();
FileInputFormat.addInputPath(job, new Path(directory));
// Wrap Hadoop's SequenceFileInputFormat so Flink can consume it as a DataSource.
env.createInput(HadoopInputs.createHadoopInput(
        new SequenceFileInputFormat<Tuple, Tuple>(), Tuple.class, Tuple.class, job));
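For reference, that snippet mixes Flink, Hadoop (the mapreduce flavor), and Cascading classes; the imports it assumes would be roughly:

import cascading.tuple.Tuple;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

The resulting DataSet elements are Tuple2<Tuple, Tuple>, which you would then convert with a map function as described above.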