Null/empty values for the key and value retrieved in the Mapper class

Date: 2013-01-09 09:08:29

Tags: hadoop mapreduce cloudera

I have written a MapReduce job to run on a CDH4 cluster. My requirement is to read each complete file as the value, with the file name as the key. For that I wrote custom InputFormat and RecordReader classes.

Custom InputFormat class: FullFileInputFormat.java

import java.io.*;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class FullFileInputFormat extends FileInputFormat<Text, Text> {

    @Override
    public RecordReader<Text, Text> getRecordReader(InputSplit split, JobConf jobConf, Reporter reporter) throws IOException {
        reporter.setStatus(split.toString());
        return new FullFileRecordReader((FileSplit) split, jobConf);
    }
}

Custom RecordReader class: FullFileRecordReader.java

import java.io.BufferedReader;
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class FullFileRecordReader implements RecordReader<Text, Text> {

    private BufferedReader in;
    private boolean processed = false;
    private int processedBytes = 0;

    private FileSplit fileSplit;
    private JobConf conf;

    public FullFileRecordReader(FileSplit fileSplit, JobConf conf) {
        this.fileSplit = fileSplit;
        this.conf = conf;
    }

    @Override
    public void close() throws IOException {
        if (in != null) {
            in.close();
        }
    }

    @Override
    public Text createKey() {
        return new Text("");
    }

    @Override
    public Text createValue() {
        return new Text("");
    }

    @Override
    public long getPos() throws IOException {
        return processedBytes;
    }

    @Override
    public boolean next(Text key, Text value) throws IOException {
        Path filePath = fileSplit.getPath();

        if (!processed) {
            key = new Text(filePath.getName());

            value = new Text("");
            FileSystem fs = filePath.getFileSystem(conf);
            FSDataInputStream fileIn = fs.open(filePath);
            byte[] b = new byte[1024];
            int numBytes = 0;

            while ((numBytes = fileIn.read(b)) > 0) {
                value.append(b, 0, numBytes);
                processedBytes += numBytes;
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public float getProgress() throws IOException {
        return 0;
    }
}

Whenever I print the key and value inside the RecordReader class I do get their contents, but when I print them in the mapper class I see blank values. I cannot understand why the Mapper class does not receive any data for the key and value.

At the moment I have only a map job and no reduce job. The code is:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapred.*;

public class Source {

    public static class Map extends MapReduceBase implements Mapper<Text, Text, Text, Text> {

        public void map(Text key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws java.io.IOException {
            System.out.println("Processing " + key.toString());
            System.out.println("Value: " + value.toString());
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(Source.class);
        job.setJobName("Source");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setJarByClass(Source.class);
        job.setInputFormat(FullFileInputFormat.class);
        job.setMapperClass(Map.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        JobClient.runJob(job);
    }
}

1 Answer:

Answer 0 (score: 1):

You are creating new instances inside your next method. Hadoop re-uses objects, so you need to populate the objects that are passed in. It should be as simple as amending the method as follows:

@Override
public boolean next(Text key, Text value) throws IOException {
    Path filePath = fileSplit.getPath();

    if (!processed) {
        // key = new Text(filePath.getName());
        key.set(filePath.getName());

        // value = new Text("");
        value.clear();
    }
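
The reassignment bug can be reproduced without Hadoop at all; it is ordinary Java parameter-passing. Below is a minimal stand-alone demonstration, where `Box` is a hypothetical stand-in for `Text` (not part of the original code): rebinding the parameter with `new` only changes the local variable, while calling a setter mutates the caller's object.

```java
// Stand-alone demonstration of why "key = new Text(...)" inside next() is
// invisible to the caller: Java passes object references by value, so
// rebinding the parameter only affects the local variable.
class Box {
    String s = "";              // hypothetical stand-in for Text's contents

    void set(String v) {        // analogous to Text.set(String)
        s = v;
    }
}

public class ReferenceDemo {

    // Mirrors the buggy pattern: the caller's object is never touched.
    static void buggyNext(Box key) {
        key = new Box();
        key.set("file.txt");
    }

    // Mirrors the fix: mutate the object the framework passed in.
    static void fixedNext(Box key) {
        key.set("file.txt");
    }

    public static void main(String[] args) {
        Box k1 = new Box();
        buggyNext(k1);
        System.out.println("buggy: [" + k1.s + "]");  // prints buggy: []

        Box k2 = new Box();
        fixedNext(k2);
        System.out.println("fixed: [" + k2.s + "]");  // prints fixed: [file.txt]
    }
}
```

This is exactly what happens in the mapper: the framework keeps its own reference to the key and value it passed into next(), so only mutations through that reference are visible.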

I would also recommend pre-sizing the value Text to avoid its underlying byte array having to 'grow'. Text has a private method named setCapacity, so unfortunately you can't call it - but if you used a BytesWritable to buffer the file input, you could call setCapacity in your next method, passing the fileSplit length (note this may still be wrong if your file is compressed - since the file size reported is the compressed size).
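
If the value type is switched to BytesWritable as suggested, a next() along the following lines would read the whole file into a buffer sized once up front. This is only a sketch under the assumption that fileSplit, conf and processed remain the same fields as in the question's FullFileRecordReader; it has not been run against a real cluster.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;

// Sketch of next() with a BytesWritable value: allocate the buffer once from
// the split length instead of letting it grow append by append.
public boolean next(Text key, BytesWritable value) throws IOException {
    if (!processed) {
        Path filePath = fileSplit.getPath();
        key.set(filePath.getName());

        // Reserve the whole split length at once. For a compressed file this
        // is the compressed size, so it may underestimate the real content.
        int length = (int) fileSplit.getLength();
        byte[] contents = new byte[length];

        FileSystem fs = filePath.getFileSystem(conf);
        FSDataInputStream in = null;
        try {
            in = fs.open(filePath);
            IOUtils.readFully(in, contents, 0, length);
            value.set(contents, 0, length);  // populates and sizes the buffer
        } finally {
            IOUtils.closeStream(in);
        }
        processed = true;
        return true;
    }
    return false;
}
```

As before, the method populates the key and value objects passed in rather than creating new ones.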