Hadoop: NullPointerException with a custom InputFormat

Posted: 2015-01-29 11:18:18

Tags: java hadoop mapreduce

I have developed a custom InputFormat for Hadoop (including a custom InputSplit and a custom RecordReader), and I am experiencing a rare NullPointerException.

These classes are going to be used to query a third-party system that exposes a REST API for record retrieval. Thus, I took inspiration from DBInputFormat, which is also a non-HDFS InputFormat.

The error I get is the following:

Error: java.lang.NullPointerException at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:762)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

I searched the MapTask code (Hadoop version 2.1.0) and found that the problematic part is the initialization of the RecordReader:

472 NewTrackingRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
473       org.apache.hadoop.mapreduce.InputFormat<K, V> inputFormat,
474       TaskReporter reporter,
475       org.apache.hadoop.mapreduce.TaskAttemptContext taskContext)
476       throws InterruptedException, IOException {
...
491    this.real = inputFormat.createRecordReader(split, taskContext);
...
494 }
...
519 @Override
520 public void initialize(org.apache.hadoop.mapreduce.InputSplit split,
521       org.apache.hadoop.mapreduce.TaskAttemptContext context
522       ) throws IOException, InterruptedException {
523    long bytesInPrev = getInputBytes(fsStats);
524    real.initialize(split, context);
525    long bytesInCurr = getInputBytes(fsStats);
526    fileInputByteCounter.increment(bytesInCurr - bytesInPrev);
527 }

And of course, the relevant parts of my own code:

# MyInputFormat.java

public static void setEnvironmnet(Job job, String host, String port, boolean ssl, String APIKey) {
    backend = new Backend(host, port, ssl, APIKey);
}

public static void addResId(Job job, String resId) {
    Configuration conf = job.getConfiguration();
    String inputs = conf.get(INPUT_RES_IDS, "");

    if (inputs.isEmpty()) {
        inputs += resId;
    } else {
        inputs += "," + resId;
    }

    conf.set(INPUT_RES_IDS, inputs);
}

@Override
public List<InputSplit> getSplits(JobContext job) {
    // resulting splits container
    List<InputSplit> splits = new ArrayList<InputSplit>();

    // get the Job configuration
    Configuration conf = job.getConfiguration();

    // get the inputs, i.e. the list of resource IDs
    String input = conf.get(INPUT_RES_IDS, "");
    String[] resIDs = StringUtils.split(input);

    // iterate on the resIDs
    for (String resID: resIDs) {
       splits.addAll(getSplitsResId(resID, job.getConfiguration()));
    }

    // return the splits
    return splits;
}

@Override
public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
    if (backend == null) {
        logger.info("Unable to create a MyRecordReader, it seems the environment was not properly set");
        return null;
    }

    // create a record reader
    return new MyRecordReader(backend, split, context);
}

# MyRecordReader.java

@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
    // get start, end and current positions
    MyInputSplit inputSplit = (MyInputSplit) this.split;
    start = inputSplit.getFirstRecordIndex();
    end = start + inputSplit.getLength();
    current = 0;

    // query the third-party system for the related resource, seeking to the start of the split
    records = backend.getRecords(inputSplit.getResId(), start, end);
}

# MapReduceTest.java

public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new Configuration(), new MapReduceTest(), args);
    System.exit(res);
}

@Override
public int run(String[] args) throws Exception {
    Configuration conf = this.getConf();
    Job job = Job.getInstance(conf, "MapReduce test");
    job.setJarByClass(MapReduceTest.class);
    job.setMapperClass(MyMap.class);
    job.setCombinerClass(MyReducer.class);
    job.setReducerClass(MyReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setInputFormatClass(MyInputFormat.class);
    MyInputFormat.addResId(job, "ca73a799-9c71-4618-806e-7bd0ca1911f4");
    MyInputFormat.setEnvironmnet(job, "my.host.com", "443", true, "my_api_key");
    FileOutputFormat.setOutputPath(job, new Path(args[0]));
    return job.waitForCompletion(true) ? 0 : 1;
}

Any ideas about what is wrong?

BTW, which is the "good" InputSplit the RecordReader must use: the one given to the constructor, or the one given to the initialize method? Anyway, I've tried both options and the resulting error is the same :)

2 answers:

Answer 0 (score: 1)

The way I read your stack trace, real is null at line 524.

But don't take my word for it. Slip an assert or a System.out.println in there and check the value of real for yourself.
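For instance, here is a rough sketch of that kind of check. Since you can't easily edit MapTask itself, the simplest place to put it is your own createRecordReader; the exception message is just illustrative:

@Override
public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
        TaskAttemptContext context) {
    // Temporary diagnostic: fail loudly instead of silently returning null
    // (a null return here is exactly what ends up as "real" in MapTask).
    if (backend == null) {
        throw new IllegalStateException(
            "backend is null: setEnvironmnet was not called (or its value is not visible) in this JVM");
    }
    return new MyRecordReader(backend, split, context);
}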

A NullPointerException almost always means you dereferenced something you didn't expect to be null. Some libraries and collections will throw it at you as their way of saying "this can't be null".

Error: java.lang.NullPointerException at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:524)

To me, that reads as follows: in the org.apache.hadoop.mapred package, the MapTask class has an inner class NewTrackingRecordReader whose initialize method threw a NullPointerException at line 524.

524 real.initialize( blah, blah) // I actually stopped reading after the dot
this.real was set on line 491:

491 this.real = inputFormat.createRecordReader(split, taskContext);

Assuming you didn't leave out some tighter-scoped this.real that shadows it, we need to look at inputFormat.createRecordReader(split, taskContext); if that can return null, then it is probably the culprit.

And indeed, it returns null when backend is null:

@Override
public RecordReader<LongWritable, Text> createRecordReader(
    InputSplit split, 
    TaskAttemptContext context) {

    if (backend == null) {
        logger.info("Unable to create a MyRecordReader, " + 
                    "it seems the environment was not properly set");
        return null;
    }

    // create a record reader
    return new MyRecordReader(backend, split, context);
}

It looks like setEnvironmnet is supposed to set backend:

# MyInputFormat.java

public static void setEnvironmnet(
    Job job, 
    String host, 
    String port, 
    boolean ssl, 
    String APIKey) {

    backend = new Backend(host, port, ssl, APIKey);
}
backend must be declared somewhere outside of setEnvironmnet (otherwise you would get a compiler error).

If backend is not set to something non-null when it is constructed, and setEnvironmnet is not called before createRecordReader, then you should expect to get exactly the NullPointerException you got.

UPDATE:

As you pointed out, since setEnvironmnet() is static, backend must be static as well. That means you have to make sure that other instances are not setting it to null.

Answer 1 (score: 1)

Solved. The problem was that the backend variable was declared static, i.e. it belongs to the Java class itself, so any object changing that variable (e.g. setting it to null) affects all the other objects of the same class.

Now, setEnvironment adds the host, port, SSL usage and API key to the job Configuration (the same way addResId already does with the resource IDs); when createRecordReader is called, that configuration is read and the backend object is created.
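For reference, here is a minimal sketch of what that refactoring could look like. The configuration key names (e.g. myinputformat.host) are made up for illustration, and the Backend constructor is assumed to be the one shown in the question:

# MyInputFormat.java (sketch)

// Illustrative configuration keys; any unique names will do.
private static final String HOST_KEY    = "myinputformat.host";
private static final String PORT_KEY    = "myinputformat.port";
private static final String SSL_KEY     = "myinputformat.ssl";
private static final String API_KEY_KEY = "myinputformat.apikey";

public static void setEnvironment(Job job, String host, String port, boolean ssl, String APIKey) {
    // Store the connection parameters in the job Configuration instead of in a static field,
    // so they travel with the job and are available wherever createRecordReader runs.
    Configuration conf = job.getConfiguration();
    conf.set(HOST_KEY, host);
    conf.set(PORT_KEY, port);
    conf.setBoolean(SSL_KEY, ssl);
    conf.set(API_KEY_KEY, APIKey);
}

@Override
public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
    // Rebuild the Backend from the Configuration when the record reader is actually created.
    Configuration conf = context.getConfiguration();
    Backend backend = new Backend(conf.get(HOST_KEY),
                                  conf.get(PORT_KEY),
                                  conf.getBoolean(SSL_KEY, true),
                                  conf.get(API_KEY_KEY));
    return new MyRecordReader(backend, split, context);
}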

Thanks to CandiedOrange for putting me on the right track!