In Eclipse

Time: 2016-08-14 20:05:25

Tags: java hadoop nullpointerexception mapreduce

I am getting a NullPointerException while trying to execute a simple MapReduce program. I cannot figure out where the problem is.

package MapReduce.HadMapReduce;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RecCount extends Configured implements Tool {

    public int run(String[] arg0) throws Exception {

        Job job = Job.getInstance(getConf());

        FileInputFormat.setInputPaths(job, new Path("C:\\singledeck.txt"));
        FileOutputFormat.setOutputPath(job, new Path("C:\\temp123"));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String args[]) throws Exception {
        System.exit(ToolRunner.run(new RecCount(), args));
    }
}

The error is:

Exception in thread "main" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:731)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:489)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:530)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:507)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at MapReduce.HadMapReduce.RecCount.run(RecCount.java:22)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at MapReduce.HadMapReduce.RecCount.main(RecCount.java:26)
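For context on the top frame of the trace: java.lang.ProcessBuilder.start throws a NullPointerException when any element of the command array is null. On Windows, Hadoop's Shell utilities build such a command around the resolved path of a native helper binary (winutils.exe), and that path can resolve to null when HADOOP_HOME / hadoop.home.dir is not set; this is an assumption inferred from the trace, not something shown in the post. A minimal stdlib-only sketch of the JDK behaviour:

```java
import java.io.IOException;
import java.util.Arrays;

public class ProcessBuilderNpeDemo {
    public static void main(String[] args) {
        // A command array whose first element is null, mimicking what
        // happens when a native helper binary cannot be resolved.
        // (Hypothetical stand-in; Hadoop itself is not involved here.)
        String[] command = { null, "ls", "-F" };
        try {
            new ProcessBuilder(Arrays.asList(command)).start();
            System.out.println("process started");
        } catch (NullPointerException e) {
            // ProcessBuilder.start() rejects null command elements.
            System.out.println("NullPointerException from ProcessBuilder.start");
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```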

Here is the logic that happens behind the scenes:

ToolRunner calls the following run method, which in turn calls the other run method (pasted just below it), where the configuration is set if it is null:

public static int run(Tool tool, String[] args) throws Exception {
    return run(tool.getConf(), tool, args);
}

public static int run(Configuration conf, Tool tool, String[] args) throws Exception {
    if (conf == null) {
        conf = new Configuration();
    }
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    // set the configuration back, so that Tool can configure itself
    tool.setConf(conf);
    // get the args w/o generic hadoop args
    String[] toolArgs = parser.getRemainingArgs();
    return tool.run(toolArgs);
}

In the last statement above, my run method is called, because I implemented the Tool interface. I cannot find any mistake in my code. Please point it out if you can.
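The null-check in the second run method above can be sketched without any Hadoop dependency (Conf below is a hypothetical stand-in for org.apache.hadoop.conf.Configuration). The point it illustrates: ToolRunner guarantees that the Tool's run method never sees a null configuration, so the NullPointerException must originate elsewhere.

```java
import java.util.HashMap;

public class ToolRunnerPattern {
    // Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
    static class Conf extends HashMap<String, String> {}

    // Mirrors the shape of ToolRunner.run(Configuration, Tool, String[]):
    // if the caller passes no configuration, a fresh default one is
    // created before the tool runs.
    static Conf resolveConf(Conf conf) {
        if (conf == null) {
            conf = new Conf();
        }
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(resolveConf(null) != null);   // a default Conf is created
        Conf given = new Conf();
        System.out.println(resolveConf(given) == given); // an existing Conf is kept
    }
}
```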

Can someone explain what is wrong with my code?

0 Answers:

There are no answers yet.