Ubuntu 12.04 - Eclipse 3.8 - hadoop-1.2.1 - Input path does not exist

Time: 2013-11-08 08:40:47

Tags: eclipse hadoop

I did set up Hadoop on Ubuntu OS, following all the necessary steps: 1. created the HDFS file system, 2. moved the text files into the input directory, 3. have privileged access to all the directories. But when I run the simple word count example below, I get:

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class wordcount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
        conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setJarByClass(wordcount.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // FileInputFormat.addInputPath(job, new Path(args[0]));
        // FileOutputFormat.setOutputPath(job, new Path(args[1]));

        FileInputFormat.setInputPaths(job, new Path("/user/gabriele/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/gabriele/output"));

        job.waitForCompletion(true);
    }

}

But the input path is valid (I checked it from the command line as well), and I can even view the files at that path from Eclipse itself, so please help me figure out where I went wrong.

There was a suggested solution to add the following two lines:

config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

But it still does not work.

Here is the error (Run As -> Run on Hadoop):

13/11/08 08:39:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/11/08 08:39:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/08 08:39:12 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/11/08 08:39:12 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-gabriele/mapred/staging/gabriele481581440/.staging/job_local481581440_0001
13/11/08 08:39:12 ERROR security.UserGroupInformation: PriviledgedActionException as:gabriele cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:235)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at wordcount.main(wordcount.java:74)

Thanks

1 Answer:

Answer 0 (score: 0):

Unless your Hadoop installation really is rooted at /HADOOP_HOME, I'd suggest you change the following lines so that HADOOP_HOME is replaced with wherever Hadoop is actually installed (/usr/lib/hadoop, /opt/hadoop, or wherever you installed it):

conf.addResource(new Path("/usr/lib/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/usr/lib/hadoop/conf/hdfs-site.xml"));

Or, in Eclipse, add the /usr/lib/hadoop/conf folder (or wherever you installed Hadoop) to your build classpath.
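
Another option is to bypass the XML files entirely and set the default filesystem in code; a minimal sketch (the URI must match the fs.default.name value in your core-site.xml — hdfs://localhost:9000 below is only an assumed pseudo-distributed default):

// Assumption: the NameNode URI matches your core-site.xml; adjust as needed.
// fs.default.name is the Hadoop 1.x key for the default filesystem.
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://localhost:9000");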