Hadoop on Windows: getting the exception "is not a valid DFS filename"

Time: 2016-12-24 07:45:26

Tags: java hadoop

I am new to Hadoop and struggling at the very first steps. In Eclipse I wrote a word count program and built a JAR for it.

I am trying to run it with the following hadoop command:

$ ./hadoop jar C:/cygwin64/home/PAKU/hadoop-1.2.1/wordcount.jar com.hadoopexpert.WordCountDriver file:///C:/cygwin64/home/PAKU/work/hadoopdata/tmp/dfs/ddata/file.txt file:///C:/cygwin64/home/PAKU/hadoop-dir/datadir/tmp/output

The exception I get is:

Exception in thread "main" java.lang.IllegalArgumentException: Pathname /C:/cygwin64/home/PAKU/work/hadoopdata/tmp/mapred/staging/PAKU/.staging from hdfs://localhost:50000/C:/cygwin64/home/PAKU/work/hadoopdata/tmp/mapred/staging/PAKU/.staging is not a valid DFS filename.
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:143)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:554)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:788)
        at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:109)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
        at com.hadoopexpert.WordCountDriver.main(WordCountDriver.java:30)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

Note: I am running Hadoop on Windows using Cygwin.

Code:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) {
        try {
            Job job = new Job();
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            job.setJarByClass(WordCountDriver.class);

            FileInputFormat.setInputPaths(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            try {
                System.exit(job.waitForCompletion(true) ? 0 : -1);
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}


import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context) {
        int total = 0;
        for (IntWritable value : values) {
            total += value.get();
        }
        try {
            context.write(key, new IntWritable(total));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}


import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context) {
        String s = value.toString();
        for (String word : s.split(" ")) {
            Text text = new Text(word);
            IntWritable intW = new IntWritable(1);
            try {
                context.write(text, intW);
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Can anyone help me get my first Hadoop program running?

Thanks in advance.

2 Answers:

Answer 0 (score: 2)

You have given local paths to FileInputFormat and FileOutputFormat.

Put the file into HDFS and then use the HDFS path.

Steps:

  1. First, put (or copyFromLocal) the file into HDFS:

    hdfs dfs -put /local/file/locaion hdfs://ip_add:port/hdfs_location
    
  2. You can check the file with ls:

    hdfs dfs -ls /hdfs_location/
    
  3. Now pass the HDFS location as the input argument, and give a new directory for the output, as in the sketch after this list.
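
Putting the steps together, a minimal end-to-end sketch for this Hadoop 1.x / Cygwin setup could look like the following. The HDFS paths under /user/PAKU/ are only illustrative placeholders (they are not from the question), and the output directory must not already exist when the job is submitted:

    # copy the local input file (path taken from the question) into HDFS
    hadoop fs -put C:/cygwin64/home/PAKU/work/hadoopdata/tmp/dfs/ddata/file.txt /user/PAKU/input/file.txt

    # verify the file landed in HDFS
    hadoop fs -ls /user/PAKU/input/

    # run the job with HDFS paths instead of file:/// URIs
    hadoop jar C:/cygwin64/home/PAKU/hadoop-1.2.1/wordcount.jar com.hadoopexpert.WordCountDriver /user/PAKU/input/file.txt /user/PAKU/output

    # inspect the result
    hadoop fs -cat /user/PAKU/output/part-r-00000

Scheme-less paths like these are resolved against the default file system (hdfs://localhost:50000, as seen in the stack trace), so the input and output live on HDFS rather than on the local Windows file system.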

Answer 1 (score: 0)

I think you have not uploaded the file into HDFS yet. You can do that with Hadoop's put command. Once the file is in an HDFS directory, it should work.
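
For example, using the Hadoop 1.x spelling of the command (hadoop fs -put; hdfs dfs -put is the newer equivalent) and a hypothetical HDFS target directory:

    hadoop fs -put /local/file/location /user/PAKU/input/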