Hadoop File Reading

Date: 2014-08-25 10:16:01

Tags: hadoop

A Hadoop DistributedCache WordCount example on Hadoop 2.2.0. The file is copied into the HDFS file system so that it can be read in the mapper class's setup() method.

// The map built from the cached file; declared as a field on the mapper class.
private HashMap<String, String> cacheData;

@Override
protected void setup(Context context) throws IOException, InterruptedException
{
    // Resolve the local paths of the files registered with the DistributedCache.
    Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration());
    cacheData = new HashMap<String, String>();

    for (Path urifile : uris)
    {
        try
        {
            // Read the cached file line by line, storing each line in the map.
            BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString()));
            String line;
            while ((line = readBuffer1.readLine()) != null)
            {
                System.out.println("**************" + line);
                cacheData.put(line, line);
            }
            readBuffer1.close();
        }
        catch (Exception e)
        {
            System.out.println(e.toString());
        }
    }
}
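
For context, a minimal sketch of how cacheData might then be consulted in the map() method; the Map class body is not part of the original post, so the count-only-cached-words logic below is an assumption:

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException
    {
        // Assumed usage: emit a count only for words present in the cached file.
        for (String word : value.toString().split("\\s+"))
        {
            if (cacheData.containsKey(word))
            {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }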

Inside the driver's main class:

    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 3)
    {
      System.err.println("Usage: wordcount <in> <out> <cachefile>");
      System.exit(2);
    }
    Job job = new Job(conf, "word_count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    // Delete any existing output directory so the job does not fail on rerun.
    Path outputpath = new Path(otherArgs[1]);
    outputpath.getFileSystem(conf).delete(outputpath, true);
    FileOutputFormat.setOutputPath(job, outputpath);
    System.out.println("CachePath****************" + otherArgs[2]);
    // Register the third argument as a cache file for the job.
    DistributedCache.addCacheFile(new URI(otherArgs[2]), job.getConfiguration());
    System.exit(job.waitForCompletion(true) ? 0 : 1);

But I am getting the following exception:

java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory)

So the cache functionality is not working correctly. Any ideas?

1 answer:

Answer 0: (score: 0)

Solved the problem. The file location had been given incorrectly. It works fine now.
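
For anyone hitting the same FileNotFoundException: DistributedCache.addCacheFile expects a URI that every node in the cluster can resolve, which in practice means the file should already be in HDFS. A minimal sketch of that setup step, with hypothetical local and HDFS paths, would be:

    // Copy the local file into HDFS first (paths are hypothetical),
    // then pass the HDFS path's URI as the third argument to the driver.
    FileSystem fs = FileSystem.get(job.getConfiguration());
    Path hdfsCache = new Path("/user/user12/cache/wordlist.txt");
    fs.copyFromLocalFile(new Path("/home/user12/wordlist.txt"), hdfsCache);
    DistributedCache.addCacheFile(hdfsCache.toUri(), job.getConfiguration());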