TotalOrderPartitioner ignores partition file location

Date: 2014-07-31 05:51:42

Tags: java sorting hadoop mapreduce

I am trying to build a simple sorting example with TotalOrderPartitioner. The input is a sequence file with IntWritable keys and NullWritable values, and I want to sort on the key. The output is also a sequence file with IntWritable keys and NullWritable values. I am running the job on a cluster. This is my driver class:

public class SortDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = this.getConf();

        Job job = Job.getInstance(conf);
        job.setJobName("SORT-WITH-TOTAL-ORDER-PARTITIONER");
        job.setJarByClass(SortDriver.class);
        job.setInputFormatClass(SequenceFileInputFormat.class);
        SequenceFileInputFormat.setInputPaths(job, new Path("/user/client/seq-input"));
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setMapperClass(SortMapper.class);
        job.setReducerClass(SortReducer.class);
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(conf, new Path("/user/client/partition.lst"));
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
        SequenceFileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
        SequenceFileOutputFormat.setOutputPath(job, new Path("/user/client/sorted-output"));
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(NullWritable.class);
        job.setNumReduceTasks(3);

        InputSampler.Sampler<IntWritable, NullWritable> sampler = new InputSampler.RandomSampler<>(0.1, 200);
        InputSampler.writePartitionFile(job, sampler);

        boolean res = job.waitForCompletion(true);

        return res ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new SortDriver(), args));
    }
}

The Mapper class:

public class SortMapper extends Mapper<IntWritable, NullWritable, IntWritable, NullWritable>{

    @Override
    protected void map(IntWritable key, NullWritable value, Context context) throws IOException, InterruptedException {
        context.write(key, value);
    }
}

The Reducer class:

public class SortReducer extends Reducer<IntWritable, NullWritable, IntWritable, NullWritable> {

    @Override
    protected void reduce(IntWritable key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        context.write(key, NullWritable.get());
    }
}

When I start the job, I get:

Error: java.lang.IllegalArgumentException: Can't read partitions file
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:116)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:678)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:747)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.FileNotFoundException: File file:/grid/hadoop/yarn/local/usercache/client/appcache/application_1406784047304_0002/container_1406784047304_0002_01_000003/_partition.lst does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:511)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:724)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:501)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1749)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1773)
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:301)
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)
    ... 10 more

I found a partition file named _partition.lst in my home directory (/user/client). That file name does not match the one set in the code, TotalOrderPartitioner.setPartitionFile(conf, new Path("/user/client/partition.lst"));. Can anyone help me solve this problem? I am using Hadoop 2.4 from the HDP 2.1 distribution.

3 answers:

Answer 0 (score: 2)

I think the problem is this line:

TotalOrderPartitioner.setPartitionFile(conf, new Path("/user/client/partition.lst"));

You have to replace it with:

TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), new Path("/user/client/partition.lst"));

because you are using

InputSampler.writePartitionFile(job, sampler);

Otherwise, try replacing only that last line with

InputSampler.writePartitionFile(conf, sampler);

But I am not sure whether that works the same way with the new API.

Hope it helps! Good luck!
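To see why the accepted fix works: Job.getInstance(conf) takes a private copy of the Configuration it is given, so anything set on the original conf afterwards never reaches the job; the sampler then writes to the default _partition.lst, while the map tasks look for a file that was never distributed. Below is a minimal pure-Java analogy of that copy-on-construction behaviour; FakeJob and the property key "partition.file" are hypothetical stand-ins, not Hadoop classes:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfCopyDemo {
    // FakeJob is a hypothetical stand-in for Hadoop's Job: like
    // Job.getInstance(conf), its factory method takes a private copy
    // of the settings handed to it.
    static class FakeJob {
        private final Map<String, String> conf;

        static FakeJob getInstance(Map<String, String> original) {
            return new FakeJob(original);
        }

        private FakeJob(Map<String, String> original) {
            this.conf = new HashMap<>(original); // defensive copy
        }

        String get(String key, String defaultValue) {
            return conf.getOrDefault(key, defaultValue);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        FakeJob job = FakeJob.getInstance(conf);

        // Setting the path on the ORIGINAL map after the job was created
        // never reaches the job's private copy -- the same way the call
        // TotalOrderPartitioner.setPartitionFile(conf, ...) was lost.
        conf.put("partition.file", "/user/client/partition.lst");

        // The job still sees only the default value.
        System.out.println(job.get("partition.file", "_partition.lst"));
        // prints _partition.lst
    }
}
```

This is why setting the path through job.getConfiguration() works: that is the live Configuration the job (and the sampler) actually reads.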

Answer 1 (score: 0)

I also ran into this error when using Hadoop MapReduce without having installed and started the MapReduce service. After I installed and started it, the exception went away.

Answer 2 (score: 0)

I got this error when I had job.setNumReduceTasks(3); and was running my code in standalone mode.

Changing it to job.setNumReduceTasks(1) made it work fine in standalone mode.
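One way to sidestep the standalone-mode limitation described above is to pick the reducer count from the configured execution framework. This is only a sketch: chooseReducers is a hypothetical helper, although "mapreduce.framework.name" is the real Hadoop 2.x property key (its value is "local" in standalone mode and "yarn" on a cluster). Plain java.util.Properties stands in for Hadoop's Configuration so the example is self-contained:

```java
import java.util.Properties;

public class ReducerCountDemo {
    // Hypothetical helper: the local job runner in standalone mode effectively
    // supports a single reducer, so fall back to 1 whenever the configured
    // framework is "local"; otherwise use the desired count.
    static int chooseReducers(Properties conf, int desired) {
        String framework = conf.getProperty("mapreduce.framework.name", "local");
        return "local".equals(framework) ? 1 : desired;
    }

    public static void main(String[] args) {
        Properties standalone = new Properties(); // key unset -> defaults to local

        Properties cluster = new Properties();
        cluster.setProperty("mapreduce.framework.name", "yarn");

        System.out.println(chooseReducers(standalone, 3)); // prints 1
        System.out.println(chooseReducers(cluster, 3));    // prints 3
    }
}
```

In a real driver the result would be passed to job.setNumReduceTasks(...), so the same code runs unchanged in standalone mode and on a cluster.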