Hadoop JobControl: second job does not run

Date: 2019-03-13 21:43:47

Tags: java hadoop mapreduce

I am trying to chain two map and reduce jobs, where the second job depends on the first. Concretely, the first MapReduce job counts movie ratings and the second job sorts those counts. For some reason, however, the second map and reduce job never runs.

My first mapper

public class MovieRatingsMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);

    @Override
    public void map(LongWritable longWritable, Text text, Context context) throws IOException, InterruptedException {
        String line = text.toString();
        String[] words = line.split("\t");

        context.write(new Text(words[1]), ONE); 
    }
}

My first reducer

public class MovieRatingsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        //System.out.println("movie rating reducer is working");
        String word = key.toString();
        int totalCount = 0;
        for (IntWritable value : values) {
            int count = value.get();
            totalCount += count;
        }

        context.write(new Text(word), new IntWritable(totalCount));
    }
}

My second mapper

public class MovieRatingsSortMapper extends Mapper<Text, Text, IntWritable, Text> {

    IntWritable frequency = new IntWritable();

    @Override
    public void map(Text key, Text value, Context context) throws IOException, InterruptedException {

        System.out.println("Movie Rating sort Mapper is working");

        int newVal = Integer.parseInt(value.toString());
        frequency.set(newVal);

        context.write(frequency, key);
    }
}

My second reducer

public class MovieRatingsSortReducer extends Reducer<IntWritable, Text, IntWritable, Text> {

    Text word = new Text();

    @Override
    public void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

        System.out.println("movie rating sort reducer is working");

        for(Text value : values) {
            word.set(value);
            context.write(key, word);
        }
    }
}

The driver with the main method

public class MovireRatingsSortDriver extends Configured implements Tool {

    public int run(String[] args) throws Exception {

        Path inputDirPath = new Path("src/main/resources/input/movieratings/");
        Path outputDirPath = new Path("src/main/resources/output/movieratings/temp/");

        Path inputDirPath2 = new Path("src/main/resources/output/movieratings/temp/");
        Path outputDirPath2 = new Path("src/main/resources/output/movieratings/");

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:/");
        conf.set("mapreduce.framework.name", "local");
        FileSystem fs = FileSystem.getLocal(conf);
        fs.delete(outputDirPath, true);
        fs.delete(outputDirPath2, true);
        fs.setWriteChecksum(false);

        JobControl jobControl = new JobControl("jobChain");
        Configuration conf1 = getConf();

        Job job1 = Job.getInstance(conf1);
        job1.setJarByClass(MovireRatingsSortDriver.class);
        job1.setJobName("MovieRatings");

        FileInputFormat.addInputPath(job1, inputDirPath);
        FileOutputFormat.setOutputPath(job1, outputDirPath);

        job1.setMapperClass(MovieRatingsMapper.class);
        job1.setCombinerClass(MovieRatingsReducer.class);
        job1.setNumReduceTasks(1);
        job1.setOutputKeyClass(Text.class);
        job1.setOutputValueClass(IntWritable.class);

        ControlledJob controlledJob1 = new ControlledJob(job1.getConfiguration());
        controlledJob1.setJob(job1);

        jobControl.addJob(controlledJob1);

        Configuration conf2 = getConf();

        Job job2 = Job.getInstance(conf2);
        job2.setJarByClass(MovireRatingsSortDriver.class);
        job2.setJobName("Sorter");

        FileInputFormat.addInputPath(job2, inputDirPath2);
        FileOutputFormat.setOutputPath(job2, outputDirPath2);

        job2.setMapperClass(MovieRatingsSortMapper.class);
        job2.setReducerClass(MovieRatingsSortReducer.class);

        job2.setOutputKeyClass(IntWritable.class);
        job2.setOutputValueClass(Text.class);
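        // KeyValueTextInputFormat splits each line of job1's output at the first tab,
        // so the movie ID becomes the Text key and its count becomes the Text value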
        job2.setInputFormatClass(KeyValueTextInputFormat.class);     

        job2.setNumReduceTasks(1);

        ControlledJob controlledJob2 = new ControlledJob(job2.getConfiguration());
        controlledJob2.setJob(job2);

        // make job2 dependent on job1
        controlledJob2.addDependingJob(controlledJob1);

        // add the job to the job control
        jobControl.addJob(controlledJob2);

        Thread jobControlThread = new Thread(jobControl);
        jobControlThread.start();

        while (!jobControl.allFinished()){
            Thread.sleep(500);
        }

        jobControl.stop();
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new MovireRatingsSortDriver(), args);
        System.exit(exitCode);

    }

}

The first mapper and reducer run fine and the result file is created under /movieratings/temp, but the second mapper never starts, without any error or other information.

Can you give me any ideas why the second job does not run? At first I thought the output types the first reducer emits did not match the input types the second mapper expects, so I changed them, but nothing changed. Now I am out of ideas.
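For reference, a minimal diagnostic sketch that could replace the plain sleep loop in run() and report why a job in the chain failed instead of exiting silently. The helper name waitAndReport is made up; the calls it uses (JobControl.getFailedJobList, ControlledJob.getJobState, ControlledJob.getMessage) come from the same org.apache.hadoop.mapreduce.lib.jobcontrol package already used in the driver.

// Sketch only: waits for the JobControl thread started above and reports any
// failed jobs instead of returning silently.
private static int waitAndReport(JobControl jobControl) throws InterruptedException {
    while (!jobControl.allFinished()) {
        Thread.sleep(500);
    }
    // A job that depends on a failed job never starts; the reason is only
    // visible through the failed-job list, not on stdout.
    for (ControlledJob failed : jobControl.getFailedJobList()) {
        System.err.println("Failed job: " + failed.getJobName()
            + ", state: " + failed.getJobState()
            + ", message: " + failed.getMessage());
    }
    jobControl.stop();
    return jobControl.getFailedJobList().isEmpty() ? 0 : 1;
}

Calling this in place of the existing while loop (after jobControlThread.start()) should at least show whether job2 actually failed or simply stayed in the waiting state.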

0 Answers:

There are no answers yet.