Hadoop MapReduce - nested loop over Iterable<Text> values in the reducer ignores the result when writing it to the context

Asked: 2018-09-05 09:15:50

Tags: java hadoop mapreduce reducers

I am new to Hadoop, and I am trying to run MapReduce on a simple input file (see the example below). I am trying to use two for loops to build a kind of Cartesian product from a list of properties, but for some reason the result value I get is always empty. I have played around with it, and it only works if I set the result Text while iterating over the values (I know, that seems strange to me too). I would appreciate any help in understanding the problem; I am probably doing something wrong.

This is the input file I have:

A 1
B 2
C 1
D 2
C 2
E 1

I want to get the following output:

1 A-C, A-E, C-E
2 B-C, B-D, C-D
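
The pairing itself is just a nested loop over the letters collected for each digit. As a plain-Java sketch outside Hadoop (the method name and the dash separator are my own choices, picked to match the desired output above):

```java
import java.util.List;

public class PairBuilder {
    // Builds "A-C, A-E, C-E" from [A, C, E]: each letter is paired with
    // every letter that follows it in the list (unordered pairs, no repeats).
    public static String pairs(List<String> letters) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < letters.size(); i++) {
            for (int j = i + 1; j < letters.size(); j++) {
                if (builder.length() > 0) {
                    builder.append(", ");
                }
                builder.append(letters.get(i)).append("-").append(letters.get(j));
            }
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        System.out.println(pairs(List.of("A", "C", "E"))); // A-C, A-E, C-E
        System.out.println(pairs(List.of("B", "C", "D"))); // B-C, B-D, C-D
    }
}
```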

So I tried to implement the following MapReduce class:

public class DigitToPairOfLetters {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value :values) {
                valuesList.add(value.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

But this code gives me the following output with empty value lists:

1
2

When I add a set on result inside the for loop, it seems to work:

public class DigitToPairOfLetters {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value :values) {
                valuesList.add(value.toString());
                // TODO: We set the valuesList in the result since otherwise the
                // hadoop process will ignore the values in it.
                result.set(valuesList.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                    // TODO: We set the builder every iteration in the loop since otherwise the hadoop process will
                    // ignore the values
                    result.set(builder.toString());
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This gives me the following result:

1   [A C,A E,C E]
2   [B C,B D,C D]

Thanks for the help.

1 Answer:

Answer 0 (score: 0)

Your first approach is almost fine; you just need to add the following line:

result.set(builder.toString());

before

context.write(key, result);

just like you did in your second version.

Context.write emits the key/value pair, and since result is still an empty Text object at that point, only the key is written and the value comes out blank. So you need to set the pair string (A-E, etc.) on result before writing it. Setting it once, right before context.write, is enough; there is no need to set it on every loop iteration as in your second version.
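
A related pitfall worth knowing, since it bites the same nested-loop-over-values pattern: Hadoop reuses a single Text instance while iterating the reducer's values, so storing the objects themselves (rather than copies via toString(), as your code correctly does with valuesList) leaves the list full of identical entries. A minimal plain-Java simulation of that reuse, where MutableValue is my own stand-in for Hadoop's Text:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ReusePitfall {
    // Stand-in for Hadoop's Text: a mutable holder whose contents change.
    static class MutableValue {
        private String s = "";
        void set(String s) { this.s = s; }
        @Override public String toString() { return s; }
    }

    // Simulates the reducer's Iterable<Text>: a single holder object is
    // rewritten on every call to next(), as Hadoop's value iterator does.
    static Iterable<MutableValue> valuesOf(List<String> raw) {
        return () -> new Iterator<MutableValue>() {
            private final MutableValue holder = new MutableValue();
            private int i = 0;
            public boolean hasNext() { return i < raw.size(); }
            public MutableValue next() { holder.set(raw.get(i++)); return holder; }
        };
    }

    public static void main(String[] args) {
        List<MutableValue> references = new ArrayList<>();
        List<String> copies = new ArrayList<>();
        for (MutableValue v : valuesOf(List.of("A", "C", "E"))) {
            references.add(v);        // wrong: every entry is the same object
            copies.add(v.toString()); // right: snapshot the current contents
        }
        System.out.println(references); // [E, E, E]
        System.out.println(copies);     // [A, C, E]
    }
}
```

This is why the copy into valuesList with value.toString() is the right move before running the nested loops.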