Spark Java accumulator not incrementing

Asked: 2016-06-01 04:13:58

Tags: java apache-spark bigdata

Taking baby steps with Spark in Java. Below is a word count program that takes a list of stop words and skips any word on that list. I have two accumulators to count the skipped and unskipped words.

However, the System.out.println at the end of the program always reports both accumulator values as 0.

Please point out where I am going wrong.

public static void main(String[] args) throws FileNotFoundException {

    SparkConf conf = new SparkConf();
    conf.setAppName("Third App - Word Count WITH BroadCast and Accumulator");
    JavaSparkContext jsc = new JavaSparkContext(conf);

    // Read the input file and split each line into words
    JavaRDD<String> fileRDD = jsc.textFile("hello.txt");
    JavaRDD<String> words = fileRDD.flatMap(new FlatMapFunction<String, String>() {

        public Iterable<String> call(String aLine) throws Exception {
            return Arrays.asList(aLine.split(" "));
        }
    });

    String[] stopWordArray = getStopWordArray();

    // Accumulators for counting skipped (stop) words and kept words
    final Accumulator<Integer> skipAccumulator = jsc.accumulator(0);
    final Accumulator<Integer> unSkipAccumulator = jsc.accumulator(0);

    final Broadcast<String[]> stopWordBroadCast = jsc.broadcast(stopWordArray);

    // Keep only words that are not in the broadcast stop-word list
    JavaRDD<String> filteredWords = words.filter(new Function<String, Boolean>() {

        public Boolean call(String inString) throws Exception {
            boolean filterCondition = !Arrays.asList(stopWordBroadCast.getValue()).contains(inString);
            if (!filterCondition) {
                System.out.println("Filtered a stop word ");
                skipAccumulator.add(1);
            } else {
                unSkipAccumulator.add(1);
            }
            return filterCondition;
        }
    });

    System.out.println("$$$$$$$$$$$$$$$Filtered Count " + skipAccumulator.value());
    System.out.println("$$$$$$$$$$$$$$$ UN Filtered Count " + unSkipAccumulator.value());

    /* rest of code - works fine */
    jsc.stop();
    jsc.close();
}

I am building a runnable jar and submitting the job on Hortonworks Sandbox 2.4 with:

    spark-submit jarname
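
For context, a fuller invocation usually also names the main class and the cluster manager. A sketch assuming the job runs on YARN; the class and jar names below are hypothetical placeholders, not details from the question:

    # main class and jar name are placeholders - substitute your own
    spark-submit \
      --class com.example.WordCount \
      --master yarn \
      wordcount.jar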

EDIT

Here is the rest of the code, from the commented section:

JavaPairRDD<String, Integer> wordOccurrence = filteredWords.mapToPair(new PairFunction<String, String, Integer>() {

    public Tuple2<String, Integer> call(String inWord) throws Exception {
        return new Tuple2<String, Integer>(inWord, 1);
    }
});

JavaPairRDD<String, Integer> summed = wordOccurrence.reduceByKey(new Function2<Integer, Integer, Integer>() {

    public Integer call(Integer a, Integer b) throws Exception {
        return a + b;
    }
});

summed.saveAsTextFile("hello-out");

1 Answer:

Answer 0 (score: 1)

You left out the important part: /* rest of code - works fine */. I can guarantee that you call some action in that remaining code, and that action is what triggers the DAG to execute the code containing the accumulators. Try adding a filteredWords.collect() before the println calls, and you should see the output. Remember that Spark is lazy with transformations and only executes them when an action is called.
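
A minimal sketch of that suggestion, reusing the variables from the question; collect() is one choice of action, and count() would work just as well:

    // An action forces the lazy filter transformation to actually run,
    // which is what populates the accumulators on the executors.
    filteredWords.collect();

    // Now the driver-side reads return the real counts
    System.out.println("$$$$$$$$$$$$$$$Filtered Count " + skipAccumulator.value());
    System.out.println("$$$$$$$$$$$$$$$ UN Filtered Count " + unSkipAccumulator.value());

Alternatively, moving the two println calls to after the saveAsTextFile call would also work, since saveAsTextFile is an action and triggers the same execution.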