PairDStreamFunctions.mapWithState fails with java.util.NoSuchElementException: None.get when a timeout is set

Asked: 2016-02-25 06:42:18

Tags: sparkcore

Hi, I am using the mapWithState API with the timeout functionality. When the timeout interval is reached for idle data, I get the exception mentioned below.

I am using the example at https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaStatefulNetworkWordCount.java

but with two changes:

1. The org.apache.spark.api.java.Optional class is not available in 1.6, so I am using the Guava library instead (see the import sketch below).
2. I added the timeout functionality.
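
For context, on Spark 1.6 the Java state API is typed against Guava's Optional; Spark's own org.apache.spark.api.java.Optional only arrived in 2.0, which is why the imports differ from current examples. A minimal sketch of the difference:

// Spark 1.6.x: the Java mapWithState API is typed against Guava's Optional.
import com.google.common.base.Optional;
// From Spark 2.0 onward, the equivalent type is org.apache.spark.api.java.Optional.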

Below is the relevant part of the code:

// Map each word to a (word, 1) pair.
JavaPairDStream<String, Integer> wordsDstream = words.mapToPair(
        new PairFunction<String, String, Integer>() {
          @Override
          public Tuple2<String, Integer> call(String s) {
            return new Tuple2<>(s, 1);
          }
        });


// Update the cumulative count function
Function3<String, Optional<Integer>, State<Integer>, Tuple2<String, Integer>> mappingFunc =
    new Function3<String, Optional<Integer>, State<Integer>, Tuple2<String, Integer>>() {
      @Override
      public Tuple2<String, Integer> call(String word, Optional<Integer> one, State<Integer> state) {


        int sum = one.or(0) + (state.exists() ? state.get() : 0);
        Tuple2<String, Integer> output = new Tuple2<>(word, sum);
        state.update(sum);
        return output;
      }
    };


// DStream of cumulative counts that get updated in every batch
JavaMapWithStateDStream<String, Integer, Integer, Tuple2<String, Integer>> stateDstream =
    wordsDstream.mapWithState(
        StateSpec.function(mappingFunc).initialState(initialRDD).timeout(new Duration(1000)));
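
Note that initialRDD is not defined in the snippet above; in the linked example it is seeded from the streaming context, roughly as follows (a sketch following that example, not code from this post):

// Sketch of the surrounding setup; mapWithState requires a checkpoint directory.
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(1));
ssc.checkpoint(".");

// Seed the initial state for the keys "hello" and "world".
List<Tuple2<String, Integer>> tuples =
    Arrays.asList(new Tuple2<>("hello", 1), new Tuple2<>("world", 1));
JavaPairRDD<String, Integer> initialRDD = ssc.sparkContext().parallelizePairs(tuples);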

When I run the code above, I get the exception mentioned below:

16/02/25 11:41:33 ERROR Executor: Exception in task 0.0 in stage 157.0 (TID 22)
java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:313)
        at scala.None$.get(Option.scala:311)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:222)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:221)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:180)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:179)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:71)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:69)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$.updateRecordWithData(MapWithStateRDD.scala:69)
        at org.apache.spark.streaming.rdd.MapWithStateRDD.compute(MapWithStateRDD.scala:154)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
16/02/25 11:41:33 WARN TaskSetManager: Lost task 0.0 in stage 157.0 (TID 22, localhost): java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:313)
        at scala.None$.get(Option.scala:311)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:222)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:221)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:180)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:179)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:71)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:69)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$.updateRecordWithData(MapWithStateRDD.scala:69)
        at org.apache.spark.streaming.rdd.MapWithStateRDD.compute(MapWithStateRDD.scala:154)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

16/02/25 11:41:33 ERROR TaskSetManager: Task 0 in stage 157.0 failed 1 times; aborting job
16/02/25 11:41:33 ERROR JobScheduler: Error running job streaming job 1456380693000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 157.0 failed 1 times, most recent failure: Lost task 0.0 in stage 157.0 (TID 22, localhost): java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:313)
        at scala.None$.get(Option.scala:311)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:222)
        at org.apache.spark.streaming.StateSpec$$anonfun$3.apply(StateSpec.scala:221)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:180)
        at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:179)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:71)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$2.apply(MapWithStateRDD.scala:69)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$.updateRecordWithData(MapWithStateRDD.scala:69)
        at org.apache.spark.streaming.rdd.MapWithStateRDD.compute(MapWithStateRDD.scala:154)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

2 Answers:

Answer 0 (score: 1)

I believe your problem should already be fixed; see https://github.com/apache/spark/pull/11081

So you may want to try a build that contains that fix. You can get one by cloning and building the current branch-1.6 of Spark: https://github.com/apache/spark/tree/branch-1.6

Answer 1 (score: 0)

My mistake: I was using the timeout functionality with the unmodified mapping function. The timeout functionality requires a change to the mapping function passed to mapWithState: it must use the four-argument form that also receives the batch Time and returns an Optional. This is the mapping function we need to use:

Function4<Time, String, Optional<Integer>, State<Integer>, Optional<Tuple2<String, Integer>>> mappingFunc =
    new Function4<Time, String, Optional<Integer>, State<Integer>, Optional<Tuple2<String, Integer>>>() {
      @Override
      public Optional<Tuple2<String, Integer>> call(Time time, String word, Optional<Integer> one,
          State<Integer> state) throws Exception {
        // Sum the new count (if any) with the state accumulated so far.
        int sum = one.or(0) + (state.exists() ? state.get() : 0);
        Tuple2<String, Integer> output = new Tuple2<>(word, sum);
        // A state that is timing out must not be updated; just emit the final count.
        if (!state.isTimingOut()) {
          state.update(sum);
        }
        return Optional.of(output);
      }
    };
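
With the four-argument function, the call site uses the matching StateSpec.function overload; the rest of the pipeline is unchanged (a sketch reusing the variables from the question):

// StateSpec.function has an overload accepting the four-argument mapping function;
// initialState and timeout are configured exactly as before.
JavaMapWithStateDStream<String, Integer, Integer, Tuple2<String, Integer>> stateDstream =
    wordsDstream.mapWithState(
        StateSpec.function(mappingFunc).initialState(initialRDD).timeout(new Duration(1000)));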