MapWithState gives java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast when recovering from a checkpoint

Asked: 2017-08-01 16:54:06

Tags: apache-spark serialization spark-streaming broadcast checkpointing

I am facing an issue with a Spark Streaming job in which I am trying to use broadcast, mapWithState and checkpointing together in Spark.

Here is the usage:

  • Since I have to pass some connection objects (which are not Serializable) to the executors, I am using org.apache.spark.broadcast.Broadcast
  • Since we have to maintain some cached information, I am using stateful streams with mapWithState
  • I am also using checkpointing of my streaming context

I also need to pass the broadcasted connection object into mapWithState in order to fetch some data from an external source.

The flow works fine when the context is created newly. However, when I crash the application and try to recover from the checkpoint, I get a ClassCastException.

I have added a small code snippet based on the example from asyncified.io to reproduce the issue on GitHub:


  • My broadcast logic is in yuvalitzchakov.utils.KafkaWriter.scala
  • The dummy logic of the application is in yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast.scala

Dummy snippet of the code:

val sparkConf = new SparkConf().setMaster("local[*]").setAppName("spark-stateful-example")

...
val prop = new Properties()
...

val config: Config = ConfigFactory.parseString(prop.toString)
val sc = new SparkContext(sparkConf)
val ssc = StreamingContext.getOrCreate(checkpointDir, () =>  {

    println("creating context newly")

    clearCheckpoint(checkpointDir)

    val streamingContext = new StreamingContext(sc, Milliseconds(batchDuration))
    streamingContext.checkpoint(checkpointDir)

    ...
    val kafkaWriter = SparkContext.getOrCreate().broadcast(kafkaErrorWriter)
    ...
    val stateSpec = StateSpec.function((key: Int, value: Option[UserEvent], state: State[UserSession]) =>
        updateUserEvents(key, value, state, kafkaWriter)).timeout(Minutes(jobConfig.getLong("timeoutInMinutes")))

    kafkaTextStream
    .transform(rdd => {
        offsetsQueue.enqueue(rdd.asInstanceOf[HasOffsetRanges].offsetRanges)
        rdd
    })
    .map(deserializeUserEvent)
    .filter(_ != UserEvent.empty)
    .mapWithState(stateSpec)
    .foreachRDD { rdd =>
        ...
        // some logic
        ...
    }

    streamingContext
})

ssc.start()
ssc.awaitTermination()


def updateUserEvents(key: Int,
                     value: Option[UserEvent],
                     state: State[UserSession],
                     kafkaWriter: Broadcast[KafkaWriter]): Option[UserSession] = {

    ...
    kafkaWriter.value.someMethodCall()
    ...
}

I get the following error when kafkaWriter.value.someMethodCall() gets executed:

17/08/01 21:20:38 ERROR Executor: Exception in task 2.0 in stage 3.0 (TID 4)
java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to yuvalitzchakov.utils.KafkaWriter
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$.updateUserSessions$1(SparkStatefulRunnerWithBroadcast.scala:144)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$.updateUserEvents(SparkStatefulRunnerWithBroadcast.scala:150)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$$anonfun$2.apply(SparkStatefulRunnerWithBroadcast.scala:78)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$$anonfun$2.apply(SparkStatefulRunnerWithBroadcast.scala:77)
    at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:181)
    at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:180)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$1.apply(MapWithStateRDD.scala:57)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$1.apply(MapWithStateRDD.scala:55)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$.updateRecordWithData(MapWithStateRDD.scala:55)
    at org.apache.spark.streaming.rdd.MapWithStateRDD.compute(MapWithStateRDD.scala:159)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1005)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:996)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:936)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:996)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:700)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Basically kafkaWriter is the broadcast variable, and kafkaWriter.value should return the broadcasted object, but instead it returns a SerializableConfiguration, which cannot be cast to the expected type.

Thanks in advance for the help!

1 Answer:

Answer 0 (score: 0)

Broadcast variables cannot be used with mapWithState (or with transformation operations in general) if we need to recover from the checkpoint directory in Spark Streaming. They can only be used inside output operations in that case, because lazily re-initializing the broadcast requires the Spark context.
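
Below is a minimal sketch of that workaround, modeled on the lazily initialized broadcast singleton used in Spark's recoverable-streaming examples. The KafkaWriterHolder object, the makeWriter parameter and the foreachRDD body are illustrative assumptions and not part of the question's code; KafkaWriter is the class from the question.

import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

object KafkaWriterHolder {

  @volatile private var instance: Broadcast[KafkaWriter] = _

  // Lazily (re)creates the broadcast with whatever SparkContext is active,
  // so after a restart it is rebuilt on the driver instead of being
  // deserialized from the checkpoint data.
  def getInstance(sc: SparkContext, makeWriter: => KafkaWriter): Broadcast[KafkaWriter] = {
    if (instance == null) {
      synchronized {
        if (instance == null) {
          instance = sc.broadcast(makeWriter)
        }
      }
    }
    instance
  }
}

// Used inside an output operation instead of being captured by stateSpec:
kafkaTextStream
  .map(deserializeUserEvent)
  .filter(_ != UserEvent.empty)
  .mapWithState(stateSpec)               // stateSpec no longer references kafkaWriter
  .foreachRDD { rdd =>
    val writer = KafkaWriterHolder.getInstance(rdd.sparkContext, kafkaErrorWriter)
    rdd.foreachPartition { events =>
      events.foreach(_ => writer.value.someMethodCall())
    }
  }

Because getInstance runs on the driver when each batch is scheduled, the broadcast is recreated with the live SparkContext after recovery rather than restored from the checkpoint, which is what avoids the ClassCastException in this setup.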
