How to use foreachPartition in Spark 2.2 to avoid a Task serialization error

Date: 2017-12-15 11:00:00

Tags: scala apache-spark apache-kafka spark-dataframe spark-streaming

I have the following working code that uses Structured Streaming (Spark 2.2) to read data from Kafka (0.10). The only issue I cannot solve is a Task serialization problem when the kafkaProducer is used inside the ForeachWriter. In the old version of this code, developed for Spark 1.6, I used foreachPartition and defined the kafkaProducer for each partition to avoid this problem (a sketch of that pattern follows the code below). How can I do the same in Spark 2.2?

val df: Dataset[String] = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "test") 
      .option("startingOffsets", "latest")
      .option("failOnDataLoss", "true")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").as[(String, String)] 
      .map(_._2)

var mySet = spark.sparkContext.broadcast(Map(
  "metadataBrokerList"->metadataBrokerList,
  "outputKafkaTopic"->outputKafkaTopic,
  "batchSize"->batchSize,
  "lingerMS"->lingerMS))

val kafkaProducer = Utils.createProducer(mySet.value("metadataBrokerList"),
                                mySet.value("batchSize"),
                                mySet.value("lingerMS"))

val writer = new ForeachWriter[String] {

    override def process(row: String): Unit = {
        // val result = ...
        val record = new ProducerRecord[String, String](mySet.value("outputKafkaTopic"), "1", result)
        kafkaProducer.send(record)
    }

    override def close(errorOrNull: Throwable): Unit = {}

    override def open(partitionId: Long, version: Long): Boolean = {
      true
    }
}

val query = df
        .writeStream
        .foreach(writer)
        .start

query.awaitTermination()

spark.stop()
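
For reference, the Spark 1.6 code mentioned above followed roughly this pattern (a sketch only: rdd stands for an RDD[String] from the old streaming job, and Utils.createProducer is assumed to return a KafkaProducer[String, String]):

rdd.foreachPartition { partition =>
  // The producer is created here, inside each partition on the executor,
  // so it never has to be serialized and shipped from the driver.
  val producer = Utils.createProducer(
    mySet.value("metadataBrokerList"),
    mySet.value("batchSize"),
    mySet.value("lingerMS"))
  partition.foreach { row =>
    producer.send(new ProducerRecord[String, String](mySet.value("outputKafkaTopic"), "1", row))
  }
  producer.close()
}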

1 Answer:

Answer 0 (score: 1):

Write a named implementation of ForeachWriter and then use it. (Avoid anonymous classes that hold non-serializable objects; in your case that is the ProducerRecord.)
Example: val writer = new YourForeachWriter[String]
There is also a useful article on Spark serialization problems here: https://www.cakesolutions.net/teamblogs/demystifying-spark-serialisation-error
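
A minimal sketch of what such a YourForeachWriter could look like, assuming the broker list and output topic are passed in as plain strings (the class name and constructor parameters are illustrative, not the answerer's actual code): the KafkaProducer is created inside open(), i.e. on the executor for each partition/epoch, so nothing non-serializable is captured in a driver-side closure.

import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.sql.ForeachWriter

class YourForeachWriter[T](brokers: String, topic: String) extends ForeachWriter[T] {

  // Created in open() on the executor; marked transient so it is never serialized.
  @transient private var producer: KafkaProducer[String, String] = _

  override def open(partitionId: Long, version: Long): Boolean = {
    val props = new Properties()
    props.put("bootstrap.servers", brokers)
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    producer = new KafkaProducer[String, String](props)
    true
  }

  override def process(value: T): Unit = {
    producer.send(new ProducerRecord[String, String](topic, "1", value.toString))
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (producer != null) producer.close()
  }
}

With the streaming query from the question, it could then be used like this:

val writer = new YourForeachWriter[String](metadataBrokerList, outputKafkaTopic)

val query = df.writeStream
  .foreach(writer)
  .start()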