Spark Kafka streaming 0.8, receiver-based KafkaUtils.createStream: does withWatermark save offsets to Zookeeper?

Posted: 2019-03-18 13:13:10

Tags: apache-spark apache-kafka spark-streaming apache-zookeeper

I am obliged to use

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>

with the deprecated API:

val kafkaStream = KafkaUtils.createStream(streamingContext, zkArgs, consumerGroupId, topicMap)

kafkaStream.foreachRDD(rdd => {

  // note: build the SQLContext from the RDD's own SparkContext
  val sqlContext = new SQLContext(rdd.sparkContext)
  ...
})

I learned that manual watermarking is done like this:

//      enabling watermarking upon success
val sparkConf = new SparkConf()
  ....
  .set("zookeeper.hosts", zkArgs)
  .set("enable.auto.commit", "false")
  ....

df.withWatermark("eventTime", "10 minutes")
  .write .....

Following the trail of classes, I ended up in classes such as EventTimeWatermark...

Elsewhere I read that I should write the offsets myself, something like:

import kafka.utils.ZkUtils
import org.I0Itec.zkclient.ZkClient
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.kafka.HasOffsetRanges

// store each partition's starting offset in Zookeeper as "partition:fromOffset,..."
def saveOffsets(zkClient: ZkClient, zkPath: String, rdd: RDD[_]): Unit = {
  val offsetsRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  val offsetsRangesStr = offsetsRanges
    .map(offsetRange => s"${offsetRange.partition}:${offsetRange.fromOffset}")
    .mkString(",")

  ZkUtils.updatePersistentPath(zkClient, zkPath, offsetsRangesStr)
}
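For completeness, a minimal sketch of the reverse step (the object and method names here are my own, chosen to mirror the "partition:fromOffset" string format that saveOffsets writes above): parsing the stored string back into per-partition offsets before restarting the stream.

```scala
// Hypothetical counterpart to saveOffsets above: decode the
// "partition:fromOffset,partition:fromOffset" string read back from Zookeeper.
object OffsetCodec {
  def decode(offsetsRangesStr: String): Map[Int, Long] =
    offsetsRangesStr.split(",").map { pair =>
      val Array(partition, fromOffset) = pair.split(":")
      partition.toInt -> fromOffset.toLong
    }.toMap
}
```

The resulting map can then be turned into the fromOffsets argument of KafkaUtils.createDirectStream to resume from the saved positions.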

df.withWatermark("eventTime", "10 minutes")
      .write

..... eventually update the watermark in Zookeeper? Or is there some other mechanism at work when running Spark on a cluster?

1 answer:

Answer 0 (score: 1)

Since watermarking happens only inside Spark Streaming, late messages picked up from Kafka are simply ignored within Spark.
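To make that concrete, here is a toy illustration in plain Scala (not Spark's actual implementation, which tracks this per trigger in EventTimeWatermark) of what a "10 minutes" watermark means: the watermark trails the maximum event time seen so far by the stated delay, and events older than the watermark are dropped.

```scala
// Simplified model of event-time watermarking: events whose event time
// falls behind (max event time seen - delay) are considered too late.
case class Event(id: String, eventTimeMs: Long)

def dropLateEvents(events: Seq[Event], delayMs: Long): Seq[Event] = {
  val watermark = events.map(_.eventTimeMs).max - delayMs
  events.filter(_.eventTimeMs >= watermark)
}
```

Note that nothing here touches Kafka or Zookeeper: the decision is made purely on event timestamps inside the engine.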

Kafka offsets are updated as messages are read: the receiver-based createStream uses Kafka's high-level consumer, which commits offsets to Zookeeper for the consumer group independently of any watermarking.

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#handling-late-data-and-watermarking