Subdirectories under the checkpoint directory in Spark Structured Streaming

Date: 2019-04-16 09:14:32

Tags: apache-spark spark-structured-streaming

The checkpoint directory of a Spark Structured Streaming query contains four subdirectories. What is each of them used for?

/warehouse/test_topic/checkpointdir1/commits
/warehouse/test_topic/checkpointdir1/metadata
/warehouse/test_topic/checkpointdir1/offsets
/warehouse/test_topic/checkpointdir1/sources

1 Answer:

Answer 0 (score: 0)

From the documentation in the StreamExecution class:

/**
   * A write-ahead-log that records the offsets that are present in each batch. In order to ensure
   * that a given batch will always consist of the same data, we write to this log *before* any
   * processing is done.  Thus, the Nth record in this log indicated data that is currently being
   * processed and the N-1th entry indicates which offsets have been durably committed to the sink.
   */
  val offsetLog = new OffsetSeqLog(sparkSession, checkpointFile("offsets"))

  /**
   * A log that records the batch ids that have completed. This is used to check if a batch was
   * fully processed, and its output was committed to the sink, hence no need to process it again.
   * This is used (for instance) during restart, to help identify which batch to run next.
   */
  val commitLog = new CommitLog(sparkSession, checkpointFile("commits"))
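The restart behavior these two comments describe can be sketched in plain Python: the next batch to run is the latest batch id in the offsets log, and it only needs re-processing if it is missing from the commit log. This is an illustrative sketch, not Spark's actual API; the function and parameter names are made up:

```python
def next_batch_to_run(offset_log_ids, commit_log_ids):
    """Decide which batch a restarted query should (re)run, given the
    batch ids present in the offsets log and in the commit log.

    Invariant from StreamExecution: offsets for batch N are written
    *before* processing, so on restart the latest offsets entry may
    describe a batch that was never committed to the sink.
    """
    if not offset_log_ids:
        return 0  # fresh query: start from batch 0
    latest = max(offset_log_ids)
    if latest in commit_log_ids:
        return latest + 1  # latest batch fully committed: move on
    return latest  # crashed mid-batch: re-run it with the same offsets

# Batch 2 has offsets recorded but no commit entry, so it is re-run.
print(next_batch_to_run([0, 1, 2], [0, 1]))  # 2
# All batches committed: the next batch id is 3.
print(next_batch_to_run([0, 1, 2], [0, 1, 2]))  # 3
```

Because the offsets were logged before processing began, re-running the interrupted batch reads exactly the same data, which is what gives each batch its deterministic contents.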

A metadata log is used to persist information about the query. For example, KafkaSource uses one (written under the `sources` subdirectory) to record the query's initial starting offsets (one offset per topic partition).
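These logs are stored as plain-text files, one file per batch id, typically beginning with a version line followed by JSON. As a rough sketch of how such a file could be inspected, here is a parser over made-up sample content; the exact layout is an assumption and can differ between Spark versions:

```python
import json

# Illustrative content of a file like checkpointdir1/offsets/2.
# Assumed layout: a version line, a batch-metadata JSON line, then one
# JSON line per source with its offsets (for Kafka: one per partition).
sample = """v1
{"batchWatermarkMs":0,"batchTimestampMs":1555398872000}
{"test_topic":{"0":42,"1":17}}"""

def parse_offsets_file(text):
    """Split an offsets-log file into version, batch metadata, and the
    per-source offset maps."""
    lines = text.splitlines()
    version = lines[0]
    batch_metadata = json.loads(lines[1])
    source_offsets = [json.loads(line) for line in lines[2:]]
    return version, batch_metadata, source_offsets

version, meta, offsets = parse_offsets_file(sample)
print(version)                        # v1
print(offsets[0]["test_topic"]["0"])  # 42
```

Reading these files directly can be handy for debugging which offsets a stalled query last recorded, but they are an internal format, so treat any parsing like this as best-effort.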