Spark Streaming: avoiding the checkpoint location check

Date: 2018-06-19 21:07:59

Tags: java scala apache-spark spark-streaming spark-structured-streaming

I am writing a library to integrate Apache Spark with a custom environment, and I am implementing custom streaming sources and streaming writers.

Some of the sources I am developing are not recoverable, at least not after an application crash: if the application is restarted, all the data has to be reloaded. We would therefore like to spare users from having to set the 'checkpointLocation' option explicitly. However, if that option is not provided, I see the following error:

org.apache.spark.sql.AnalysisException: checkpointLocation must be specified either through option("checkpointLocation", ...) or SparkSession.conf.set("spark.sql.streaming.checkpointLocation", ...);

However, if I use the console streaming output, everything works fine.

Is there a way to get the same behavior?

Note: we are using the Spark DataSource V2 interfaces for the stream reader/writer.


Spark log:

18/06/29 16:36:48 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/C:/mydir/spark-warehouse/').
18/06/29 16:36:48 INFO SharedState: Warehouse path is 'file:/C:/mydir/spark-warehouse/'.
18/06/29 16:36:48 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
org.apache.spark.sql.AnalysisException: checkpointLocation must be specified either through option("checkpointLocation", ...) or SparkSession.conf.set("spark.sql.streaming.checkpointLocation", ...);
    at org.apache.spark.sql.streaming.StreamingQueryManager$$anonfun$3.apply(StreamingQueryManager.scala:213)
    at org.apache.spark.sql.streaming.StreamingQueryManager$$anonfun$3.apply(StreamingQueryManager.scala:208)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:207)
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:299)
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:296)
    ...
18/06/29 16:36:50 INFO SparkContext: Invoking stop() from shutdown hook

This is how I start the streaming job:

spark.readStream().format("mysource").load()
  .writeStream().format("mywriter").outputMode(OutputMode.Append()).start();

By contrast, everything works fine if, for example, I run:

spark.readStream().format("mysource").load()
  .writeStream().format("console").outputMode(OutputMode.Append()).start();

I cannot share the full code of my data writer. Anyway, I did something like this:

// Imports below assume the Spark 2.3.x DataSource V2 write API.
import org.apache.spark.sql.Row
import org.apache.spark.sql.sources.DataSourceRegister
import org.apache.spark.sql.sources.v2.{DataSourceOptions, StreamWriteSupport}
import org.apache.spark.sql.sources.v2.writer.{DataWriterFactory, WriterCommitMessage}
import org.apache.spark.sql.sources.v2.writer.streaming.StreamWriter
import org.apache.spark.sql.streaming.OutputMode
import org.apache.spark.sql.types.StructType

class MySourceProvider extends DataSourceRegister with StreamWriteSupport {
  // Called by Spark to create the writer for a streaming query.
  def createStreamWriter(queryId: String, schema: StructType, mode: OutputMode, options: DataSourceOptions): StreamWriter = {
    new MyStreamWriter(...)
  }
  // Short name used with .format("mywriter").
  def shortName(): String = {
    "mywriter"
  }
}

class MyStreamWriter(...) extends StreamWriter {
  def abort(epochId: Long, messages: Array[WriterCommitMessage]): Unit = {}
  def commit(epochId: Long, messages: Array[WriterCommitMessage]): Unit = {}
  def createWriterFactory(): DataWriterFactory[Row] = {
    new MyDataWriterFactory()
  }
}
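
The provider is registered for Java's ServiceLoader so that format("mywriter") resolves the short name; the resource file below is the usual way to do it (the com.example package is just a placeholder for wherever MySourceProvider actually lives):

# src/main/resources/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
com.example.MySourceProvider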

1 Answer:

Answer 0 (score: 1)

You need to add a checkpointLocation in your code:

option("checkpointLocation", "/tmp/vaquarkhan/checkpoint") // <-- checkpoint directory

Example:

import org.apache.spark.sql.streaming.{OutputMode, Trigger}
import scala.concurrent.duration._
val q = records.
  writeStream.
  format("console").
  option("truncate", false).
  option("checkpointLocation", "/tmp/vaquarkhan/checkpoint"). // <-- checkpoint directory
  trigger(Trigger.ProcessingTime(10.seconds)).
  outputMode(OutputMode.Update).
  start
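
As the error message itself suggests, the checkpoint location can also be configured once per SparkSession instead of on every query; here is a minimal sketch (the /tmp/checkpoints path and the "mysource"/"mywriter" format names are only placeholders):

import org.apache.spark.sql.streaming.OutputMode

// Session-wide default: a query that does not set the option gets its own
// subdirectory under this root, named after the query or a random UUID
// (see the createQuery code quoted below).
spark.conf.set("spark.sql.streaming.checkpointLocation", "/tmp/checkpoints")

val query = spark.readStream.format("mysource").load()
  .writeStream.format("mywriter").outputMode(OutputMode.Append()).start()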

Coming back to your question, you have the following three choices for the startingOffsets option:

.option("startingOffsets", "latest") // read data from the end of the stream

  • earliest — start reading from the very beginning of the stream. This excludes data that has already been deleted from Kafka because it is older than the retention period ("aged out" data).

  • latest — start from now, processing only new data that arrives after the query has started.

  • per-partition assignment — specify the exact offset to start from for every partition, which gives precise control over where processing should begin. For example, we can use it to pick up exactly where some other system or query left off (see the sketch after this list).
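
A minimal sketch of the three variants against a Kafka source (broker address, topic name and offsets are made up for illustration; in the JSON form, -2 means earliest and -1 means latest for a partition):

val records = spark.readStream.
  format("kafka").
  option("kafka.bootstrap.servers", "localhost:9092").  // <-- example broker
  option("subscribe", "events").                         // <-- example topic
  // option("startingOffsets", "earliest")               // read the whole retained history
  // option("startingOffsets", "latest")                 // read only data arriving from now on
  option("startingOffsets", """{"events":{"0":23,"1":-2}}""").  // exact offset per partition
  load()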

If no directory name can be found for the checkpoint location, createQuery reports an AnalysisException:

checkpointLocation must be specified either through option("checkpointLocation", ...) or SparkSession.conf.set("spark.sql.streaming.checkpointLocation", ...)

Here is the Apache Spark code:

  private def createQuery(
      userSpecifiedName: Option[String],
      userSpecifiedCheckpointLocation: Option[String],
      df: DataFrame,
      extraOptions: Map[String, String],
      sink: BaseStreamingSink,
      outputMode: OutputMode,
      useTempCheckpointLocation: Boolean,
      recoverFromCheckpointLocation: Boolean,
      trigger: Trigger,
      triggerClock: Clock): StreamingQueryWrapper = {
    var deleteCheckpointOnStop = false
    val checkpointLocation = userSpecifiedCheckpointLocation.map { userSpecified =>
      new Path(userSpecified).toUri.toString
    }.orElse {
      df.sparkSession.sessionState.conf.checkpointLocation.map { location =>
        new Path(location, userSpecifiedName.getOrElse(UUID.randomUUID().toString)).toUri.toString
      }
    }.getOrElse {
      if (useTempCheckpointLocation) {
        // Delete the temp checkpoint when a query is being stopped without errors.
        deleteCheckpointOnStop = true
        Utils.createTempDir(namePrefix = s"temporary").getCanonicalPath
      } else {
        throw new AnalysisException(
          "checkpointLocation must be specified either " +
            """through option("checkpointLocation", ...) or """ +
            s"""SparkSession.conf.set("${SQLConf.CHECKPOINT_LOCATION.key}", ...)""")
      }
    }
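
As the excerpt shows, the exception is only raised when useTempCheckpointLocation is false. In Spark 2.3, DataStreamWriter.start passes useTempCheckpointLocation = true for built-in sinks such as console, so they fall back to a temporary checkpoint directory that is deleted on a clean stop; that is why your console run works without the option, while a custom DataSource V2 writer goes through the regular path and must be given a checkpoint location, either per query or via the session-wide spark.sql.streaming.checkpointLocation default.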