Spark policy for handling multiple watermarks

Date: 2019-06-16 08:37:12

Tags: apache-spark join bigdata spark-structured-streaming

I am reading the Structured Streaming documentation.

On the one hand, if I understand it correctly, under "Policy for handling multiple watermarks" they say that if two streams have different watermarks, Spark uses the minimum of the two (the default) or the maximum (if you set it explicitly) as the global watermark, so Spark ignores the other watermark.

On the other hand, under "Inner Joins with optional Watermarking" there is an example of two streams with different watermarks, and there they say that the specified watermark is used for each stream, rather than taking the minimum or the maximum of the two as a global watermark for both.

Maybe I don't understand what they are really trying to explain under "Policy for handling multiple watermarks", because they say that if multipleWatermarkPolicy is set to max, the global watermark moves at the pace of the fastest stream, but I would expect exactly the opposite, since a larger watermark means the stream is slower.

1 Answer:

Answer 0: (score: 5)

If I understand correctly, you want to know how multiple watermarks behave in a join operation, right? To find out, I dug into the implementation a little.

The multipleWatermarkPolicy configuration is used globally

The spark.sql.streaming.multipleWatermarkPolicy property is used globally for all operations involving multiple watermarks, and its default value is min. You can verify this by looking at the WatermarkTracker#updateWatermark(executedPlan: SparkPlan) method, which is called by MicroBatchExecution#runBatch. runBatch is in turn invoked by org.apache.spark.sql.execution.streaming.StreamExecution#runStream, the class responsible for... stream execution ;)
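For reference, the property can be set when building the session. A minimal sketch, where only the property name and its "min"/"max" values come from the answer; the app name and master are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: application name and master are made up for illustration.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("multiple-watermarks-demo")
  // Default is "min"; "max" must be requested explicitly.
  .config("spark.sql.streaming.multipleWatermarkPolicy", "max")
  .getOrCreate()
```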

updateWatermark implementation

updateWatermark first collects all event-time watermark nodes from the physical plan:

val watermarkOperators = executedPlan.collect {
  case e: EventTimeWatermarkExec => e
}
if (watermarkOperators.isEmpty) return

watermarkOperators.zipWithIndex.foreach {
  case (e, index) if e.eventTimeStats.value.count > 0 =>
    logDebug(s"Observed event time stats $index: ${e.eventTimeStats.value}")
    val newWatermarkMs = e.eventTimeStats.value.max - e.delayMs
    val prevWatermarkMs = operatorToWatermarkMap.get(index)
    if (prevWatermarkMs.isEmpty || newWatermarkMs > prevWatermarkMs.get) {
      operatorToWatermarkMap.put(index, newWatermarkMs)
    }

  // Populate 0 if we haven't seen any data yet for this watermark node.
  case (_, index) =>
    if (!operatorToWatermarkMap.isDefinedAt(index)) {
      operatorToWatermarkMap.put(index, 0)
    }
}

== Physical Plan ==
WriteToDataSourceV2 org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@6a1dff1d
+- StreamingSymmetricHashJoin [mainKey#10730], [joinedKey#10733], Inner, condition = [ leftOnly = null, rightOnly = null, both = (mainEventTimeWatermark#10732-T4000ms >= joinedEventTimeWatermark#10735-T8000ms), full = (mainEventTimeWatermark#10732-T4000ms >= joinedEventTimeWatermark#10735-T8000ms) ], state info [ checkpoint = file:/tmp/temporary-3416be37-81b4-471a-b2ca-9b8f8593843a/state, runId = 17a4e028-29cb-41b0-b34b-44e20409b335, opId = 0, ver = 13, numPartitions = 200], 389000, state cleanup [ left value predicate: (mainEventTimeWatermark#10732-T4000ms <= 388999000), right = null ]
   :- Exchange hashpartitioning(mainKey#10730, 200)
   :  +- *(2) Filter isnotnull(mainEventTimeWatermark#10732-T4000ms)
   :     +- EventTimeWatermark mainEventTimeWatermark#10732: timestamp, interval 4 seconds
   :        +- *(1) Filter isnotnull(mainKey#10730)
   :           +- *(1) Project [mainKey#10730, mainEventTime#10731L, mainEventTimeWatermark#10732]
   :              +- *(1) ScanV2 MemoryStreamDataSource$[mainKey#10730, mainEventTime#10731L, mainEventTimeWatermark#10732]
   +- Exchange hashpartitioning(joinedKey#10733, 200)
      +- *(4) Filter isnotnull(joinedEventTimeWatermark#10735-T8000ms)
         +- EventTimeWatermark joinedEventTimeWatermark#10735: timestamp, interval 8 seconds
            +- *(3) Filter isnotnull(joinedKey#10733)
               +- *(3) Project [joinedKey#10733, joinedEventTime#10734L, joinedEventTimeWatermark#10735]
                  +- *(3) ScanV2 MemoryStreamDataSource$[joinedKey#10733, joinedEventTime#10734L, joinedEventTimeWatermark#10735]

To give you an idea, the physical plan above shows what a stream-to-stream join with two different watermarks looks like.
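For context, a query shaped like that plan could come from a stream-to-stream join along these lines. This is only a sketch: the input DataFrames mainStream and joinedStream are assumed to exist, and only the column names and watermark delays mirror the plan:

```scala
import org.apache.spark.sql.functions.expr

// Assumed: mainStream and joinedStream are streaming DataFrames with the
// columns that appear in the physical plan above.
val main = mainStream.withWatermark("mainEventTimeWatermark", "4 seconds")
val joined = joinedStream.withWatermark("joinedEventTimeWatermark", "8 seconds")

// Inner join with the event-time condition visible in the plan.
val result = main.join(
  joined,
  expr("mainKey = joinedKey AND mainEventTimeWatermark >= joinedEventTimeWatermark"),
  "inner")
```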

Later, updateWatermark uses one of the available watermark policies, MinWatermark or MaxWatermark, depending on the value you set in spark.sql.streaming.multipleWatermarkPolicy. The policy is resolved in the MultipleWatermarkPolicy companion object:

def apply(policyName: String): MultipleWatermarkPolicy = {
  policyName.toLowerCase match {
    case DEFAULT_POLICY_NAME => MinWatermark
    case "max" => MaxWatermark
    case _ =>
      throw new IllegalArgumentException(s"Could not recognize watermark policy '$policyName'")
  }
}
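The resolution logic can be re-created outside Spark in a few lines. This is a sketch, not Spark's actual classes (DEFAULT_POLICY_NAME is inlined as "min"); only the method names and the min/max semantics mirror the source:

```scala
// Minimal re-creation of the policy resolution shown above.
sealed trait MultipleWatermarkPolicy {
  def chooseGlobalWatermark(operatorWatermarks: Seq[Long]): Long
}
case object MinWatermark extends MultipleWatermarkPolicy {
  def chooseGlobalWatermark(ws: Seq[Long]): Long = ws.min
}
case object MaxWatermark extends MultipleWatermarkPolicy {
  def chooseGlobalWatermark(ws: Seq[Long]): Long = ws.max
}

def resolvePolicy(policyName: String): MultipleWatermarkPolicy =
  policyName.toLowerCase match {
    case "min" => MinWatermark
    case "max" => MaxWatermark
    case other =>
      throw new IllegalArgumentException(s"Could not recognize watermark policy '$other'")
  }

// With per-operator watermarks of 4000 ms and 8000 ms:
resolvePolicy("min").chooseGlobalWatermark(Seq(4000L, 8000L)) // 4000: slowest stream wins
resolvePolicy("max").chooseGlobalWatermark(Seq(4000L, 8000L)) // 8000: fastest stream wins
```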

updateWatermark then uses the resolved policy to compute the watermark to apply to the query:

// Update the global watermark to the minimum of all watermark nodes.
// This is the safest option, because only the global watermark is fault-tolerant. Making
// it the minimum of all individual watermarks guarantees it will never advance past where
// any individual watermark operator would be if it were in a plan by itself.
val chosenGlobalWatermark = policy.chooseGlobalWatermark(operatorToWatermarkMap.values.toSeq)
if (chosenGlobalWatermark > globalWatermarkMs) {
  logInfo(s"Updating event-time watermark from $globalWatermarkMs to $chosenGlobalWatermark ms")
  globalWatermarkMs = chosenGlobalWatermark
} else {
  logDebug(s"Event time watermark didn't move: $chosenGlobalWatermark < $globalWatermarkMs")
}
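Note that the global watermark only ever moves forward. A small sketch of that update step (the helper name `update` and the sample values are made up; the monotonicity check mirrors the snippet above):

```scala
// Sketch of the update step: the chosen watermark replaces the global one
// only when it is strictly larger.
var globalWatermarkMs = 0L

def update(operatorWatermarks: Seq[Long], choose: Seq[Long] => Long): Long = {
  val chosen = choose(operatorWatermarks)
  if (chosen > globalWatermarkMs) globalWatermarkMs = chosen
  globalWatermarkMs
}

update(Seq(6000L, 2000L), _.min) // 2000: the slowest operator paces the query
update(Seq(9000L, 4000L), _.min) // 4000
update(Seq(1000L, 1000L), _.min) // still 4000: the watermark never moves back
```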


Other remarks

That said, I agree that the comment in the snippet above is a bit misleading, since it says to "Update the global watermark to the minimum of all watermark nodes." (https://github.com/apache/spark/blob/v2.4.3/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/WatermarkTracker.scala#L109)

The behavior with multiple watermarks is also asserted in EventTimeWatermarkSuite. Although that test uses a UNION, you saw in the previous two parts that the watermark is updated in the same way for all operations combining several streams.
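Since the resolution does not depend on the operation, the same experiment can be run with a union instead of a join. A sketch (the input streams stream1 and stream2 are assumed; the delays are illustrative):

```scala
// Assumed: stream1 and stream2 are streaming DataFrames with an
// "eventTime" timestamp column.
val left  = stream1.withWatermark("eventTime", "10 seconds")
val right = stream2.withWatermark("eventTime", "20 seconds")

// The union is governed by the same globally resolved watermark:
// the min of both by default, the max with multipleWatermarkPolicy=max.
val unioned = left.union(right)
```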

To debug this yourself, look for the following entries in the logs:

  • eventTime - part of every "Streaming query made progress" entry (e.g. [2019-07-05 08:30:09,729] org.apache.spark.internal.Logging$class INFO Streaming query made progress), it returns all information about the executed query. In its watermark part, the value should differ if you execute the same query with min and with max multipleWatermarkPolicy.
  • Updating event-time watermark (e.g. [2019-07-05 08:30:35,685] org.apache.spark.internal.Logging$class INFO Updating event-time watermark from 0 to 6000 ms (org.apache.spark.sql.execution.streaming.WatermarkTracker:54)) - means that the watermark has just changed. As before, its value should differ depending on the min/max property.

So, to sum up: since 2.4.0 we can choose which watermark to use (min or max). Before 2.4.0, min was the only choice (SPARK-24730). And this holds independently of the operation type (inner join, outer join, and so on), because the watermark resolution works the same way for all queries.