I've been playing with Spark Structured Streaming and mapGroupsWithState (specifically following the StructuredSessionization example in the Spark source code). Given my use case, I want to confirm some limitations I believe exist with mapGroupsWithState.
A session, for my purposes, is an uninterrupted group of user activity such that no two chronologically ordered (by event time, not processing time) events are separated by more than some developer-defined duration (commonly 30 minutes).
Before jumping into code, an example will help:
{"event_time": "2018-01-01T00:00:00", "user_id": "mike"}
{"event_time": "2018-01-01T00:01:00", "user_id": "mike"}
{"event_time": "2018-01-01T00:05:00", "user_id": "mike"}
{"event_time": "2018-01-01T00:45:00", "user_id": "mike"}
For the stream above, a session is defined with a 30-minute period of inactivity. In a streaming context, we should end up with one session (the second is not yet complete):
[
{
"user_id": "mike",
"startTimestamp": "2018-01-01T00:00:00",
"endTimestamp": "2018-01-01T00:05:00"
}
]
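To make the gap rule concrete before the streaming version, here is a minimal batch sessionizer over sorted event times (a sketch, not part of the driver below; `sessionize` and `gapMs` are illustrative names):

```scala
// Batch (non-streaming) sketch of the 30-minute gap rule: split a sorted
// sequence of event-time millis into sessions whenever the gap between two
// consecutive events exceeds the configured inactivity duration.
val gapMs = 30L * 60 * 1000

def sessionize(sortedTimesMs: Seq[Long]): Seq[(Long, Long)] =
  sortedTimesMs.foldLeft(List.empty[(Long, Long)]) {
    // First event starts the first session.
    case (Nil, t) => List((t, t))
    // Within the gap of the current session: extend its end.
    case ((start, end) :: rest, t) if t - end <= gapMs => (start, t) :: rest
    // Gap exceeded: start a new session.
    case (sessions, t) => (t, t) :: sessions
  }.reverse

// The four example events, as minutes-since-midnight for readability:
val minute = 60L * 1000
val events = Seq(0L, 1 * minute, 5 * minute, 45 * minute)
// sessionize(events) == Seq((0L, 5 * minute), (45 * minute, 45 * minute))
// i.e. one closed session [00:00, 00:05] and a second starting at 00:45.
```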
Now consider the following Spark driver program:
import java.sql.Timestamp

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.execution.streaming.MemoryStream
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

object StructuredSessionizationV2 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .master("local[2]")
      .appName("StructredSessionizationRedux")
      .getOrCreate()
    spark.sparkContext.setLogLevel("WARN")

    import spark.implicits._
    implicit val ctx = spark.sqlContext

    val input = MemoryStream[String]

    val EVENT_SCHEMA = new StructType()
      .add($"event_time".string)
      .add($"user_id".string)

    val events = input.toDS()
      .select(from_json($"value", EVENT_SCHEMA).alias("json"))
      .select($"json.*")
      .withColumn("event_time", to_timestamp($"event_time"))
      .withWatermark("event_time", "1 hours")
    events.printSchema()

    val sessionized = events
      .groupByKey(row => row.getAs[String]("user_id"))
      .mapGroupsWithState[SessionState, SessionOutput](GroupStateTimeout.EventTimeTimeout) {
        case (userId: String, events: Iterator[Row], state: GroupState[SessionState]) =>
          println(s"state update for user ${userId} (current watermark: ${new Timestamp(state.getCurrentWatermarkMs())})")
          if (state.hasTimedOut) {
            println(s"User ${userId} has timed out, sending final output.")
            val finalOutput = SessionOutput(
              userId = userId,
              startTimestampMs = state.get.startTimestampMs,
              endTimestampMs = state.get.endTimestampMs,
              durationMs = state.get.durationMs,
              expired = true
            )
            // Drop this user's state
            state.remove()
            finalOutput
          } else {
            val timestamps = events.map(_.getAs[Timestamp]("event_time").getTime).toSeq
            println(s"User ${userId} has new events (min: ${new Timestamp(timestamps.min)}, max: ${new Timestamp(timestamps.max)}).")
            val newState = if (state.exists) {
              println(s"User ${userId} has existing state.")
              val oldState = state.get
              SessionState(
                startTimestampMs = math.min(oldState.startTimestampMs, timestamps.min),
                endTimestampMs = math.max(oldState.endTimestampMs, timestamps.max)
              )
            } else {
              println(s"User ${userId} has no existing state.")
              SessionState(
                startTimestampMs = timestamps.min,
                endTimestampMs = timestamps.max
              )
            }
            state.update(newState)
            state.setTimeoutTimestamp(newState.endTimestampMs, "30 minutes")
            println(s"User ${userId} state updated. Timeout now set to ${new Timestamp(newState.endTimestampMs + (30 * 60 * 1000))}")
            SessionOutput(
              userId = userId,
              startTimestampMs = state.get.startTimestampMs,
              endTimestampMs = state.get.endTimestampMs,
              durationMs = state.get.durationMs,
              expired = false
            )
          }
      }

    val eventsQuery = sessionized
      .writeStream
      .queryName("events")
      .outputMode("update")
      .format("console")
      .start()

    input.addData(
      """{"event_time": "2018-01-01T00:00:00", "user_id": "mike"}""",
      """{"event_time": "2018-01-01T00:01:00", "user_id": "mike"}""",
      """{"event_time": "2018-01-01T00:05:00", "user_id": "mike"}"""
    )
    input.addData(
      """{"event_time": "2018-01-01T00:45:00", "user_id": "mike"}"""
    )
    eventsQuery.processAllAvailable()
  }

  case class SessionState(startTimestampMs: Long, endTimestampMs: Long) {
    def durationMs: Long = endTimestampMs - startTimestampMs
  }

  case class SessionOutput(userId: String, startTimestampMs: Long, endTimestampMs: Long, durationMs: Long, expired: Boolean)
}
The output of this program is:
root
|-- event_time: timestamp (nullable = true)
|-- user_id: string (nullable = true)
state update for user mike (current watermark: 1969-12-31 19:00:00.0)
User mike has new events (min: 2018-01-01 00:00:00.0, max: 2018-01-01 00:05:00.0).
User mike has no existing state.
User mike state updated. Timeout now set to 2018-01-01 00:35:00.0
-------------------------------------------
Batch: 0
-------------------------------------------
+------+----------------+--------------+----------+-------+
|userId|startTimestampMs|endTimestampMs|durationMs|expired|
+------+----------------+--------------+----------+-------+
| mike| 1514782800000| 1514783100000| 300000| false|
+------+----------------+--------------+----------+-------+
state update for user mike (current watermark: 2017-12-31 23:05:00.0)
User mike has new events (min: 2018-01-01 00:45:00.0, max: 2018-01-01 00:45:00.0).
User mike has existing state.
User mike state updated. Timeout now set to 2018-01-01 01:15:00.0
-------------------------------------------
Batch: 1
-------------------------------------------
+------+----------------+--------------+----------+-------+
|userId|startTimestampMs|endTimestampMs|durationMs|expired|
+------+----------------+--------------+----------+-------+
| mike| 1514782800000| 1514785500000| 2700000| false|
+------+----------------+--------------+----------+-------+
By my session definition, the single event in the second batch should trigger the expiry of the session state and hence a new session. However, since the watermark (2017-12-31 23:05:00.0) has not passed the state's timeout (2018-01-01 00:35:00.0), the state is not expired and the event is erroneously merged into the existing session, despite more than 30 minutes having elapsed since the latest timestamp in the previous batch.
As far as I can tell, the only way for the session state to expire is to hope that enough events from other users arrive within a batch to push the watermark past mike's state timeout.
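The watermark arithmetic behind that observation can be sketched in isolation (the helper and constants below are illustrative, not Spark API):

```scala
// With a 1-hour watermark delay, the watermark trails the maximum observed
// event time by the delay, regardless of which user produced the event.
// Times below are millis since 2018-01-01T00:00:00 for readability.
val minuteMs  = 60L * 1000
val delayMs   = 60 * minuteMs   // .withWatermark("event_time", "1 hours")
val timeoutMs = 35 * minuteMs   // mike's session end 00:05 + 30-minute gap

def watermark(maxEventTimeMs: Long): Long = maxEventTimeMs - delayMs

// The 00:45 event only moves the watermark to 23:45 the previous day,
// still short of the 00:35 timeout, so the state cannot expire:
assert(watermark(45 * minuteMs) < timeoutMs)
// An event (from any user) after 01:35 would push the watermark past it:
assert(watermark(96 * minuteMs) > timeoutMs)
```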
I suppose one could also tamper with the stream's watermark, but I can't think of a way to do that which would accomplish the use case.
Is this accurate? Am I missing anything about how to properly do event-time-based sessionization in Spark?
Answer 0 (score: 1)
The implementation you've provided does not appear to work when the watermark interval is greater than the session gap duration.
For the logic you've shown to work, you would need to set the watermark interval to < 30 minutes.
If you really want the watermark interval to be independent of (or greater than) the session gap duration, you need to wait until the watermark passes (watermark + gap) before expiring the state. Your merge logic appears to blindly merge windows; it should take the gap duration into account before merging.
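A gap-aware merge could be sketched like this (assuming this is what "take the gap duration into account before merging" means; `mergeOrSplit` is an illustrative name, not part of the original driver):

```scala
case class SessionState(startTimestampMs: Long, endTimestampMs: Long)

val gapMs = 30L * 60 * 1000

// Returns the new state plus, when the gap was exceeded, the session that
// just closed and should be emitted as final output.
def mergeOrSplit(
    old: SessionState,
    batchMinMs: Long,
    batchMaxMs: Long): (SessionState, Option[SessionState]) =
  if (batchMinMs - old.endTimestampMs <= gapMs)
    // Within the gap: extend the existing session.
    (SessionState(old.startTimestampMs, math.max(old.endTimestampMs, batchMaxMs)), None)
  else
    // Gap exceeded: close the old session and start a new one.
    (SessionState(batchMinMs, batchMaxMs), Some(old))
```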
Answer 1 (score: -2)
EDIT: I realize I should answer the specific question as asked, rather than provide a complete solution.
To add to Arun's answer: the state function of map/flatMapGroupsWithState is first invoked with the events, and only afterwards invoked with the timed-out states. Because of how that works, your code resets the timeout in the very batch in which the state should have timed out.
So while you can leverage the timeout feature to have the state function invoked even when no events arrive for a key, you still need to handle eviction against the current watermark manually. That's why I set the timeout to the session end timestamp of the earliest session, and handle all evictions whenever the function is invoked.
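That eviction idea can be sketched as pure logic (`Session`, `evict`, and `nextTimeoutMs` are illustrative names; in the real state function the surviving list would go back into `state.update` and the next timeout would be re-registered with `setTimeoutTimestamp`):

```scala
case class Session(startMs: Long, endMs: Long)

val gapMs = 30L * 60 * 1000

// Split the tracked sessions into those still open under the current
// watermark and those the watermark has already passed (end + gap).
def evict(sessions: List[Session], watermarkMs: Long): (List[Session], List[Session]) =
  sessions.partition(s => s.endMs + gapMs > watermarkMs)

// Re-register the timeout at the earliest open session's expiry, so the
// function is invoked again even if no further events arrive for the key.
def nextTimeoutMs(open: List[Session]): Option[Long] =
  if (open.isEmpty) None else Some(open.map(_.endMs).min + gapMs)
```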
You can refer to the code block below for an idea of how to implement session windows with event time and watermarks via flatMapGroupsWithState.
Note: I haven't cleaned up the code, and I tried to support both output modes at once, so once you settle on an output mode you can remove the irrelevant code to make it simpler.
EDIT2: I had a wrong assumption about flatMapGroupsWithState — events are not guaranteed to be sorted.
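Given that note, any gap-based logic should sort the batch's timestamps first; a minimal guard (illustrative helper name):

```scala
// mapGroupsWithState / flatMapGroupsWithState hand the state function an
// iterator with no ordering guarantee, so sort before computing gaps.
def consecutiveGapsMs(timesMs: Seq[Long]): Seq[Long] = {
  val sorted = timesMs.sorted
  // Pair each timestamp with its successor and take the differences.
  sorted.zip(sorted.drop(1)).map { case (a, b) => b - a }
}
```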