How to extract timed-out sessions using mapWithState

Date: 2016-11-24 12:51:57

Tags: scala apache-spark spark-streaming

I am updating my code to switch from updateStateByKey to mapWithState in order to get a user's sessions based on a timeout of 2 minutes (2 is only used for testing purposes). Each session should aggregate all of the streaming data (JSON strings) received within that session before it times out.

This was my old code:

val membersSessions = stream.map[(String, (Long, Long, List[String]))](eventRecord => {
  val parsed = Utils.parseJSON(eventRecord)
  val member_id = parsed.getOrElse("member_id", "")
  val timestamp = parsed.getOrElse("timestamp", "").toLong
  //The timestamp is returned twice because the first one will be used as the start time and the second one as the end time
  (member_id, (timestamp, timestamp, List(eventRecord)))
})

val latestSessionInfo = membersSessions.map[(String, (Long, Long, Long, List[String]))](a => {
  //transform to (member_id, (time, time, counter, events within session))
  (a._1, (a._2._1, a._2._2, 1, a._2._3))
}).
  reduceByKey((a, b) => {
    //transform to (member_id, (lowestStartTime, MaxFinishTime, sumOfCounter, events within session))
    (Math.min(a._1, b._1), Math.max(a._2, b._2), a._3 + b._3, a._4 ++ b._4)
  }).updateStateByKey(Utils.updateState)
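
Utils.parseJSON is not shown in the question. For reference, a minimal hypothetical sketch, assuming the events are flat JSON objects and the helper returns the top-level fields as a Map[String, String] (here using json4s, which is bundled with Spark), might look like this:

import org.json4s._
import org.json4s.jackson.JsonMethods.parse

object Utils {
  // Hypothetical sketch of the helper referenced above: flatten the top-level
  // fields of a JSON event into a Map[String, String].
  def parseJSON(eventRecord: String): Map[String, String] =
    parse(eventRecord) match {
      case JObject(fields) =>
        fields.collect {
          case (name, JString(value)) => name -> value
          case (name, JInt(value))    => name -> value.toString
          case (name, JDouble(value)) => name -> value.toString
          case (name, JBool(value))   => name -> value.toString
        }.toMap
      case _ => Map.empty[String, String]
    }
}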

The problems with updateStateByKey are well explained here. One of the main reasons I decided to use mapWithState is that updateStateByKey cannot return completed sessions (the ones that have timed out) for further processing.

This is my first attempt at transforming the old code to the new version:

val spec = StateSpec.function(updateState _).timeout(Minutes(1))
val latestSessionInfo = membersSessions.map[(String, (Long, Long, Long, List[String]))](a => {
  //transform to (member_id, (time, time, counter, events within session))
  (a._1, (a._2._1, a._2._2, 1, a._2._3))
})
val userSessionSnapshots = latestSessionInfo.mapWithState(spec).stateSnapshots()

I slightly misunderstand what the content of updateState should be, because as far as I understand, the timeout should not be calculated manually (it was previously done in my function Utils.updateState) and mapWithState should return the timed-out sessions.

1 Answer:

Answer 0 (score: 2)

Assuming you always wait on a timeout of 2 minutes, you can make your mapWithState stream only output data once a timeout has been fired.

What would that mean for your code? It means that you now need to monitor the timeout instead of outputting a tuple on every iteration. I would imagine your mapWithState function would look something like this:

def updateState(key: String,
                value: Option[(Long, Long, Long, List[String])],
                state: State[(Long, Long, Long, List[String])]): Option[(Long, Long, Long, List[String])] = {
  // Merge two partial sessions: earliest start time, latest end time,
  // summed event counter, and the concatenated list of events.
  def reduce(first: (Long, Long, Long, List[String]), second: (Long, Long, Long, List[String])) = {
    (Math.min(first._1, second._1), Math.max(first._2, second._2), first._3 + second._3, first._4 ++ second._4)
  }

  value match {
    case Some(currentValue) =>
      // New data arrived for this key: fold it into the existing state (if any)
      // and emit nothing downstream yet.
      val result = state
        .getOption()
        .map(currentState => reduce(currentState, currentValue))
        .getOrElse(currentValue)
      state.update(result)
      None
    case _ if state.isTimingOut() =>
      // No new data and the timeout has fired: emit the accumulated session downstream.
      state.getOption()
  }
}

This way, you only output something to the outside of the stream once the state has timed out; otherwise, you keep aggregating it inside the state.

This means that your Spark DStream graph can filter out all the undefined values and keep only the defined ones:

latestSessionInfo
 .mapWithState(spec)
 .filter(_.isDefined)

After the filter, you will only have states which have timed out.
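
For completeness, here is a minimal sketch of how the pieces could be wired together, reusing the latestSessionInfo DStream and the updateState function from above; the 2-minute timeout comes from the question's description, and the foreachRDD sink is only a placeholder for whatever further processing you need:

import org.apache.spark.streaming.{Minutes, StateSpec}
import org.apache.spark.streaming.dstream.DStream

// Build the StateSpec with the 2-minute timeout described in the question.
// Note: mapWithState requires checkpointing to be enabled on the StreamingContext.
val spec = StateSpec.function(updateState _).timeout(Minutes(2))

// mapWithState emits None while a session is still open and Some(session)
// once it has timed out, so keeping only the defined values yields closed sessions.
val completedSessions: DStream[(Long, Long, Long, List[String])] =
  latestSessionInfo
    .mapWithState(spec)
    .filter(_.isDefined)
    .map(_.get)

// Placeholder sink: process each closed session (start, end, event count, events).
completedSessions.foreachRDD { rdd =>
  rdd.foreach { case (startTime, endTime, eventCount, events) =>
    println(s"Session closed: start=$startTime end=$endTime count=$eventCount")
  }
}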