Does spark.sql.Dataset.groupByKey support window operations like groupBy?

Date: 2017-11-07 11:45:04

Tags: apache-spark spark-structured-streaming

In Spark Structured Streaming, we can use groupBy to window over event time, for example:

import spark.implicits._

val words = ... // streaming DataFrame of schema { timestamp: Timestamp, word: String }

// Group the data by window and word and compute the count of each group
val windowedCounts = words.groupBy(
  window($"timestamp", "10 minutes", "5 minutes"),
  $"word"
).count()

Does groupByKey also support window operations?

Thanks.

2 answers:

Answer 0 (score: 1)

Yes and no. It cannot be used directly, since the window function is available only in the SQL / DataFrame API, but you can always extend the records with a window field:

val dfWithWindow = df.withColumn("window", window(...))

case class Window(start: java.sql.Timestamp, end: java.sql.Timestamp)
case class MyRecordWithWindow(..., window: Window)

and use it for grouping:

dfWithWindow.as[MyRecordWithWindow].groupByKey(_.window).mapGroups(...)
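
For concreteness, here is a minimal end-to-end sketch of that approach, assuming the `words` streaming Dataset from the question; `WordWithWindow` and the 10-minute window duration are illustrative choices, not part of the original answer:

import java.sql.Timestamp
import org.apache.spark.sql.functions.window
import spark.implicits._

// Illustrative record types; field names must match the DataFrame columns.
case class Window(start: Timestamp, end: Timestamp)
case class WordWithWindow(timestamp: Timestamp, word: String, window: Window)

// window(...) produces a struct column with start/end fields,
// so the typed view can pick it up by name.
val withWindow = words
  .withColumn("window", window($"timestamp", "10 minutes"))
  .as[WordWithWindow]

// Count occurrences of each (window, word) pair via the typed API.
val counted = withWindow
  .groupByKey(r => (r.window, r.word))
  .count()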

Answer 1 (score: 0)

You can write a helper function that makes it easy to generate a time-window function to pass to groupByKey:

object windowing {
  import java.sql.Timestamp
  import java.time.Instant

  /** Given a row type R, a function extracting the Timestamp from an R, and a
   *  window width in seconds, return a function that lets groupByKey do
   *  (tumbling) windowing.
   */
  def windowBy[R](f: R => Timestamp, width: Int): R => (Timestamp, Timestamp) = {
    val w = width.toLong * 1000L
    (row: R) => {
      val msCur = f(row).getTime()  // event time in epoch millis
      val msLB = (msCur / w) * w    // truncate down to the window's lower bound
      val instLB = Instant.ofEpochMilli(msLB)
      val instUB = Instant.ofEpochMilli(msLB + w)
      (Timestamp.from(instLB), Timestamp.from(instUB))
    }
  }
}
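
For intuition, the arithmetic truncates the event time down to a multiple of the width, so each row maps to the half-open bucket containing it. A quick illustrative check (the timestamp here is made up):

import java.sql.Timestamp

// With a 60-second width, 12:00:45 falls into the [12:00:00, 12:01:00) bucket.
val byMinute = windowing.windowBy[Timestamp](identity, 60)
val (lo, hi) = byMinute(Timestamp.valueOf("2017-11-07 12:00:45"))
// lo == 2017-11-07 12:00:00.0, hi == 2017-11-07 12:01:00.0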

With your example, it could be used like this:

case class MyRow(timestamp: Timestamp, word: String)

val windowBy60 = windowing.windowBy[MyRow](_.timestamp, 60)

// count words by time window
words.as[MyRow]
  .groupByKey(windowBy60)
  .count()

Or, to count by (window, word) pairs:

words.as[MyRow]
  .groupByKey(row => (windowBy60(row), row.word))
  .count()
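
The typed count comes back as a Dataset of ((window, word), count) pairs; if named columns are preferred downstream, one possible flattening (the column names are just a suggestion):

val flattened = words.as[MyRow]
  .groupByKey(row => (windowBy60(row), row.word))
  .count()
  .map { case (((start, end), word), n) => (start, end, word, n) }
  .toDF("window_start", "window_end", "word", "count")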