How to understand the results of a Spark SQL window function

Time: 2017-11-14 03:17:00

Tags: apache-spark apache-spark-sql

I have the following code to try out the Spark SQL window function:

test("spark sql time window 2") {
    val spark = SparkSession.builder().master("local").appName("SparkSQLWindowTest").getOrCreate()
    import spark.implicits._
    import org.apache.spark.sql.functions._
    val ds = Seq(
      SaleRecord("2017-10-11 09:01:12", 1),
      SaleRecord("2017-10-11 09:01:18", 6),
      SaleRecord("2017-10-11 10:11:12", 2),
      SaleRecord("2017-10-11 10:18:13", 5),
      SaleRecord("2017-10-11 10:22:13", 3),
      SaleRecord("2017-10-11 10:22:22", 6),
      SaleRecord("2017-10-11 10:34:56", 2),
      SaleRecord("2017-10-11 10:48:22", 6),
      SaleRecord("2017-10-11 11:52:23", 4),
      SaleRecord("2017-10-11 12:56:24", 2)).toDS

    val ds2 = ds.groupBy(window($"Time", "20 minutes", "9 minutes")).agg(sum("revenue")).orderBy("window.start")
    ds2.show(truncate = false)

    /*
+---------------------------------------------+------------+
|window                                       |sum(revenue)|
+---------------------------------------------+------------+
|[2017-10-11 08:45:00.0,2017-10-11 09:05:00.0]|7.0         |
|[2017-10-11 08:54:00.0,2017-10-11 09:14:00.0]|7.0         |
|[2017-10-11 09:57:00.0,2017-10-11 10:17:00.0]|2.0         |
|[2017-10-11 10:06:00.0,2017-10-11 10:26:00.0]|16.0        |
|[2017-10-11 10:15:00.0,2017-10-11 10:35:00.0]|16.0        |
|[2017-10-11 10:24:00.0,2017-10-11 10:44:00.0]|2.0         |
|[2017-10-11 10:33:00.0,2017-10-11 10:53:00.0]|8.0         |
|[2017-10-11 10:42:00.0,2017-10-11 11:02:00.0]|6.0         |
|[2017-10-11 11:36:00.0,2017-10-11 11:56:00.0]|4.0         |
|[2017-10-11 11:45:00.0,2017-10-11 12:05:00.0]|4.0         |
|[2017-10-11 12:39:00.0,2017-10-11 12:59:00.0]|2.0         |
|[2017-10-11 12:48:00.0,2017-10-11 13:08:00.0]|2.0         |
+---------------------------------------------+------------+


     */
  }

SaleRecord is defined as a simple case class:

case class SaleRecord(time: String, revenue: Double)

I cannot understand how the first three rows of the result are generated.

Why is the first window [2017-10-11 08:45:00.0, 2017-10-11 09:05:00.0]?

2 answers:

Answer 0 (score: 1)

window(timeColumn, windowDuration, slideDuration=None, startTime=None)

First, the window function builds a time template, that is, a series of window intervals of the form:

zero = 1970-01-01 00:00:00 UTC

[zero + startTime + slideDuration * n, zero + startTime + slideDuration * n + windowDuration)

For example:

window('ts', '5 seconds', '3 seconds', '2 seconds')
# is equal to :
['1970-01-01 00:00:02', '1970-01-01 00:00:07'),
['1970-01-01 00:00:05', '1970-01-01 00:00:10'),
['1970-01-01 00:00:08', '1970-01-01 00:00:13'),
['1970-01-01 00:00:11', '1970-01-01 00:00:16'),
...
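
As a minimal Scala sketch (not part of the original answer; the helper name and parameters are illustrative), the same template can be enumerated directly from the formula above:

// Enumerate the first n intervals of
// [zero + startTime + slide * i, zero + startTime + slide * i + windowDuration)
import java.time.Instant

def windowTemplate(startTimeSec: Long, slideSec: Long, windowSec: Long, n: Int): Seq[(Instant, Instant)] =
  (0 until n).map { i =>
    val start = Instant.ofEpochSecond(startTimeSec + slideSec * i)
    (start, start.plusSeconds(windowSec))
  }

// window('ts', '5 seconds', '3 seconds', '2 seconds') corresponds to:
windowTemplate(startTimeSec = 2, slideSec = 3, windowSec = 5, n = 4).foreach(println)
// (1970-01-01T00:00:02Z,1970-01-01T00:00:07Z)
// (1970-01-01T00:00:05Z,1970-01-01T00:00:10Z)
// (1970-01-01T00:00:08Z,1970-01-01T00:00:13Z)
// (1970-01-01T00:00:11Z,1970-01-01T00:00:16Z)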

Then, based on timeColumn, each row of your DataFrame "falls into" this time template. A single row can belong to several template cells (overlapping windows).

Finally, all empty template cells are dropped and the agg is performed.
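
For the question's data, a small sketch (mine, not from the answer) shows which template cells the first record falls into. It assumes the Spark session time zone is UTC+8 (Asia/Shanghai), which is consistent with the window boundaries printed in the question; the names are illustrative:

import java.time.{Instant, LocalDateTime, ZoneId}
import java.time.format.DateTimeFormatter

val zone = ZoneId.of("Asia/Shanghai")            // assumed session time zone
val fmt  = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

// All windows [start, start + windowSec) on the slide grid (multiples of slideSec
// since the epoch, startTime = 0) that contain the given timestamp.
def windowsFor(ts: String, slideSec: Long, windowSec: Long): Seq[(LocalDateTime, LocalDateTime)] = {
  val epochSec  = LocalDateTime.parse(ts, fmt).atZone(zone).toEpochSecond
  val lastStart = (epochSec / slideSec) * slideSec        // latest grid point <= ts
  Iterator.iterate(lastStart)(_ - slideSec)
    .takeWhile(start => epochSec < start + windowSec)     // window still covers ts
    .map { start =>
      val s = Instant.ofEpochSecond(start).atZone(zone).toLocalDateTime
      (s, s.plusSeconds(windowSec))
    }
    .toSeq.reverse
}

windowsFor("2017-10-11 09:01:12", slideSec = 9 * 60, windowSec = 20 * 60).foreach(println)
// (2017-10-11T08:45,2017-10-11T09:05)
// (2017-10-11T08:54,2017-10-11T09:14)

The first two records (09:01:12 with revenue 1 and 09:01:18 with revenue 6) both land in these two cells, which is why the first two rows each sum to 7.0; the third row, [09:57, 10:17), contains only the 10:11:12 record, hence 2.0.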

Answer 1 (score: 0)

With the window function, Spark computes the frames starting from the Unix epoch, 1970-01-01 00:00:00 UTC. Since you set the slide duration to 9 minutes, the first frame that contains values of the Time column is [2017-10-11 08:45:00.0, 2017-10-11 09:05:00.0].

For clarity:

$ date --date="2017-10-11 08:45:00" +"%s"
1507686300
$ echo $[1507686300%180]
0
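
The epoch value printed by date depends on the machine's time zone. As a hedged cross-check (a sketch of mine, assuming a UTC+8 session time zone, which matches the boundaries in the question's output), the alignment can also be verified against the 9-minute slide directly in Scala:

import java.time.{LocalDateTime, ZoneId}
import java.time.format.DateTimeFormatter

val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
val startEpoch = LocalDateTime.parse("2017-10-11 08:45:00", fmt)
  .atZone(ZoneId.of("Asia/Shanghai"))              // assumed session time zone
  .toEpochSecond                                   // 1507682700
println(startEpoch % (9 * 60))                     // 0 -> the window start lies on the 9-minute grid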