Sliding window over pairs of rows in a Spark DataFrame

Date: 2016-06-08 12:52:23

Tags: scala apache-spark dataframe apache-spark-sql event-log

I have an event log in a CSV file with three columns: timestamp, eventId and userId.

What I would like to do is add a new column nextEventId to the DataFrame, holding the eventId of the next event of the same user (in timestamp order).

Example event log:

val eventlog = sqlContext.createDataFrame(Array(
  (20160101, 1, 0),
  (20160102, 3, 1),
  (20160201, 4, 1),
  (20160202, 2, 0)
)).toDF("timestamp", "eventId", "userId")

eventlog.show(4)

+---------+-------+------+
|timestamp|eventId|userId|
+---------+-------+------+
| 20160101|      1|     0|
| 20160102|      3|     1|
| 20160201|      4|     1|
| 20160202|      2|     0|
+---------+-------+------+

The desired result would be:

+---------+-------+------+-----------+
|timestamp|eventId|userId|nextEventId|
+---------+-------+------+-----------+
| 20160101|      1|     0|          2|
| 20160102|      3|     1|          4|
| 20160201|      4|     1|        Nil|
| 20160202|      2|     0|        Nil|
+---------+-------+------+-----------+

So far I've been playing around with sliding windows, but can't figure out how to compare two rows...

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.asc
val w = Window.partitionBy("userId").orderBy(asc("timestamp")) // should be a sliding window over 2 rows...
val nextNodes = second($"eventId").over(w) // pseudocode: should work if there were only 2 rows

1 Answer:

Answer 0 (score: 8)

What you are looking for is lead (or lag). Using the window you have already defined:

import org.apache.spark.sql.functions.lead

eventlog.withColumn("nextEventId", lead("eventId", 1).over(w))
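
Note that lead returns null (rather than Nil) for the last event of each user, since there is no following row within the partition. Putting the pieces together, a self-contained sketch (using the eventlog DataFrame from the question; output row order may vary, since rows are only ordered within each userId partition) might look like:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{asc, lead}

// next event per user, in timestamp order
val w = Window.partitionBy("userId").orderBy(asc("timestamp"))

eventlog.withColumn("nextEventId", lead("eventId", 1).over(w)).show()
// |timestamp|eventId|userId|nextEventId|
// | 20160101|      1|     0|          2|
// | 20160202|      2|     0|       null|
// | 20160102|      3|     1|          4|
// | 20160201|      4|     1|       null|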

For a true sliding window (such as a sliding average) you can use the rowsBetween and rangeBetween clauses of the window definition, but that is not needed here. Nevertheless, an example usage could look like this:

val w2 = Window.partitionBy("userId")
  .orderBy(asc("timestamp"))
  .rowsBetween(-1, 0)

avg($"foo").over(w2)