Spark DataFrame window lag function based on multiple columns

Date: 2017-01-23 16:53:59

Tags: apache-spark apache-spark-sql spark-dataframe window-functions

import spark.implicits._                 // for toDF (predefined in spark-shell)
import org.apache.spark.sql.functions.lag

val df = sc.parallelize(Seq((201601, 100.5),
  (201602, 120.6),
  (201603, 450.2),
  (201604, 200.7),
  (201605, 121.4))).toDF("date", "volume")

val w = org.apache.spark.sql.expressions.Window.orderBy("date")
val leadDf = df.withColumn("new_col", lag("volume", 1, 0).over(w))
leadDf.show()

+------+------+-------+
|  date|volume|new_col|
+------+------+-------+
|201601| 100.5|    0.0|
|201602| 120.6|  100.5|
|201603| 450.2|  120.6|
|201604| 200.7|  450.2|
|201605| 121.4|  200.7|
+------+------+-------+

This works fine.

But what if I have an additional column, as shown below?

val df = sc.parallelize(Seq((201601, "ter1", 10.1),
  (201601, "ter2", 10.6),
  (201602, "ter1", 10.7),
  (201603, "ter3", 10.8),
  (201603, "ter4", 10.8),
  (201603, "ter3", 10.8),
  (201604, "ter4", 10.9))).toDF("date", "territory", "volume")

My requirement is: for the same territory, I want to find the previous month's volume if it exists; if it does not exist, just assign the value 0.0.

1 Answer:

Answer 0 (score: 1)

If I understand correctly, you want the value from the previous date for the same territory.

If so, simply redefine the window specification by adding partitionBy, as shown below:

val w = org.apache.spark.sql.expressions.Window.partitionBy("territory").orderBy("date")
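
For completeness, here is a minimal end-to-end sketch applying the partitioned window to the three-column DataFrame from the question (assuming a spark-shell session, where sc and the implicits are predefined; variable names mirror the question's code):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag

val df = sc.parallelize(Seq((201601, "ter1", 10.1),
  (201601, "ter2", 10.6),
  (201602, "ter1", 10.7),
  (201603, "ter3", 10.8),
  (201603, "ter4", 10.8),
  (201603, "ter3", 10.8),
  (201604, "ter4", 10.9))).toDF("date", "territory", "volume")

// partitionBy("territory") restricts lag to rows of the same territory;
// orderBy("date") defines what "previous" means within each territory.
val w = Window.partitionBy("territory").orderBy("date")

// The third argument (0) is the default returned when no previous row exists,
// i.e. the first month seen for each territory gets 0.0.
val leadDf = df.withColumn("new_col", lag("volume", 1, 0).over(w))
leadDf.show()

One caveat: lag is row-based, and the sample data contains two rows for territory ter3 in 201603, so within that partition one 201603 row becomes the "previous row" of the other, which is not the same as "previous month". If duplicate (territory, date) pairs can occur in the real data, it may be safer to aggregate first, e.g. df.groupBy("territory", "date").agg(sum("volume").as("volume")) (with sum imported from org.apache.spark.sql.functions), and then apply the window.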