Adding a new column with a window function in Spark

Asked: 2020-08-12 17:01:40

Tags: scala apache-spark pyspark

I want to split each hour into 15-minute intervals and add new columns for each 15-minute time range and its corresponding sum.

I used the window function from here: How to group by time interval in Spark SQL. Can someone help with how to add the hour_part column, either with the window function or with any other approach?

Input:

id,datetime,quantity
1234,2018-01-01 12:00:21,10
1234,2018-01-01 12:01:02,20
1234,2018-01-01 12:10:23,10
1234,2018-01-01 12:20:19,25
1234,2018-01-01 12:25:20,25
1234,2018-01-01 12:28:00,25
1234,2018-01-01 12:47:25,10
1234,2018-01-01 12:58:00,40

Output:

id,date,hour_part,sum
1234,2018-01-01,1,40
1234,2018-01-01,2,75
1234,2018-01-01,3,0
1234,2018-01-01,4,50

1 answer:

Answer 0 (score: 0)

The code below may help with adding the hour_part column, but AFAIK window functions are an effective way of solving running aggregations like this.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
// Assumes a SparkSession named `spark` is available (as in spark-shell);
// its implicits provide toDF and the $ column syntax.
import spark.implicits._

// Sample input data
val df = Seq(("1234","2018-01-01 12:00:21",10),
  ("1234","2018-01-01 12:01:02",20),
  ("1234","2018-01-01 12:10:23",10),
  ("1234","2018-01-01 12:20:19",25),
  ("1234","2018-01-01 12:25:20",25),
  ("1234","2018-01-01 12:28:00",25),
  ("1234","2018-01-01 12:47:25",10),
  ("1234","2018-01-01 12:58:00",40)).toDF("id","datetime","quantity")

// Single-partition window, used only to number the 15-minute buckets sequentially
val windowSpec = Window.partitionBy(lit("A")).orderBy(lit("A"))

// Bucket the rows into 15-minute windows, sum the quantities per bucket,
// then number the buckets to produce hour_part
df.groupBy($"id", window($"datetime", "15 minutes")).sum("quantity").orderBy("window")
  .withColumn("hour_part", row_number.over(windowSpec))
  .withColumn("date", to_date($"window.end"))
  .withColumn("sum", $"sum(quantity)")
  .drop($"window").drop($"sum(quantity)")
  .show()

/*
+----+---------+----------+---+
|  id|hour_part|      date|sum|
+----+---------+----------+---+
|1234|        1|2018-01-01| 40|
|1234|        2|2018-01-01| 75|
|1234|        3|2018-01-01| 50|
+----+---------+----------+---+
*/
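
Note that this output has only three rows: the empty 12:30-12:45 interval produces no group, so the desired row with hour_part 3 and sum 0 never appears, and row_number keeps counting buckets across hours rather than restarting each hour. A minimal alternative sketch (not from the original answer), assuming hour_part means the quarter of the hour (1-4): derive it directly from the minute component and left-join against all four quarters so that empty intervals show up with sum 0. It reuses df and the imports from above; the names withParts, quarters and result are just illustrative.

// Quarter of the hour: minutes 0-14 -> 1, 15-29 -> 2, 30-44 -> 3, 45-59 -> 4
val withParts = df
  .withColumn("date", to_date($"datetime"))
  .withColumn("hour_part", (minute($"datetime") / 15).cast("int") + 1)

// Every (id, date) pair crossed with quarters 1..4, so missing quarters can be filled in
val quarters = withParts.select($"id", $"date").distinct()
  .crossJoin(Seq(1, 2, 3, 4).toDF("hour_part"))

// Left-join the per-quarter sums and fill empty quarters with 0
val result = quarters
  .join(withParts.groupBy($"id", $"date", $"hour_part").agg(sum($"quantity").as("sum")),
        Seq("id", "date", "hour_part"), "left")
  .na.fill(0L, Seq("sum"))
  .orderBy($"id", $"date", $"hour_part")

result.show()

For this input, that should yield the four rows from the desired output, including hour_part 3 with sum 0.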