Input DF:

main_id  sub_id  time
1        11      12:00
1        12      1:00
1        12      3:00
1        12      5:00
1        13      7:00
1        13      8:00
2        21      12:00
2        21      5:00
I am trying to compute the running timestamp difference between consecutive rows within each main_id.
Output DF:

main_id  sub_id  time   diff
1        11      12:00  null
1        12      1:00   1
1        12      3:00   2
1        12      5:00   2
1        13      7:00   2
1        13      8:00   1
2        21      12:00  null
2        21      5:00   5
Code Tried:
val needed_window = Window.partitionBy($"main_id").orderBy($"main_id")
val diff_time = diff($"time").over(partitionWindow)
df.select($"*", diff_time as "time_diff").show
I get an error on the diff function. Is there a way to achieve this? Any suggestions are appreciated.
Answer (score: 1)
Assuming your time column is of type Timestamp, you can compute the difference between the current row and the previous row using unix_timestamp together with the lag window function.
import java.sql.Timestamp
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
// In spark-shell the implicits are already in scope; in an application,
// also `import spark.implicits._` from your SparkSession to enable
// `toDF` and the `$` column syntax.

val df = Seq(
  (1, 11, Timestamp.valueOf("2018-06-01 12:00:00")),
  (1, 12, Timestamp.valueOf("2018-06-01 13:00:00")),
  (1, 12, Timestamp.valueOf("2018-06-01 15:00:00")),
  (1, 12, Timestamp.valueOf("2018-06-01 17:00:00")),
  (1, 13, Timestamp.valueOf("2018-06-01 19:00:00")),
  (1, 13, Timestamp.valueOf("2018-06-01 20:00:00")),
  (2, 21, Timestamp.valueOf("2018-06-01 12:00:00")),
  (2, 21, Timestamp.valueOf("2018-06-01 17:00:00"))
).toDF("main_id", "sub_id", "time")

// Order by `time` within each `main_id` partition so that `lag`
// returns the chronologically previous row (ordering by `main_id`
// inside a `main_id` partition would leave the order nondeterministic).
val window = Window.partitionBy($"main_id").orderBy($"time")

// unix_timestamp yields seconds since the epoch; dividing the
// row-to-row difference by 3600.0 converts it to hours.
df.withColumn("diff",
  (unix_timestamp($"time") - unix_timestamp(lag($"time", 1).over(window))) / 3600.0
).show
// +-------+------+-------------------+----+
// |main_id|sub_id| time|diff|
// +-------+------+-------------------+----+
// | 1| 11|2018-06-01 12:00:00|null|
// | 1| 12|2018-06-01 13:00:00| 1.0|
// | 1| 12|2018-06-01 15:00:00| 2.0|
// | 1| 12|2018-06-01 17:00:00| 2.0|
// | 1| 13|2018-06-01 19:00:00| 2.0|
// | 1| 13|2018-06-01 20:00:00| 1.0|
// | 2| 21|2018-06-01 12:00:00|null|
// | 2| 21|2018-06-01 17:00:00| 5.0|
// +-------+------+-------------------+----+
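As a side note, if you would rather report whole hours and show 0 instead of null on the first row of each main_id, here is a minimal variation reusing the df and window defined above. This is a sketch rather than part of the original answer; the column name diff_hours is an illustrative choice.

// Sketch: integer hour gaps, with 0 (via coalesce) replacing the null
// that lag produces on the first row of each main_id partition.
df.withColumn("diff_hours",
  coalesce(
    ((unix_timestamp($"time") - unix_timestamp(lag($"time", 1).over(window))) / 3600).cast("int"),
    lit(0)
  )
).show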