Spark Scala - computing a dynamic timestamp interval

Time: 2019-03-08 19:51:35

Tags: scala apache-spark dataframe

I have a DataFrame with a timestamp column (timestamp type) called "maxTmstmp" and another column holding a number of hours, expressed as an integer, called "WindowHours". I want to dynamically subtract the hours column from the timestamp column to obtain the lower timestamp.

My data and the expected result (the "minTmstmp" column):

+-----------+-------------------+-------------------+
|WindowHours|          maxTmstmp|          minTmstmp|
|           |                   |(maxTmstmp - Hours)|
+-----------+-------------------+-------------------+
|          1|2016-01-01 23:00:00|2016-01-01 22:00:00|
|          2|2016-03-01 12:00:00|2016-03-01 10:00:00|
|          8|2016-03-05 20:00:00|2016-03-05 12:00:00|
|         24|2016-04-12 11:00:00|2016-04-11 11:00:00|
+-----------+-------------------+-------------------+

root
 |-- WindowHours: integer (nullable = true)
 |-- maxTmstmp: timestamp (nullable = true)

I have already found an expression that subtracts an hour interval, but it is not dynamic: the interval is hardcoded instead of coming from the WindowHours column. The code below does not work as expected:

standards
  .withColumn("minTmstmp", $"maxTmstmp" - expr("INTERVAL 10 HOURS"))
  .show()

Running on Spark 2.4 with Scala.

1 Answer:

Answer 0 (score: 2):

A simple approach is to convert maxTmstmp to Unix time (seconds since the epoch), subtract WindowHours worth of seconds from it, and convert the result back to a Spark Timestamp, as shown below:

import java.sql.Timestamp
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1, Timestamp.valueOf("2016-01-01 23:00:00")),
  (2, Timestamp.valueOf("2016-03-01 12:00:00")),
  (8, Timestamp.valueOf("2016-03-05 20:00:00")),
  (24, Timestamp.valueOf("2016-04-12 11:00:00"))
).toDF("WindowHours", "maxTmstmp")

df.withColumn("minTmstmp",
    // unix_timestamp yields epoch seconds, so subtract WindowHours * 3600 seconds.
    // from_unixtime returns a string, so cast it to get a true timestamp column back.
    from_unixtime(unix_timestamp($"maxTmstmp") - ($"WindowHours" * 3600)).cast("timestamp")
  ).show
// +-----------+-------------------+-------------------+
// |WindowHours|          maxTmstmp|          minTmstmp|
// +-----------+-------------------+-------------------+
// |          1|2016-01-01 23:00:00|2016-01-01 22:00:00|
// |          2|2016-03-01 12:00:00|2016-03-01 10:00:00|
// |          8|2016-03-05 20:00:00|2016-03-05 12:00:00|
// |         24|2016-04-12 11:00:00|2016-04-11 11:00:00|
// +-----------+-------------------+-------------------+
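
An equivalent way to stay in native column types, avoiding the string round trip through from_unixtime, is to cast the timestamp to a long (epoch seconds), do the arithmetic, and cast back. A minimal sketch, assuming the same df as above; both variants work in whole seconds, which is fine here since WindowHours is an integer number of hours:

// Casting a timestamp to long yields epoch seconds; casting the
// result of the subtraction back to timestamp restores the type.
df.withColumn("minTmstmp",
    ($"maxTmstmp".cast("long") - $"WindowHours" * 3600).cast("timestamp")
  ).show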