Convert timestamp to UTC in Spark Scala

Asked: 2018-05-04 20:06:08

Tags: scala apache-spark timestamp

My environment is Spark 2.1 with Scala.

This is probably simple, but it has been bugging me.

My DataFrame, myDF, looks like the one below:

+--------------------+----------------+  
|     orign_timestamp | origin_timezone|  
+--------------------+----------------+  
|2018-05-03T14:56:...|America/St_Johns|  
|2018-05-03T14:56:...| America/Toronto|  
|2018-05-03T14:56:...| America/Toronto|    
|2018-05-03T14:56:...| America/Toronto|  
|2018-05-03T14:56:...| America/Halifax|  
|2018-05-03T14:56:...| America/Toronto|  
|2018-05-03T14:56:...| America/Toronto|  
+--------------------+----------------+   

I need to convert orign_timestamp to UTC and add it as a new column to the DataFrame. The code below works fine:

myDF.withColumn("time_utc", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")),("America/Montreal"))).show

The problem is that the time zone is hard-coded to "America/Montreal". I need to pass the time zone from the orign_timezone column. I tried:

myDF.withColumn("time_utc", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")), col("orign_timezone".toString.trim))).show

and got this error:
<console>:34: error: type mismatch;
 found   : org.apache.spark.sql.Column
 required: String

I also tried the code below; it did not throw an exception, but the new column ended up with the same time as orign_timestamp:

myDF.withColumn("origin_timestamp", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")), col("rign_timezone").toString)).show

2 Answers:

Answer 0 (score: 2)

Whenever you run into a problem like this, you can use expr:

import org.apache.spark.sql.functions._
import spark.implicits._  // for .toDF on a local Seq; pre-imported in spark-shell

val df = Seq(
  ("2018-05-03T14:56:00", "America/St_Johns"), 
  ("2018-05-03T14:56:00", "America/Toronto"), 
  ("2018-05-03T14:56:00", "America/Halifax")
).toDF("origin_timestamp", "origin_timezone")

df.withColumn("time_utc",
  expr("to_utc_timestamp(origin_timestamp, origin_timezone)")
).show

// +-------------------+----------------+-------------------+
// |   origin_timestamp| origin_timezone|           time_utc|
// +-------------------+----------------+-------------------+
// |2018-05-03T14:56:00|America/St_Johns|2018-05-03 17:26:00|
// |2018-05-03T14:56:00| America/Toronto|2018-05-03 18:56:00|
// |2018-05-03T14:56:00| America/Halifax|2018-05-03 17:56:00|
// +-------------------+----------------+-------------------+

The same thing can be done with selectExpr:

df.selectExpr(
  "*", "to_utc_timestamp(origin_timestamp, origin_timezone) as time_utc"
).show

// +-------------------+----------------+-------------------+
// |   origin_timestamp| origin_timezone|           time_utc|
// +-------------------+----------------+-------------------+
// |2018-05-03T14:56:00|America/St_Johns|2018-05-03 17:26:00|
// |2018-05-03T14:56:00| America/Toronto|2018-05-03 18:56:00|
// |2018-05-03T14:56:00| America/Halifax|2018-05-03 17:56:00|
// +-------------------+----------------+-------------------+
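
As far as I can tell, this works even on Spark 2.1 because expr hands the second argument to the underlying ToUTCTimestamp expression as a column reference, and that expression evaluates the time zone per row, bypassing the String-only signature of the Scala function that caused the type mismatch above.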

Answer 1 (score: 0)

If you upgrade to Spark 2.4, you can use the overload of to_utc_timestamp that accepts a Column for the time zone.
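
A minimal sketch of that overload, assuming Spark 2.4+ and the asker's myDF and column names:

import org.apache.spark.sql.functions.{col, to_utc_timestamp}

// Spark 2.4 added to_utc_timestamp(ts: Column, tz: Column),
// so the zone can come straight from the orign_timezone column:
myDF.withColumn(
  "time_utc",
  to_utc_timestamp(col("orign_timestamp"), col("orign_timezone"))
).show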

Alternatively, for type-safe access to the function, you can use the underlying Catalyst expression class:

import org.apache.spark.sql.Column
import org.apache.spark.sql.catalyst.expressions.ToUTCTimestamp
import org.apache.spark.sql.functions.{col, from_unixtime, unix_timestamp}

// ToUTCTimestamp takes plain expressions, so the time zone can be another column:
new Column(
  ToUTCTimestamp(
    from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")).expr,
    col("orign_timezone").expr
  )
)
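
Since the result is an ordinary Column, it plugs into withColumn like any built-in function (my sketch, reusing the asker's myDF):

val utcCol = new Column(ToUTCTimestamp(
  from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")).expr,
  col("orign_timezone").expr))

myDF.withColumn("time_utc", utcCol).show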