Can unix_timestamp() return unix time in milliseconds in Apache Spark?

Asked: 2017-02-14 23:10:57

Tags: apache-spark timestamp unix-timestamp

I am trying to get the unix time in milliseconds (13 digits) from a timestamp field, but it currently returns seconds (10 digits).

scala> var df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.123", "2017-01-18 11:00:00.882", "2017-01-18 11:00:02.432").toDF()
df: org.apache.spark.sql.DataFrame = [value: string]

scala> df = df.selectExpr("value timeString", "cast(value as timestamp) time")
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp]


scala> df = df.withColumn("unix_time", unix_timestamp(df("time")))
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp ... 1 more field]

scala> df.take(4)
res63: Array[org.apache.spark.sql.Row] = Array(
[2017-01-18 11:00:00.000,2017-01-18 11:00:00.0,1484758800], 
[2017-01-18 11:00:00.123,2017-01-18 11:00:00.123,1484758800], 
[2017-01-18 11:00:00.882,2017-01-18 11:00:00.882,1484758800], 
[2017-01-18 11:00:02.432,2017-01-18 11:00:02.432,1484758802])

Even though 2017-01-18 11:00:00.123 and 2017-01-18 11:00:00.000 are different, I get the same unix time: 1484758800.

What am I missing?

5 Answers:

Answer 0 (score: 2)

unix_timestamp() returns the unix timestamp in seconds.

The last 3 digits of the time string are the milliseconds (1.999 sec = 1999 milliseconds), so just take the last 3 digits of the timestamp string and append them to the end of the seconds value, as in the sketch below.
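
A minimal sketch of this idea in Scala (assuming the timeString and time columns from the question; unix_time_ms is a hypothetical column name):

import org.apache.spark.sql.functions._

// Append the last 3 digits of the string (the milliseconds) to the unix
// seconds, then cast the 13-digit result to a long.
val withMillis = df.withColumn(
  "unix_time_ms",
  concat(
    unix_timestamp(col("time")).cast("string"),
    substring(col("timeString"), -3, 3)
  ).cast("long")
)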

Answer 1 (score: 1)

Implementing the approach suggested in Dao Thi's answer:

import pyspark.sql.functions as F
df = spark.createDataFrame([('22-Jul-2018 04:21:18.792 UTC', ),('23-Jul-2018 04:21:25.888 UTC',)], ['TIME'])
df.show(2,False)
df.printSchema()

Output:

+----------------------------+
|TIME                        |
+----------------------------+
|22-Jul-2018 04:21:18.792 UTC|
|23-Jul-2018 04:21:25.888 UTC|
+----------------------------+
root
 |-- TIME: string (nullable = true)

Converting the string time format (including milliseconds) to unix_timestamp (double). Extract the milliseconds from the string using the substring method (start_position = -7, length_of_substring = 3) and add them to the unix_timestamp separately. (The substring is cast to float for the addition.)

df1 = df.withColumn("unix_timestamp",F.unix_timestamp(df.TIME,'dd-MMM-yyyy HH:mm:ss.SSS z') + F.substring(df.TIME,-7,3).cast('float')/1000)

Converting the unix_timestamp (double) to the timestamp data type in Spark:

df2 = df1.withColumn("TimestampType",F.to_timestamp(df1["unix_timestamp"]))
df2.show(n=2,truncate=False)

This will give you the following output:

+----------------------------+----------------+-----------------------+
|TIME                        |unix_timestamp  |TimestampType          |
+----------------------------+----------------+-----------------------+
|22-Jul-2018 04:21:18.792 UTC|1.532233278792E9|2018-07-22 04:21:18.792|
|23-Jul-2018 04:21:25.888 UTC|1.532319685888E9|2018-07-23 04:21:25.888|
+----------------------------+----------------+-----------------------+

Check the schema:

df2.printSchema()


root
 |-- TIME: string (nullable = true)
 |-- unix_timestamp: double (nullable = true)
 |-- TimestampType: timestamp (nullable = true)

Answer 2 (score: 1)

Up to Spark version 3.0.1, it is not possible to convert a timestamp into unix time in milliseconds using the SQL built-in function unix_timestamp.

According to the code in Spark's DateTimeUtils:

"Timestamps are exposed externally as java.sql.Timestamp and are stored internally as longs, which are capable of storing timestamps with microsecond precision."

So if you define a UDF that takes a java.sql.Timestamp as input, you can call getTime on it to get the milliseconds as a Long. If you apply unix_timestamp, you will only get unix time with second precision.

import org.apache.spark.sql.functions._
val tsConversionToLongUdf = udf((ts: java.sql.Timestamp) => ts.getTime)

Applying this to a variety of timestamps:

val df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.111", "2017-01-18 11:00:00.110", "2017-01-18 11:00:00.100")
  .toDF("timestampString")
  .withColumn("timestamp", to_timestamp(col("timestampString")))
  .withColumn("timestampConversionToLong", tsConversionToLongUdf(col("timestamp")))
  .withColumn("timestampUnixTimestamp", unix_timestamp(col("timestamp")))

df.printSchema()
df.show(false)

// returns
root
 |-- timestampString: string (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampConversionToLong: long (nullable = false)
 |-- timestampUnixTimestamp: long (nullable = true)

+-----------------------+-----------------------+-------------------------+----------------------+
|timestampString        |timestamp              |timestampConversionToLong|timestampUnixTimestamp|
+-----------------------+-----------------------+-------------------------+----------------------+
|2017-01-18 11:00:00.000|2017-01-18 11:00:00    |1484733600000            |1484733600            |
|2017-01-18 11:00:00.111|2017-01-18 11:00:00.111|1484733600111            |1484733600            |
|2017-01-18 11:00:00.110|2017-01-18 11:00:00.11 |1484733600110            |1484733600            |
|2017-01-18 11:00:00.100|2017-01-18 11:00:00.1  |1484733600100            |1484733600            |
+-----------------------+-----------------------+-------------------------+----------------------+

Answer 3 (score: 1)

It cannot be done with unix_timestamp(), but since Spark 3.1.0 there is a built-in function called unix_millis():

"unix_millis(timestamp) - Returns the number of milliseconds since 1970-01-01 00:00:00 UTC. Truncates higher levels of precision."
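
A minimal sketch in Scala (assuming Spark 3.1.0+ and the time column from the question; since unix_millis is a SQL built-in, it is called here through selectExpr):

// Returns a 13-digit long, e.g. 1484758800123 for the question's
// 2017-01-18 11:00:00.123
val withMillis = df.selectExpr("time", "unix_millis(time) AS unix_time_ms")
withMillis.show(false)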

Answer 4 (score: 0)

The timestamp display format simply hides the millisecond part.

Try this:

df = df.withColumn("time_in_milliseconds", col("time").cast("double"))

You will get something like 1484758800.792, where 792 is the millisecond part.

At least it works for me (Scala, Spark, and Hive).
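
If you need a 13-digit long rather than a double, a small sketch building on the same cast (time column assumed from the question):

import org.apache.spark.sql.functions._

// The cast to double yields seconds with a fractional part; multiplying by
// 1000 and casting to long gives epoch milliseconds.
val dfMs = df.withColumn(
  "time_in_milliseconds",
  (col("time").cast("double") * 1000).cast("long")
)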