PySpark - from_unixtime not showing the correct datetime

Time: 2018-11-29 10:47:01

Tags: apache-spark pyspark timestamp apache-spark-sql

I want to convert a timestamp column containing epoch time into a human-readable datetime. from_unixtime is not giving me the correct date and time. Please help.

from pyspark.sql.functions import from_unixtime

df = spark.createDataFrame(
    [('1535934855077532656',), ('1535934855077532656',), ('1535935539886503614',)],
    ['timestamp'],
)

df.show()
+-------------------+
|          timestamp|
+-------------------+
|1535934855077532656|
|1535934855077532656|
|1535935539886503614|
+-------------------+
df.withColumn('datetime', from_unixtime(df.timestamp, "yyyy-MM-dd HH:mm:ss:SSS")) \
    .select('timestamp', 'datetime') \
    .show(15, False)
+-------------------+----------------------------+
|timestamp          |datetime                    |
+-------------------+----------------------------+
|1535934855077532656|153853867-12-24 10:24:31:872|
|1535934855077532656|153853867-12-24 10:24:31:872|
|1535935539886503614|153875568-09-17 05:33:49:872|
+-------------------+----------------------------+

2 Answers:

Answer 0 (score: 0)

From the docs for from_unixtime:

  Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the given format.
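
As a quick sanity check (an editor's sketch, not part of the original answer; it assumes an active SparkSession bound to the name spark), feeding from_unixtime a seconds-scale epoch produces a sane 2018 date, confirming that the function expects seconds:

from pyspark.sql.functions import from_unixtime, lit

# 1535934855 is the asker's first value truncated to whole seconds
# (its first 10 digits).
spark.range(1).select(
    from_unixtime(lit(1535934855), "yyyy-MM-dd HH:mm:ss").alias('datetime')
).show()
# Shows 2018-09-03 ...; the exact clock time depends on the session time zone.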

Your data is clearly not expressed in seconds. Nanoseconds, perhaps?

from pyspark.sql.functions import from_unixtime

df.withColumn(
    'datetime',
    from_unixtime(df.timestamp / 1000 ** 3, "yyyy-MM-dd HH:mm:ss:SSS")
).show(truncate=False)

# +-------------------+-----------------------+
# |timestamp          |datetime               |
# +-------------------+-----------------------+
# |1535934855077532656|2018-09-03 02:34:15:000|
# |1535934855077532656|2018-09-03 02:34:15:000|
# |1535935539886503614|2018-09-03 02:45:39:000|
# +-------------------+-----------------------+
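
The milliseconds above render as :000 because from_unixtime works at second granularity, so the fractional part of timestamp / 1000 ** 3 is discarded. A hedged alternative (an editor's sketch, not part of the original answer) that keeps sub-second precision is to cast the scaled value to a timestamp instead:

from pyspark.sql.functions import col

# Casting fractional seconds-since-epoch to TimestampType keeps the
# sub-second part; double arithmetic limits accuracy to roughly the
# microsecond at this magnitude.
df.withColumn(
    'datetime',
    (col('timestamp') / 10 ** 9).cast('timestamp')
).show(truncate=False)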

Answer 1 (score: 0)

I tried this, but they all came back as 1970-01-01.
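
An editor's note (a hedged diagnostic, not from either answer): results collapsing to 1970-01-01 usually mean the value handed to from_unixtime was at or near zero, for example because a seconds-scale epoch was divided down again. Checking the digit count of the raw column (a string here, as in the question) reveals its unit before any conversion:

from pyspark.sql.functions import length

# ~10 digits -> seconds, ~13 -> milliseconds,
# ~16 -> microseconds, ~19 -> nanoseconds
df.select('timestamp', length('timestamp').alias('digits')).show()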