Converting a string to a timestamp in Spark using Scala

Date: 2016-05-20 14:33:58

Tags: scala apache-spark apache-spark-sql timestamp

I have a DataFrame named train with the following schema:

root
|-- date_time: string (nullable = true)
|-- site_name: integer (nullable = true)
|-- posa_continent: integer (nullable = true)

I want to cast the date_time column to timestamp and create a new year column containing the year extracted from date_time.

To be clear, I have the following DataFrame:

+-------------------+---------+--------------+
|          date_time|site_name|posa_continent|
+-------------------+---------+--------------+
|2014-08-11 07:46:59|        2|             3|
|2014-08-11 08:22:12|        2|             3|
|2015-08-11 08:24:33|        2|             3|
|2016-08-09 18:05:16|        2|             3|
|2011-08-09 18:08:18|        2|             3|
|2009-08-09 18:13:12|        2|             3|
|2014-07-16 09:42:23|        2|             3|
+-------------------+---------+--------------+

I want to obtain the following DataFrame:

+-------------------+---------+--------------+--------+
|          date_time|site_name|posa_continent|year    |
+-------------------+---------+--------------+--------+
|2014-08-11 07:46:59|        2|             3|2014    |
|2014-08-11 08:22:12|        2|             3|2014    |
|2015-08-11 08:24:33|        2|             3|2015    |
|2016-08-09 18:05:16|        2|             3|2016    |
|2011-08-09 18:08:18|        2|             3|2011    |
|2009-08-09 18:13:12|        2|             3|2009    |
|2014-07-16 09:42:23|        2|             3|2014    |
+-------------------+---------+--------------+--------+

2 Answers:

Answer 0: (score: 12)

Well, if you want to cast the date_time column to timestamp and create a new column holding the year value, just do this:

import org.apache.spark.sql.functions.year
// the $"..." column syntax needs the SQL implicits in scope,
// e.g. import spark.implicits._ (Spark 2.x) or import sqlContext.implicits._ (Spark 1.x)

df
  .withColumn("date_time", $"date_time".cast("timestamp"))  // cast the string to timestamp
  .withColumn("year", year($"date_time"))                   // extract the year into a new column

Answer 1: (score: 1)

You can map over the DataFrame to append the year to the end of each row:

import org.apache.spark.sql.Row
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat

df.map {
  // pattern-match each row, parse the date string with Joda-Time, and append the year
  case Row(col1: String, col2: Int, col3: Int) =>
    (col1, col2, col3, DateTime.parse(col1, DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")).getYear)
}.toDF("date_time", "site_name", "posa_continent", "year").show()
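
Since the year is always the first four characters of this string format, a lighter-weight variant (a sketch, not part of the original answer) skips date parsing altogether:

import org.apache.spark.sql.functions.substring

// assumes import spark.implicits._ is in scope for the $ syntax
df.withColumn("year", substring($"date_time", 1, 4).cast("int")).show()

Note that this only works because the column follows a fixed yyyy-MM-dd HH:mm:ss layout; the cast-to-timestamp route from the first answer is more robust to format changes.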