I have a column of type Timestamp in my dataframe, in the format yyyy-MM-dd HH:mm:ss.
The column is sorted by time, with earlier dates in earlier rows.
When I run this command:
List<Row> timeRows = df.withColumn(ts, df.col(ts).cast("long")).select(ts).collectAsList();
I run into a strange problem: some later dates get smaller values than earlier dates. Example:
[670] : 1550967304 (2019-02-23 04:30:15)
[671] : 1420064100 (2019-02-24 08:15:04)
Is this the correct way to convert to epoch, or is there a better one?
Answer 0: (score: 1)
Try unix_timestamp to convert the string datetime to a timestamp. Per the documentation:
unix_timestamp(Column s, String p) converts a time string with the given pattern (see http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) to a Unix timestamp (in seconds), and returns null if it fails.
import org.apache.spark.sql.functions._
val format = "yyyy-MM-dd HH:mm:ss"
df.withColumn("epoch_sec", unix_timestamp($"ts", format)).select("epoch_sec").collectAsList()
Also see https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-functions-datetime.html
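Since your snippet is Java, a minimal Java equivalent might look like this (a sketch; it assumes your column is named ts and holds strings in the yyyy-MM-dd HH:mm:ss format):
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.unix_timestamp;

// Parse the string column with an explicit pattern into epoch seconds
String format = "yyyy-MM-dd HH:mm:ss";
List<Row> timeRows = df
    .withColumn("epoch_sec", unix_timestamp(col("ts"), format))
    .select("epoch_sec")
    .collectAsList();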
Answer 1: (score: 0)
You should use the built-in function unix_timestamp() from org.apache.spark.sql.functions:
https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/functions.html#unix_timestamp()
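For instance, a minimal sketch (assuming a string column named ts in the default yyyy-MM-dd HH:mm:ss format, which the one-argument overload expects):
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.unix_timestamp;

// unix_timestamp(Column) parses with the default pattern yyyy-MM-dd HH:mm:ss
df = df.withColumn("epoch", unix_timestamp(col("ts")));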
Answer 2: (score: 0)
I think what you are looking for is unix_timestamp(), which you can import via:
import static org.apache.spark.sql.functions.unix_timestamp;
and use like this:
df = df.withColumn(
    "epoch",
    unix_timestamp(col("date")));
Here is a full example, where I tried to mimic your use case:
package net.jgp.books.spark.ch12.lab990_others;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_unixtime;
import static org.apache.spark.sql.functions.unix_timestamp;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

/**
 * Use of from_unixtime() and unix_timestamp().
 *
 * @author jgp
 */
public class EpochTimestampConversionApp {

  /**
   * main() is your entry point to the application.
   *
   * @param args
   */
  public static void main(String[] args) {
    EpochTimestampConversionApp app = new EpochTimestampConversionApp();
    app.start();
  }

  /**
   * The processing code.
   */
  private void start() {
    // Creates a session on a local master
    SparkSession spark = SparkSession.builder()
        .appName("expr()")
        .master("local")
        .getOrCreate();

    StructType schema = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField(
            "event",
            DataTypes.IntegerType,
            false),
        DataTypes.createStructField(
            "original_ts",
            DataTypes.StringType,
            false) });

    // Building a df with a sequence of chronological timestamps
    List<Row> rows = new ArrayList<>();
    long now = System.currentTimeMillis() / 1000;
    Random random = new Random();
    for (int i = 0; i < 1000; i++) {
      rows.add(RowFactory.create(i, String.valueOf(now)));
      now += random.nextInt(3) + 1;
    }
    Dataset<Row> df = spark.createDataFrame(rows, schema);
    df.show();
    df.printSchema();

    // Turning the timestamps to Timestamp datatype
    df = df.withColumn(
        "date",
        from_unixtime(col("original_ts")).cast(DataTypes.TimestampType));
    df.show();
    df.printSchema();

    // Turning back the timestamps to epoch
    df = df.withColumn(
        "epoch",
        unix_timestamp(col("date")));
    df.show();
    df.printSchema();

    // Collecting the result and printing out
    List<Row> timeRows = df.collectAsList();
    for (Row r : timeRows) {
      System.out.printf("[%d] : %s (%s)\n",
          r.getInt(0),
          r.getAs("epoch"),
          r.getAs("date"));
    }
  }
}
The output should be:
...
[994] : 1551997326 (2019-03-07 14:22:06)
[995] : 1551997329 (2019-03-07 14:22:09)
[996] : 1551997330 (2019-03-07 14:22:10)
[997] : 1551997332 (2019-03-07 14:22:12)
[998] : 1551997333 (2019-03-07 14:22:13)
[999] : 1551997335 (2019-03-07 14:22:15)
Hope this helps.