Converting long to Timestamp to insert into a database

Asked: 2019-02-07 11:00:39

Tags: apache-spark apache-spark-sql

Goal: read data from a JSON file in which the timestamp field is a long, and insert it into a table that has a timestamp column. The problem is that I don't know how to convert the long into a timestamp type for the insert.

Sample input file:

    {"sensor_id":"sensor1","reading_time":1549533263587,"notes":"My Notes for 
    Sensor1","temperature":24.11,"humidity":42.90}

I want to read this, build a Bean from it, and then insert it into the table. Here is my Bean definition:

public class DummyBean {

    private String sensor_id;
    private String notes;
    private Timestamp reading_time;
    private double temperature;
    private double humidity;

    // getters, setters, and a no-arg constructor (needed by Encoders.bean) are omitted here
}

Here is the table I want to insert into:

    create table dummy (
    id serial not null primary key,
    sensor_id   varchar(40),
    notes   varchar(40),
    reading_time    timestamp with time zone default (current_timestamp at time zone 'UTC'),
    temperature   decimal(15,2),
    humidity      decimal(15,2)
    );

And here is my Spark application that reads the JSON file and performs the insert (append):

SparkSession spark = SparkSession
        .builder()
        .appName("SparkJDBC2")
        .getOrCreate();

// Java Bean used to apply schema to JSON Data
Encoder<DummyBean> dummyEncoder = Encoders.bean(DummyBean.class);

// Read JSON file to DataSet
String jsonPath = "input/dummy.json";
Dataset<DummyBean> readings = spark.read().json(jsonPath).as(dummyEncoder);

// Diagnostics and Sink
readings.printSchema();
readings.show();

// Write to JDBC Sink
String url = "jdbc:postgresql://dbhost:5432/mydb";
String table = "dummy";
Properties connectionProperties = new Properties();
connectionProperties.setProperty("user", "foo");
connectionProperties.setProperty("password", "bar");
readings.write().mode(SaveMode.Append).jdbc(url, table, connectionProperties);

Output and error message:

root
 |-- humidity: double (nullable = true)
 |-- notes: string (nullable = true)
 |-- reading_time: long (nullable = true)
 |-- sensor_id: string (nullable = true)
 |-- temperature: double (nullable = true)

+--------+--------------------+-------------+---------+-----------+
|humidity|               notes| reading_time|sensor_id|temperature|
+--------+--------------------+-------------+---------+-----------+
|    42.9|My Notes for Sensor1|1549533263587|  sensor1|      24.11|
+--------+--------------------+-------------+---------+-----------+

Exception in thread "main" org.apache.spark.sql.AnalysisException: Column "reading_time" not found in schema Some(StructType(StructField(id,IntegerType,false), StructField(sensor_id,StringType,true), StructField(notes,StringType,true), StructField(temperature,DecimalType(15,2),true), StructField(humidity,DecimalType(15,2),true)));
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$4$$anonfun$6.apply(JdbcUtils.scala:147)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$4$$anonfun$6.apply(JdbcUtils.scala:147)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$4.apply(JdbcUtils.scala:146)

3 Answers:

Answer 0 (score: 0):

The exception in your post says that the column "reading_time" cannot be found, so cross-check that the table on the database side actually has that column. Also, the timestamp is in milliseconds, so you need to divide it by 1000 before applying the to_timestamp() function, otherwise you will get a strange date.
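To see the milliseconds-versus-seconds point in isolation, here is a standalone plain-Java sketch (not part of the Spark job) using the sample value from the question:

import java.time.Instant;

public class EpochUnitsCheck {
    public static void main(String[] args) {
        long readingTime = 1549533263587L; // value from the question's JSON
        // Interpreted as seconds, the value lands tens of thousands of years in the
        // future (the "strange date" mentioned above).
        System.out.println(Instant.ofEpochSecond(readingTime));
        // Interpreted as milliseconds it is the expected instant,
        // 2019-02-07T09:54:23.587Z.
        System.out.println(Instant.ofEpochMilli(readingTime));
    }
}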

I was able to reproduce this and convert the reading_time as shown below.

scala> val readings = Seq((42.9,"My Notes for Sensor1",1549533263587L,"sensor1",24.11)).toDF("humidity","notes","reading_time","sensor_id","temperature")
readings: org.apache.spark.sql.DataFrame = [humidity: double, notes: string ... 3 more fields]

scala> readings.printSchema();
root
 |-- humidity: double (nullable = false)
 |-- notes: string (nullable = true)
 |-- reading_time: long (nullable = false)
 |-- sensor_id: string (nullable = true)
 |-- temperature: double (nullable = false)


scala> readings.show(false)
+--------+--------------------+-------------+---------+-----------+
|humidity|notes               |reading_time |sensor_id|temperature|
+--------+--------------------+-------------+---------+-----------+
|42.9    |My Notes for Sensor1|1549533263587|sensor1  |24.11      |
+--------+--------------------+-------------+---------+-----------+


scala>  readings.withColumn("ts", to_timestamp('reading_time/1000)).show(false)
+--------+--------------------+-------------+---------+-----------+-----------------------+
|humidity|notes               |reading_time |sensor_id|temperature|ts                     |
+--------+--------------------+-------------+---------+-----------+-----------------------+
|42.9    |My Notes for Sensor1|1549533263587|sensor1  |24.11      |2019-02-07 04:54:23.587|
+--------+--------------------+-------------+---------+-----------+-----------------------+


scala>

Answer 1 (score: 0):

Thanks for the help. Yes, the table was missing that column, so I fixed it. Here is how I solved it (Java version):

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.to_timestamp;

...
Dataset<Row>  readingsRow = readings.withColumn("reading_time", to_timestamp(col("reading_time").$div(1000L)));

// Write to JDBC Sink
String url = "jdbc:postgresql://dbhost:5432/mydb";
String table = "dummy";
Properties connectionProperties = new Properties();
connectionProperties.setProperty("user", "foo");
connectionProperties.setProperty("password", "bar");
readingsRow.write().mode(SaveMode.Append).jdbc(url, table, connectionProperties);
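As a side note, an equivalent formulation (a sketch, assuming the readings Dataset from the question) is to scale the long and cast it directly to Spark's timestamp type; casting a numeric column to timestamp also interprets it as seconds since the epoch, so the fractional part keeps the milliseconds:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.DataTypes;
import static org.apache.spark.sql.functions.col;

// Divide the epoch-millisecond value by 1000, then cast the resulting double
// (seconds since the epoch) to TimestampType.
Dataset<Row> readingsRow = readings.withColumn(
        "reading_time",
        col("reading_time").divide(1000.0).cast(DataTypes.TimestampType));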

Answer 2 (score: -1):

If your date is a string, you can use

import java.text.SimpleDateFormat;
import java.util.Date;

String readtime = obj.getString("reading_time");
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ"); // Z for time zone
Date reading_time = sdf.parse(readtime); // parse() throws ParseException

or use

new Date(obj.getLong("reading_time"))

if it is a long.
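Tying this back to the question's DummyBean, whose reading_time field is a java.sql.Timestamp, a minimal plain-Java sketch (assuming the value is already available as epoch milliseconds) would be:

import java.sql.Timestamp;

public class LongToTimestamp {
    public static void main(String[] args) {
        long readingTime = 1549533263587L; // epoch milliseconds from the JSON sample
        // java.sql.Timestamp's constructor takes milliseconds since the epoch directly,
        // so no division by 1000 is needed here.
        Timestamp ts = new Timestamp(readingTime);
        System.out.println(ts); // e.g. 2019-02-07 09:54:23.587, depending on the JVM time zone
    }
}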