I have a Spark application that loads a CSV file, converts it to Parquet, stores the Parquet file in Data Lake storage, and then loads the data into a BigQuery table.
The problem is that when the CSV contains timestamp values that are too old, the conversion itself succeeds, but the timestamp column cannot be loaded into the BigQuery table.
When I set the configuration spark.sql.parquet.outputTimestampType to TIMESTAMP_MICROS, I get this error on BigQuery:
Cannot return an invalid timestamp value of -62135607600000000 microseconds relative to the Unix epoch. The range of valid timestamp values is [0001-01-1 00:00:00, 9999-12-31 23:59:59.999999]; error in writing field reference_date
When I set the configuration spark.sql.parquet.outputTimestampType to TIMESTAMP_MILLIS, I get this error on Airflow:
Error while reading data, error message: Invalid timestamp value -62135607600000 for field 'reference_date' of type 'INT64' (logical type 'TIMESTAMP_MILLIS'): generic::out_of_range: Invalid timestamp value: -62135607600000
A sample row from the CSV that triggers the problem:

id,reference_date
"6829baef-bcd9-412a-a2f3-abdfed02jsd","0001-01-02 21:00:00"
This is the conversion code (reference_date is cast to a timestamp column):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{DataType, TimestampType}

def castDFColumn(
df: DataFrame,
column: String,
dataType: DataType
): DataFrame = df.withColumn(column, df(column).cast(dataType))
...
var df = spark
.read
.format("csv")
.option("header", true)
.load("myfile.csv")
df = castDFColumn(df, "reference_date", TimestampType)
df
.write
.mode("overwrite")
.parquet("path/to/save")
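To check what the cast actually produces for that row, the epoch value can be inspected before the write (a quick sketch against the DataFrame above, using Spark's built-in cast from timestamp to epoch seconds):

import org.apache.spark.sql.functions.col

// A timestamp cast to long yields seconds since the Unix epoch;
// compare this against the -62135607600000000 microseconds reported by BigQuery
df.select(col("reference_date").cast("long").as("epoch_seconds")).show(false)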
These are the relevant Spark settings for the job:

import org.apache.spark.SparkConf

val conf = new SparkConf().setAppName("Load CSV")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MILLIS/TIMESTAMP_MICROS")
conf.set("spark.sql.session.timeZone", "UTC")
It looks like the timestamp ends up as 0000-12-31 21:00:00, or something close to it, which is outside the range BigQuery accepts for an INT64 timestamp column.
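The value in the error message supports that guess; converting it back with java.time (a quick check, treating -62135607600000000 microseconds as -62135607600 seconds):

import java.time.Instant

// -62135607600000000 microseconds relative to the Unix epoch, expressed in seconds
val seconds = -62135607600L
println(Instant.ofEpochSecond(seconds)) // 0000-12-31T21:00:00Z, just before BigQuery's minimum of 0001-01-01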
Has anyone run into this before?