I have written one of my Spark DataFrame columns to Kafka in Avro format. Then I try to read the data back from that topic and convert it from Avro into a DataFrame column. The column is of timestamp type, but instead of the timestamps stored in the database I get some default value:
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
1970-01-01 00:00:00
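All of these defaults are exactly the Unix epoch, i.e. what a long value of 0 looks like when rendered as a timestamp, which suggests that whatever from_avro produces for this column is 0 rather than the stored values. A quick check in plain Scala (no Spark needed):

import java.sql.Timestamp

// A long of 0 interpreted as a timestamp gives exactly the default seen above
println(new Timestamp(0L)) // 1970-01-01 00:00:00.0 when the JVM timezone is UTC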
The same behavior can be observed with columns of other data types (String, for example). The original timestamp values, which are what I want to get back, look like this:
2019-03-19 12:26:03.003
2019-03-19 12:26:09
2019-03-19 12:27:04.003
2019-03-19 12:27:08.007
2019-03-19 12:28:01.013
2019-03-19 12:28:05.007
2019-03-19 12:28:09.023
2019-03-19 12:29:04.003
2019-03-19 12:29:07.047
2019-03-19 12:30:00.003
And here is the same data after conversion to Avro:
00 F0 E1 9B BC B3 9C C2 05
00 80 E9 F7 C1 B3 9C C2 05
00 F0 86 B2 F6 B3 9C C2 05
00 B0 E9 9A FA B3 9C C2 05
00 90 A4 E1 AC B4 9C C2 05
00 B0 EA C8 B0 B4 9C C2 05
00 B0 88 B3 B4 B4 9C C2 05
00 F0 BE EA E8 B4 9C C2 05
00 B0 89 DE EB B4 9C C2 05
00 F0 B6 9E 9E B5 9C C2 05
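For what it's worth, these bytes look like valid Avro output: each row starts with a 00 union-branch byte followed by a zigzag-encoded varint long holding microseconds since the epoch. Decoding the first row by hand (a minimal sketch, assuming that layout) recovers the expected first timestamp, so the data seems to reach Kafka intact:

import java.time.Instant

// First row of the Avro bytes above
val bytes = Array(0x00, 0xF0, 0xE1, 0x9B, 0xBC, 0xB3, 0x9C, 0xC2, 0x05).map(_.toByte)

// bytes(0) == 0x00 reads as the union branch index (the non-null branch)
var n = 0L
var shift = 0
var i = 1
var more = true
while (more) {
  val b = bytes(i) & 0xFF
  n |= (b & 0x7FL) << shift   // varint: 7 payload bits per byte, LSB first
  shift += 7
  i += 1
  more = (b & 0x80) != 0      // high bit set means another byte follows
}
val micros = (n >>> 1) ^ -(n & 1)            // zigzag decode
println(Instant.ofEpochMilli(micros / 1000)) // prints 2019-03-19T12:26:03.003Z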
How can I fix this conversion problem?
Here is the code used to write the Avro to Kafka, read it back, and convert it into a DataFrame. I tried the to_avro and from_avro methods from spark-avro:
import org.apache.spark.sql.avro._
import spark.implicits._

// Serialize the timestamp column to Avro and write it to the topic
val castDF = testDataDF.select(to_avro(testDataDF.col("update_database_time")) as 'value)
castDF
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", bootstrapServers)
  .option("topic", "app_state_test")
  .save()

// Read the same topic back
val cachedDf = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", bootstrapServers)
  .option("subscribe", "app_state_test")
  .load()

// Reader schema passed to from_avro
val jsonSchema = "{\"name\": \"update_database_time\", \"type\": \"long\", \"logicalType\": \"timestamp-millis\", \"default\": \"NONE\"}"

cachedDf.select(from_avro(cachedDf.col("value"), jsonSchema) as 'test)
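Given the leading 00 byte and the microsecond precision visible in the hex dump, my suspicion (unconfirmed) is that to_avro serialized the nullable column as a union of timestamp-micros and null, while the schema above declares a bare long, so the reader may be consuming the 00 union byte as the whole value. A reader schema mirroring that guess, with logicalType nested inside the type object, would look something like the following sketch:

// Guessed reader schema: union of non-null timestamp-micros and null
val unionSchema = """[{"type": "long", "logicalType": "timestamp-micros"}, "null"]"""
cachedDf.select(from_avro(cachedDf.col("value"), unionSchema) as 'test)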