I'm trying to load Avro data in Databricks using the "com.databricks.spark.avro" format. The load itself succeeds, but when I try to view the data with
df_data.head(5)
I hit this error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 57 in stage 253.0 failed 4 times, most recent failure: Lost task 57.3 in stage 253.0 (TID 173637, 192.166.221.76, executor 8): com.databricks.spark.avro.SchemaConverters$IncompatibleSchemaException: Cannot convert Avro schema to catalyst type because schema at path new_id is not compatible (avroType = STRING, sqlType = LongType)
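For context, the read I am running is roughly of the following shape (the mount path and the single load call here are placeholders for illustration, not the exact job code):

# Minimal PySpark sketch of the load; the path below is a placeholder.
# "spark" is the SparkSession that Databricks notebooks provide by default.
df_data = spark.read.format("com.databricks.spark.avro").load("/mnt/data/my_avro_dir")
# The read itself returns without error; the failure only surfaces once an
# action such as head() forces the Avro records to actually be decoded.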
I thought it might be because the data is a string and I needed to change it, but that is not the case: new_id = long.
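This is roughly how I am checking the type on the Spark side (printSchema is the standard DataFrame call; the comments describe my reading of the situation rather than captured output):

# Show the schema Spark expects for the loaded DataFrame.
df_data.printSchema()
# For my data this lists new_id as long, which matches the "sqlType = LongType"
# half of the exception, while the Avro files apparently declare it as STRING.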
Why is this happening, and how can I fix it?