java.lang.UnsupportedOperationException: Spark error when writing

Date: 2018-11-05 06:37:05

Tags: apache-spark apache-spark-dataset

I get the following error when I try to write a Dataset to a Parquet file:

18/11/05 06:25:43 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 84 in stage 1.0 failed 4 times, most recent failure: Lost task 84.3 in stage 1.0 (TID 989, ip-10-253-194-207.nonprd.aws.csp.net, executor 4): java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
        at org.apache.parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
        at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

But when I call dataset.show() I can see the data, so I'm not sure where to look for the root cause.
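For what it's worth, one way to narrow down such an error (a sketch, assuming the Dataset is read back from an existing Parquet directory; the path below is hypothetical) is to print the schema of each underlying part file, since show() may only touch files whose schema happens to match:

// Hypothetical directory; print each part file's schema to spot mismatches.
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path("/data/my_table"))
  .filter(_.getPath.getName.endsWith(".parquet"))
  .foreach { f =>
    println(f.getPath)
    spark.read.parquet(f.getPath.toString).printSchema()
  }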

3 Answers:

Answer 0 (score: 0)

Have you double-checked that there is no OutOfMemory anywhere in the logs? Is it possible that you are using a data type that Parquet does not support?

Could you provide the relevant source code, showing the schema definition (case class or whatever) plus the write call?
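For illustration, the kind of snippet being asked for might look like this (hypothetical; not the asker's actual code):

// Hypothetical case-class schema plus a Parquet write, for illustration only.
case class Record(rootId: String, value: Int)

import spark.implicits._
val ds = Seq(Record("a", 1), Record("b", 2)).toDS()
ds.write.mode("overwrite").parquet("/tmp/records")  // hypothetical output path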

Answer 1 (score: 0)

I ran into the same problem; in my case it was caused by a schema difference between the Parquet files.

Given this Parquet directory containing a couple of files:

  • /user/user1/parquet_table/part-00000-1e73689f-69e5-471a-8510-1547d108fea3-c000.parquet
  • /user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet

When I tried to merge them (in spark2-shell):

val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-*.parquet")
val parquetFileDFCoal = parquetFileDF.coalesce(8)
parquetFileDFCoal.write.parquet("/tmp/testTemp/0001")

I got this exception:

[Stage 4:> (0 + 8) / 8]20/05/13 17:09:03 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 4.0 (TID 116, node.localhost.localdomain, executor 70): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
...
Caused by: java.lang.UnsupportedOperationException: parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
at parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)

If you inspect each file with spark2-shell, you can find the schema difference. Here:

scala> val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-1e73689f-69e5-471a-8510-1547d108fea3-c000.parquet")
parquetFileDF: org.apache.spark.sql.DataFrame = [root_id: string, father_id: string ... 7 more fields]

scala> parquetFileDF.printSchema()
root
|-- root_id: string (nullable = true)
|-- father_id: string (nullable = true)
|-- self_id: string (nullable = true)
|-- group_name: string (nullable = true)
|-- father_name: string (nullable = true)
|-- cle: string (nullable = true)
|-- value: integer (nullable = true)


scala> val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet ")
parquetFileDF: org.apache.spark.sql.DataFrame = [root_id: string, father_id: string ... 7 more fields]

scala> parquetFileDF.printSchema()
root
|-- root_id: string (nullable = true)
|-- father_id: string (nullable = true)
|-- self_id: string (nullable = true)
|-- group_name: string (nullable = true)
|-- father_name: string (nullable = true)
|-- cle: string (nullable = true)
|-- value: string (nullable = true)

As you can see, the value field is an Integer in one file and a String in the other. To fix it, you have to convert one of the files so that the types match, as in the sketch below.
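For instance, the file whose value column is a string could be rewritten with the column cast to integer (a sketch reusing the path from above; the output directory is made up, and you could equally cast the other file to string instead):

import org.apache.spark.sql.functions.col

// Read the mismatched file, cast value to int, and write a corrected copy.
val fixed = spark.read
  .parquet("/user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet")
  .withColumn("value", col("value").cast("int"))
fixed.write.parquet("/user/user1/parquet_table_fixed")  // hypothetical output directory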

Answer 2 (score: 0)

There is an easier way to detect schema differences between Parquet files: use the mergeSchema option, which will surface the inconsistent fields in the logs.

Sample code:

spark.read.option("mergeSchema", "True").parquet(fileList:_*) 

Sample log:

Caused by: org.apache.spark.SparkException: Failed to merge fields 'field1' and 'field1'. Failed to merge incompatible data types DoubleType and LongType
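A complete invocation might look like this (a sketch; fileList here is just the two part files from the previous answer):

// With incompatible per-file types, mergeSchema fails fast and names the offending field.
val fileList = Seq(
  "/user/user1/parquet_table/part-00000-1e73689f-69e5-471a-8510-1547d108fea3-c000.parquet",
  "/user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet"
)
val merged = spark.read.option("mergeSchema", "true").parquet(fileList: _*)
merged.printSchema()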