I have a parquet folder containing a partitioned dataset that I want to read and process with Structured Streaming. In my code I have the following:
val someFile = sparkSession.readStream.option("checkpointLocation", "checkpoint")
.schema(schemaString.asInstanceOf[StructType])
.format("parquet")
.load(inputProperties("input.path"))
.drop(col("SOMECOL"))
.filter($"SOMEOTHERCOL" isNotNull)
When I run the above, I get the following error:
org.apache.spark.sql.execution.QueryExecutionException: Encounter error while reading parquet files. One possible cause: Parquet column cannot be converted in the corresponding files. Details:
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:198)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 1 in block 0 in file file:/path/to/test/data/PART_BY_DATE=20180914/part-00000-dc8f7897-7530-4f65-b184-bae85e3bc2d6.snappy.parquet
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:223)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:215)
at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:186)
... 13 more
Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Integer
at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
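The ClassCastException at the bottom suggests that a column my supplied schema declares as an integer type is physically stored as a byte array in at least one of the files. A quick check I could sketch in the shell (the partition path is just the one taken from the stack trace) would be to compare that partition's inferred schema with the folder-level one:

// Sketch: print the schema inferred for the single failing partition next to
// the schema inferred for the whole folder, to spot the mismatching column.
spark.read.parquet("/path/to/test/data/PART_BY_DATE=20180914").printSchema()
spark.read.parquet("/path/to/test/data/").printSchema()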
However, when I run the same code without specifying a schema and with schema inference turned on, everything works fine:
val someFile = sparkSession.readStream.option("checkpointLocation", "checkpoint")
//.schema(schemaString.asInstanceOf[StructType])
.option("spark.sql.streaming.schemaInference","true")
.format("parquet")
.load(inputProperties("input.path"))
.drop(col("SOMECOL"))
.filter($"SOMEOTHERCOL" isNotNull)
From the above, the problem appears to be with the schema. What confuses me is that the schema was extracted from the same parquet folder using the shell:
scala> println(spark.read.parquet("/path/to/test/data/").schema.prettyJson)
and then fed back to the streaming reader with:
.schema(schemaString.asInstanceOf[StructType])
Loading the files and printing the schema works fine in the shell.
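For completeness, the round-trip I have in mind looks roughly like this (a sketch only; I'm assuming the JSON string is parsed back with DataType.fromJson, and spark here is the shell session while sparkSession and inputProperties come from my application code):

import org.apache.spark.sql.types.{DataType, StructType}

// Extracted once in the shell: the folder's schema as a JSON string.
val schemaJson: String = spark.read.parquet("/path/to/test/data/").schema.prettyJson

// Parsed back into a StructType and handed to the streaming reader.
val schema = DataType.fromJson(schemaJson).asInstanceOf[StructType]

val someFile = sparkSession.readStream
  .schema(schema)
  .format("parquet")
  .load(inputProperties("input.path"))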
Does this mean that Spark does not validate the data against the schema types when reading parquet files, and that Structured Streaming behaves differently here? What am I missing?