Error when writing to Kudu using Spark / Scala

Time: 2018-11-06 14:39:42

Tags: apache-spark apache-kudu

I am trying to write data from Spark to Kudu, but I am getting this error:

    java.lang.UnsupportedOperationException
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.decodeDictionaryIds(VectorizedColumnReader.java:296)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:174)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:230)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:137)
        at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105)

Code sample:

import org.apache.kudu.spark.kudu._  // provides the .kudu implicit on DataFrameWriter

df.write.options(Map("kudu.master" -> "ip:7051",
                     "kudu.table" -> "test_kudu")).mode("append").kudu

Libraries used:

org.apache.kudu:kudu-spark2_2.11:1.8.0  
org.apache.kudu:kudu-client:1.8.0 
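
In case the build setup matters, the dependencies are declared roughly as follows. This is a sketch of a build.sbt excerpt; the Scala version 2.11 is assumed so that %% resolves kudu-spark2 to kudu-spark2_2.11.

    // build.sbt (excerpt); scalaVersion := "2.11.12" is assumed here
    libraryDependencies ++= Seq(
      "org.apache.kudu" %% "kudu-spark2" % "1.8.0",  // resolves to kudu-spark2_2.11:1.8.0
      "org.apache.kudu" %  "kudu-client" % "1.8.0"
    )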

Thanks!

0 answers:

No answers yet.