Spark java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to java.util.ArrayList

Asked: 2016-11-23 12:45:28

Tags: scala apache-spark apache-spark-sql spark-dataframe

A ClassCastException is thrown whenever any operation is performed on a WrappedArray.

Example: I have a map output as shown below.

Output:

Map(1 -> WrappedArray(Pan4), 2 -> WrappedArray(Pan15), 3 -> WrappedArray(Pan16, Pan17, Pan18), 4 -> WrappedArray(Pan19, Pan1, Pan2, Pan3, Pan4, Pan5, Pan6))]

Calling map.values prints the following output:

MapLike(WrappedArray(Pan4), WrappedArray(Pan15), WrappedArray(Pan16, Pan17, Pan18), WrappedArray(Pan19, Pan1, Pan2, Pan3, Pan4, Pan5, Pan6))

Calling map.values.map(arr => arr) or map.values.foreach { value => println(value) } throws the exception.

I am unable to perform any operation on the WrappedArrays. All I need is the size of (the number of elements in) each WrappedArray.

Error StackTrace
------------------
java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to java.util.ArrayList
    at WindowTest$CustomMedian$$anonfun$1.apply(WindowTest.scala:176)
    at WindowTest$CustomMedian$$anonfun$1.apply(WindowTest.scala:176)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.immutable.Map$Map4.foreach(Map.scala:181)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at WindowTest$CustomMedian.evaluate(WindowTest.scala:176)
    at org.apache.spark.sql.execution.aggregate.ScalaUDAF.eval(udaf.scala:446)
    at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$35.apply(AggregationIterator.scala:376)
    at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$35.apply(AggregationIterator.scala:368)
    at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:154)
    at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

3 Answers:

Answer 0 (score: 11):

Solved the error by converting to Seq (a sequence type).

Earlier:

val bufferMap: Map[Int, util.ArrayList[String]] = buffer.getAs[Map[Int, util.ArrayList[String]]](1)

Modified:

val bufferMap: Map[Int, Seq[String]] = buffer.getAs[Map[Int, Seq[String]]](1)
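A minimal sketch of how this fix might be used inside the UDAF's evaluate step (the wrappedArraySizes helper, the column index, and the size computation are illustrative, not from the original CustomMedian code):

import org.apache.spark.sql.Row

// Hypothetical helper showing the fix: read the map-of-arrays column as
// Seq[String] instead of java.util.ArrayList.
def wrappedArraySizes(buffer: Row): Map[Int, Int] = {
  val bufferMap: Map[Int, Seq[String]] = buffer.getAs[Map[Int, Seq[String]]](1)
  // Ordinary Scala collection operations now work without a ClassCastException,
  // e.g. the number of elements in each WrappedArray:
  bufferMap.map { case (key, values) => key -> values.size }
}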

Answer 1 (score: 1):

Try the following:

map.values.array.foreach { value => println(value) }

array is a method on WrappedArray that returns Array[T], where T is the type of the elements contained in the WrappedArray.
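A rough illustration of that method (the sample values below are made up): array exposes the underlying Array[T] of a single WrappedArray, which can then be iterated or measured directly.

import scala.collection.mutable.WrappedArray

// Sample data: Spark hands array columns back as WrappedArray (built here via
// the implicit Array -> WrappedArray conversion).
val wrapped: WrappedArray[String] = Array("Pan16", "Pan17", "Pan18")

// array returns the underlying Array[T]; here T is String.
val underlying: Array[String] = wrapped.array

underlying.foreach(value => println(value))
println(underlying.length) // prints 3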

Answer 2 (score: 1):

For those using Spark with Java: encode the Dataset into an object instead of working with Row and then calling the getAs method.

Suppose this dataset contains some random information about a machine:

+-----------+------------+------------+-----------+---------+--------------------+
|epoch      |     RValues|     SValues|    TValues|      ids|               codes|
+-----------+------------+------------+-----------+---------+--------------------+
| 1546297225| [-1.0, 5.0]|  [2.0, 6.0]| [3.0, 7.0]|   [2, 3]|[MRT0000020611, M...|
| 1546297226| [-1.0, 3.0]| [-6.0, 6.0]| [3.0, 4.0]|   [2, 3]|[MRT0000020611, M...|
| 1546297227| [-1.0, 4.0]|[-8.0, 10.0]| [3.0, 6.0]|   [2, 3]|[MRT0000020611, M...|
| 1546297228| [-1.0, 6.0]|[-8.0, 11.0]| [3.0, 5.0]|   [2, 3]|[MRT0000020611, M...|
+-----------+------------+------------+-----------+---------+--------------------+

Instead of keeping a Dataset<Row>, create a MachineLog class that matches this dataset's column definitions and build a Dataset<MachineLog>. When converting, use the .as(Encoders.bean(MachineLog.class)) method to define the encoder.

For example:

spark.createDataset(dataset.rdd(), Encoders.bean(MachineLog.class));

However, converting from a Dataset to an RDD is not recommended. Try using the as method instead:

Dataset<MachineLog> mLog = spark.read().parquet("...").as(Encoders.bean(MachineLog.class));

It can also be used after a transformation:

Dataset<MachineLog> machineLogDataset = aDataset
                .join(
                        otherDataset,
                        functions.col("...").eqNullSafe("...")
                ).as(Encoders.bean(MachineLog.class));

Keep in mind that the MachineLog class must follow the bean serialization rules (i.e., have an explicit empty constructor, and getters and setters).