Combine a whole column of arrays into a single array

Date: 2016-08-23 15:41:49

Tags: scala apache-spark spark-dataframe

I have this DataFrame, and I want to combine all the arrays in the data column into one big array, separate from the DataFrame.

Scala and the DataFrame API are still quite new to me, but I gave it a try:

import scala.collection.mutable.{ListBuffer, WrappedArray}

case class Tile(data: Array[Int])

val ta = Tile(Array(1,2))
val tb = Tile(Array(3,4))
val tc = Tile(Array(5,6))

val df = ListBuffer(ta, tb, tc).toDF()

// Combine contents of the DF into one array
val result = new Array[Int](6)
var offset = 0
val combine = (t: WrappedArray[Int]) => {
    Array.copy(t, 0, result, offset, t.length)
    offset += t.length
}

df.foreach(r => combine(r(0).asInstanceOf[WrappedArray[Int]]))

df.show()
+------+
|  data|
+------+
|[1, 1]|
|[2, 2]|
|[3, 3]|
+------+

When I run it, I get the following error:

16/08/23 11:21:32 ERROR executor.Executor: Exception in task 0.0 in stage 17.0 (TID 17)
scala.MatchError: WrappedArray(1, 1) (of class scala.collection.mutable.WrappedArray$ofRef)
at scala.runtime.ScalaRunTime$.array_apply(ScalaRunTime.scala:71)
at scala.Array$.slowcopy(Array.scala:81)
at scala.Array$.copy(Array.scala:107)
at $line150.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:32)
at $line150.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:31)
at $line190.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:46)
at $line190.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:46)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:74

Can anyone point me in the right direction? Thanks!

1 Answer:

Answer 0 (score: 1)

When using Spark, you can't accumulate things with foreach the way you normally would. Since Spark distributes the work across all of the executors, the function needs to be Serializable, and each executor mutates its own copy of any captured variables, so changes to result and offset never make it back to the driver.

If you still want to do it in a similar way, use an Accumulator, which supports Spark's distributed execution model:

import org.apache.spark.rdd.RDD

val myRdd: RDD[List[Int]] = sc.parallelize(List(List(1,2), List(3,4), List(5,6)))

val acc = sc.collectionAccumulator[Int]("MyAccumulator")

myRdd.foreach(l => l.foreach(i => acc.add(i)))

Or, in your case:

case class Tile(data: Array[Int])

val myRdd: RDD[Tile] = sc.parallelize(List(
  Tile(Array(1,2)),
  Tile(Array(3,4)),
  Tile(Array(5,6))
))

val acc = sc.collectionAccumulator[Int]("MyAccumulator")

myRdd.foreach(t => t.data.foreach(i => acc.add(i)))
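Once the action has run, the accumulated values can be read back on the driver. A minimal sketch of that last step, assuming Spark 2.x, where a collection accumulator exposes its contents as a java.util.List:

import scala.collection.JavaConverters._

// Read the accumulator on the driver and turn it into a plain Scala array.
// Note: foreach gives no ordering guarantee across partitions, so the element
// order may differ from the original row order.
val combined: Array[Int] = acc.value.asScala.toArray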