How to flatten lists inside an RDD?

Date: 2015-01-30 10:03:50

Tags: scala apache-spark

Is it possible to flatten the lists inside an RDD? For example, to convert:

 val xxx: org.apache.spark.rdd.RDD[List[Foo]]

into:

 val yyy: org.apache.spark.rdd.RDD[Foo]

How can this be done?

3 answers:

Answer 0 (score: 15)

val rdd = sc.parallelize(Array(List(1,2,3), List(4,5,6), List(7,8,9), List(10, 11, 12)))
// org.apache.spark.rdd.RDD[List[Int]] = ParallelCollectionRDD ...

val rddi = rdd.flatMap(list => list)
// rddi: org.apache.spark.rdd.RDD[Int] = FlatMappedRDD ...

// which is the same as rdd.flatMap(identity)
// identity is a method defined in the Predef object:
//    def identity[A](x: A): A

rddi.collect()
// res2: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)

Answer 1 (score: 13)

You just need to flatten it, but since there is no explicit flatten method on RDD, you can do:

rdd.flatMap(identity)
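On plain Scala collections, `flatMap(identity)` is equivalent to the built-in `flatten`, which is exactly the behavior this idiom reproduces on an RDD. A minimal sketch, using ordinary lists so no Spark is required:

```scala
object FlattenDemo extends App {
  val nested = List(List(1, 2, 3), List(4, 5, 6))

  // flatMap(identity) concatenates the inner lists into one list
  val viaFlatMap = nested.flatMap(identity)
  val viaFlatten = nested.flatten

  assert(viaFlatMap == List(1, 2, 3, 4, 5, 6))
  assert(viaFlatMap == viaFlatten)
  println(viaFlatMap.mkString(","))  // 1,2,3,4,5,6
}
```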

Answer 2 (score: 0)

You can "pimp" the RDD class to attach a .flatten method (mirroring the List API):

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

object SparkHelper {
  // The higher-kinded parameter C accepts RDD[List[T]], RDD[Seq[T]], etc.
  // (RDD is invariant, so matching on RDD[Seq[T]] alone would not accept RDD[List[T]])
  implicit class SeqRDDExtensions[T: ClassTag, C[X] <: TraversableOnce[X]](val rdd: RDD[C[T]]) {
    def flatten: RDD[T] = rdd.flatMap(identity)
  }
}

which can then be used simply as:

rdd.flatten