I am learning Spark and the parallelism that comes with the distribution of RDD partitions. I have a 4-CPU machine, so I have 4 units of parallelism. I could not find a way to return the members of a given partition index (say, partition 0) without forcing the RDD through toLocalIterator.
I like to keep my code terse. Is there a more concise way to filter an RDD by partition? The two approaches below work, but they look clumsy: the if with no else yields Unit for every element of the other partitions, so the result type widens to List[AnyVal] and the Unit values then have to be filtered back out.
scala> val data = 1 to 20
data: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)
scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[75] at parallelize at <console>:26
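As a quick sanity check on the parallelism claim (this snippet is my addition, not part of the original question), the partition count of the RDD can be read directly:

// Number of partitions the RDD was split into; with the default
// parallelism on a 4-core machine this should be 4.
distData.getNumPartitions   // same as distData.partitions.length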
scala> distData.mapPartitionsWithIndex {
         (index, it) => {
           it.toList.map(x => if (index == 0) (x)).iterator
         }
       }.toLocalIterator.toList.filterNot(
         _.isInstanceOf[Unit]
       )
res107: List[AnyVal] = List(1, 2, 3, 4, 5)
scala> distData.mapPartitionsWithIndex {
         (index, it) => {
           it.toList.map(x => if (index == 0) (x)).iterator
         }
       }.toLocalIterator.toList.filter(
         _ match {
           case x: Unit => false
           case x => true
         }
       )
res108: List[AnyVal] = List(1, 2, 3, 4, 5)
Answer 0 (score: 1)
distData.mapPartitionsWithIndex { (index, it) =>
  if (index == 0) it else Array[Int]().iterator
}
You can return an empty iterator for the other partitions and it will work fine.
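For reference, here is a minimal sketch of how this idiom could be wrapped up for reuse; the helper name partitionAt is my own and not from the answer:

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Hypothetical helper (my naming): keep only the elements that live in
// the target partition; every other partition contributes an empty iterator.
def partitionAt[T: ClassTag](rdd: RDD[T], target: Int): RDD[T] =
  rdd.mapPartitionsWithIndex { (index, it) =>
    if (index == target) it else Iterator.empty
  }

// On the example RDD this should give the same elements as the verbose
// versions above, e.g. Array(1, 2, 3, 4, 5) for partition 0.
partitionAt(distData, 0).collect()

Because the non-matching partitions return empty iterators rather than Unit placeholders, the element type stays Int and no post-hoc filtering of Unit values is needed.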