Does repartition(1) always preserve ordering?

Asked: 2017-05-19 14:29:18

Tags: scala apache-spark rdd

I need to zip two RDDs that may or may not have the same partitioning, hence the need to repartition. I also need to preserve ordering when zipping, and I know that repartitioning generally shuffles. But the code below shows that repartition(1) did not reshuffle the RDD. Was that just luck this time, or is it guaranteed every time?

Is repartition(1) similar to .collect, in that both bring the RDD onto a single node?

scala> var k = sc.parallelize((1 to 100),4)
k: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:27

scala> k.repartition(2)
res0: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[4] at repartition at <console>:30

scala> res0.collect
res1: Array[Int] = Array(1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 51, 53, 55, 57, 59, 61, 63, 65, 67, 69, 71, 73, 75, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 77, 79, 81, 83, 85, 87, 89, 91, 93, 95, 97, 99)


scala> var l = sc.parallelize((1 to 100),4)
l: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[11] at parallelize at <console>:27

scala> l.repartition(1)
res5: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[15] at repartition at <console>:30

scala> .collect
res6: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)

1 Answer:

Answer 0 (score: 1)

When you repartition to a lower value (and 1 is the lowest possible number of partitions), you are effectively doing the job of the coalesce method.

The docstring (and implementation) of the repartition method will be clearer than any answer I could give:

/**
 * Return a new RDD that has exactly numPartitions partitions.
 *
 * Can increase or decrease the level of parallelism in this RDD. Internally, this uses
 * a shuffle to redistribute data.
 *
 * If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
 * which can avoid performing a shuffle.
 */
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
  coalesce(numPartitions, shuffle = true)
}
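
For reference, here is a minimal spark-shell sketch of the coalesce alternative the docstring recommends. This is a sketch only, assuming a live SparkContext `sc`; the variable names are hypothetical, and the ordering observed for repartition(1) reflects the transcript above rather than a documented guarantee.

// Minimal sketch, run in spark-shell (assumes a live SparkContext `sc`).
val m = sc.parallelize(1 to 100, 4)

// repartition(1) is coalesce(1, shuffle = true): a shuffle into a single
// output partition, which is how the transcript above came back in order
// despite the shuffle.
val viaRepartition = m.repartition(1).collect()

// coalesce(1) merges the four partitions without any shuffle,
// concatenating them in partition order, so ordering is preserved here too.
val viaCoalesce = m.coalesce(1).collect()

// On this input both arrays come back as 1..100.
println(viaRepartition.sameElements(viaCoalesce)) // true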

However, if you plan to zip, consider that zipping will shuffle things around anyway. If you really want to control the partitioning, you will need to repartition manually (possibly with a custom partitioner if you have a PairRDD) and then use zipPartitions, specifying that you want to preserve the partitioning, as sketched below.
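
As a hedged illustration of that route (a sketch only; the RDDs `a` and `b` are hypothetical stand-ins, assumed to already have matching partitioning):

// Sketch: assumes both RDDs have the same number of partitions and the
// same number of elements in each partition.
val a = sc.parallelize(1 to 100, 4)
val b = sc.parallelize(101 to 200, 4)

// zipPartitions hands you the raw partition iterators; the
// preservesPartitioning flag declares that the parent's partitioner
// (if any) still applies to the output.
val zipped = a.zipPartitions(b, preservesPartitioning = true) {
  (leftIter, rightIter) => leftIter.zip(rightIter)
}

zipped.take(3) // Array((1,101), (2,102), (3,103))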

In most cases, though, you will probably just want to stick with zip's default implementation, which is the following:

/**
 * Zips this RDD with another one, returning key-value pairs with the first element in each RDD,
 * second element in each RDD, etc. Assumes that the two RDDs have the *same number of
 * partitions* and the *same number of elements in each partition* (e.g. one was made through
 * a map on the other).
 */
def zip[U: ClassTag](other: RDD[U]): RDD[(T, U)] = withScope {
  zipPartitions(other, preservesPartitioning = false) { (thisIter, otherIter) =>
    new Iterator[(T, U)] {
      def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
        case (true, true) => true
        case (false, false) => false
        case _ => throw new SparkException("Can only zip RDDs with " +
          "same number of elements in each partition")
      }
      def next(): (T, U) = (thisIter.next(), otherIter.next())
    }
  }
}

As you can see, zip already does exactly what you want.
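
To tie this back to the transcript, here is a short usage sketch reusing `k` and `l` from the question. The outputs shown in comments are what I would expect on this input, not a documented ordering guarantee.

// k and l each have 4 partitions of 25 elements, so zip succeeds and
// pairs elements positionally within matching partitions.
val pairs = k.zip(l)
pairs.take(3) // Array((1,1), (2,2), (3,3))

// Zipping RDDs with different partition counts raises an error once an
// action forces evaluation:
// k.zip(l.repartition(2)).collect()
//   => IllegalArgumentException: Can't zip RDDs with unequal numbers of partitions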