Custom Spark Partitioner for an RDD of S3 paths

Asked: 2016-03-21 01:31:49

Tags: scala amazon-s3 apache-spark rdd partitioner

I have an RDD[(Long, String)] of S3 paths (bucket + key) keyed by their sizes. I want to partition it so that each partition gets paths whose sizes add up to roughly the same total. That way, when I read the contents of these paths, each partition should hold roughly the same amount of data. I wrote this custom partitioner for that purpose:

import org.apache.spark.Partitioner
import scala.collection.mutable.PriorityQueue

class S3Partitioner(partitions: Int, val totalSize: Long) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")
  require(totalSize >= 0, s"totalSize ($totalSize) cannot be negative.")

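  // Each entry is (partition index, remaining capacity); every partition starts with an equal share of totalSize.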
  val pq = PriorityQueue[(Int, Long)]()
  (0 until partitions).foreach { partition =>
    pq.enqueue((partition, totalSize / partitions))
  }

  def getPartition(key: Any): Int = key match {
    case k: Long =>
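      // Charge this key's size against the dequeued partition's capacity, then requeue it.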
      val (partition, capacityLeft) = pq.dequeue
      pq.enqueue((partition, capacityLeft - k))
      partition
    case _ => 0
  }

  def numPartitions: Int = partitions

  override def equals(other: Any): Boolean = other match {
    case p: S3Partitioner =>
      p.totalSize == totalSize && p.numPartitions == numPartitions
    case _ => false
  }

  override def hashCode: Int = {
    (972 * numPartitions.hashCode) ^ (792 * totalSize.hashCode)
  }
}

This partitioner should perform best when the RDD it is given has its keys (the sizes) sorted in descending order; a sketch of that sort appears after the usage snippet below. When I tried to use it, code that had previously been working started failing with this error:

Cause: java.io.NotSerializableException: scala.collection.mutable.PriorityQueue$ResizableArrayAccess

This is how I am using it:

val pathsWithSize: RDD[(Long, String)] = ...
val totalSize = pathsWithSize.map(_._1).reduce(_ + _)

new PairRDDFunctions(pathsWithSize)
  .partitionBy(new S3Partitioner(5 * sc.defaultParallelism, totalSize))
  .mapPartitions { iter =>
    iter.map { case (_, path) => readS3(path) }
  }
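
For reference, the descending order mentioned above can be produced with the standard sortByKey operation before partitionBy. This is only a minimal sketch that reuses pathsWithSize, totalSize, and the readS3 placeholder from the snippet above, and assumes the usual RDD-to-OrderedRDDFunctions implicits are in scope:

val sortedBySize = pathsWithSize.sortByKey(ascending = false) // largest objects first

new PairRDDFunctions(sortedBySize)
  .partitionBy(new S3Partitioner(5 * sc.defaultParallelism, totalSize))
  .mapPartitions { iter =>
    iter.map { case (_, path) => readS3(path) }
  }

The sort adds an extra shuffle and has no bearing on the serialization error above; it only affects the order in which sizes reach getPartition.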

I don't know how to fix this. Any help is much appreciated.
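
For context, partitionBy ships the Partitioner instance to the executors as part of the shuffle, so all of its fields must be serializable, and the stack trace shows that the mutable PriorityQueue's internal ResizableArrayAccess class is not. One commonly used workaround is to keep the queue out of serialization with @transient lazy val, so it is rebuilt from partitions and totalSize on whichever JVM deserializes the partitioner. This is only a sketch of that idea, not a drop-in fix:

import org.apache.spark.Partitioner
import scala.collection.mutable.PriorityQueue

class S3Partitioner(partitions: Int, val totalSize: Long) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")
  require(totalSize >= 0, s"totalSize ($totalSize) cannot be negative.")

  // Marked @transient so it is never serialized with the partitioner;
  // it is rebuilt lazily from partitions and totalSize after deserialization.
  @transient private lazy val pq: PriorityQueue[(Int, Long)] = {
    val q = PriorityQueue[(Int, Long)]()
    (0 until partitions).foreach(p => q.enqueue((p, totalSize / partitions)))
    q
  }

  def getPartition(key: Any): Int = key match {
    case k: Long =>
      val (partition, capacityLeft) = pq.dequeue
      pq.enqueue((partition, capacityLeft - k))
      partition
    case _ => 0
  }

  def numPartitions: Int = partitions

  // equals and hashCode unchanged from the original definition above
}

Keep in mind that every deserialized copy of the partitioner starts from a fresh queue, so the remaining-capacity bookkeeping is per copy rather than global; the sketch only removes the NotSerializableException.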

0 Answers

No answers yet.