I have an RDD of Breeze Vectors and want to compute their mean. My first approach is to use aggregate:
import org.apache.spark.{ SparkConf, SparkContext }
import org.apache.spark.rdd.RDD
import org.scalatest.{ BeforeAndAfterAll, FunSuite, Matchers, Suite }
import org.scalatest.prop.GeneratorDrivenPropertyChecks
import breeze.linalg.{ Vector => BreezeVector }

class CalculateMean extends FunSuite with Matchers with GeneratorDrivenPropertyChecks with SparkSpec {

  test("Calculate mean") {
    type U = (BreezeVector[Double], Int)   // accumulator: (running vector sum, element count)
    type T = BreezeVector[Double]

    val rdd: RDD[T] = sc.parallelize(List(1.0, 2, 3, 4, 5, 6).map { x => BreezeVector(x, x * x) }, 2)

    val zeroValue = (BreezeVector.zeros[Double](2), 0)
    // Add one vector to the partial sum and bump the count.
    val seqOp = (agg: U, x: T) => (agg._1 + x, agg._2 + 1)
    // Merge two partial (sum, count) pairs.
    val combOp = (xs: U, ys: U) => (xs._1 + ys._1, xs._2 + ys._2)

    val mean = rdd.aggregate(zeroValue)(seqOp, combOp)
    println(mean._1 / mean._2.toDouble)
  }
}
/**
 * Setup and tear down spark context
 */
trait SparkSpec extends BeforeAndAfterAll {
  this: Suite =>

  private val master = "local[2]"
  private val appName = this.getClass.getSimpleName

  private var _sc: SparkContext = _
  def sc: org.apache.spark.SparkContext = _sc

  val conf: SparkConf = new SparkConf()
    .setMaster(master)
    .setAppName(appName)

  override def beforeAll(): Unit = {
    super.beforeAll()
    _sc = new SparkContext(conf)
  }

  override def afterAll(): Unit = {
    if (_sc != null) {
      _sc.stop()
      _sc = null
    }
    super.afterAll()
  }
}
However, this algorithm can be numerically unstable (see https://stackoverflow.com/a/1346890/1037094).
How can Knuth's algorithm be implemented for Breeze Vectors in Spark, and is rdd.aggregate the recommended way to do it?
Answer (score: 1)
How can Knuth's algorithm be implemented for Breeze Vectors in Spark, assuming the algorithm described by Knuth is the right choice, and is rdd.aggregate the recommended way to do it?
aggregate could indeed be a good way to do it. Unfortunately it is not, or at least not without some adjustments: Knuth's algorithm is essentially a sequential streaming algorithm, and the function it applies is not associative. Suppose you have a function knuth_mean. It should be clear that (ignoring the count and the single-element case):
(knuth_mean (knuth_mean (knuth_mean 1 2) 3) 4)

is not the same as

(knuth_mean (knuth_mean 1 2) (knuth_mean 3 4))
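To make this concrete, here is a minimal scalar sketch (knuthStep is a hypothetical helper of my own, not part of the original code, Breeze, or Spark) showing why the streaming update cannot double as an associative combine function: it assumes its second argument is a single new element, not another partial mean.

// Knuth's running-mean update for scalars: mean' = mean + (x - mean) / n
def knuthStep(acc: (Double, Long), x: Double): (Double, Long) = {
  val n = acc._2 + 1
  (acc._1 + (x - acc._1) / n, n)
}

// Correct sequential application over 1, 2, 3, 4 -> mean 2.5
val sequential = List(1.0, 2.0, 3.0, 4.0).foldLeft((0.0, 0L))(knuthStep)

// Treating one partial mean as if it were a single new element -> wrong result
val left  = List(1.0, 2.0).foldLeft((0.0, 0L))(knuthStep)  // (1.5, 2)
val right = List(3.0, 4.0).foldLeft((0.0, 0L))(knuthStep)  // (3.5, 2)
val naive = knuthStep(left, right._1)                      // about (2.17, 3), not 2.5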
However, you can still use Knuth's algorithm to compute the mean within each partition:
def partMean(n: Int)(iter: Iterator[BreezeVector[Double]]) = {
  // Knuth's running-mean update applied sequentially within one partition:
  // mean' = mean + (x - mean) / (count + 1)
  val partialMean = iter.foldLeft((BreezeVector.zeros[Double](n), 0.0))(
    (acc: (BreezeVector[Double], Double), v: BreezeVector[Double]) =>
      (acc._1 + (v - acc._1) / (acc._2 + 1.0), acc._2 + 1.0))
  Iterator(partialMean)
}

val means = rdd.mapPartitions(partMean(lengthOfVector))
The remaining question is how to aggregate these partial results. Applying Knuth's algorithm directly would require unfolding the partitions, which largely defeats the purpose of using Spark. You can take a look at the StatCounter.merge method to see how this is handled internally in Spark.
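As a rough sketch of that idea (modelled on the weighted update used inside StatCounter.merge; mergePartials and globalMean are names I introduce here, not from the original answer), each per-partition result already carries its count, so two partials can be combined without revisiting individual elements:

// Merge two (partial mean, count) pairs, reusing the BreezeVector alias and
// the `means` RDD defined above.
def mergePartials(a: (BreezeVector[Double], Double),
                  b: (BreezeVector[Double], Double)): (BreezeVector[Double], Double) = {
  val (meanA, nA) = a
  val (meanB, nB) = b
  if (nA == 0.0) b
  else if (nB == 0.0) a
  else {
    val n = nA + nB
    // Weighted form of the running-mean update; avoids forming large sums.
    (meanA + (meanB - meanA) * (nB / n), n)
  }
}

// Combine the per-partition results produced above.
val globalMean = means.reduce(mergePartials)._1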