Pair RDD transformation

Date: 2016-05-15 22:47:39

Tags: scala apache-spark

If I have a dataset like:

val list = List((1,1), (1,2), (1,3), (2,2), (2,1), (3,1), (3,3))

and I want to find the average value for each key, the output should be:

(1, 2), (2, 3/2), (3, 2)

Can I do this somehow with groupByKey, countByKey, and reduceByKey, or do I have to use combineByKey as in the example below? I tried combining groupByKey, countByKey, and reduceByKey but couldn't get it to work. Does anyone know a way to do it with those three methods?

val result = input.combineByKey(
  (v) => (v, 1),
  (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1),
  (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
).map { case (key, value) => (key, value._1 / value._2.toFloat) }

result.collectAsMap().map(println(_))

3 answers:

Answer 0 (score: 4)

You should try the following:

val sc: SparkContext = ...
val input = sc.parallelize(List((1,1), (1,2), (1,3), (2,2), (2,1), (3,1), (3,3)))
val averages = input.groupByKey.map { case (key, values) =>
  (key, values.sum / values.size.toDouble)
}

println(averages.collect().toList) // List((1,2.0), (2,1.5), (3,2.0))

Answer 1 (score: 1)

You can simply use PairRDDFunctions.groupByKey and compute what you want from the grouped values.

val avgKey = input.groupByKey.map {
  case (k, v) => (k, v.sum.toDouble / v.size)
}
avgKey.collect
//res2: Array[(Int, Double)] = Array((3,2.0), (1,2.0), (2,1.5))

Answer 2 (score: 1)

With reduceByKey, first map each pair (k, v) to a (k, (v, 1)) triple so the count is carried along with the sum:

rdd.map { case (k, v) => (k, (v, 1)) }.
    reduceByKey((a, v) => (a._1 + v._1, a._2 + v._2)).
    map { case (k, v) => (k, v._1 / v._2.toDouble) }
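For completeness, the reduceByKey-plus-countByKey combination the question asks about can also work: reduceByKey produces per-key sums, countByKey produces per-key counts (as a Map on the driver), and dividing the two gives the averages. The sketch below mimics that arithmetic on plain Scala collections so it is self-contained; the hypothetical Spark equivalents are shown in comments, and the names `sums`, `counts`, and `averages` are made up for illustration.

```scala
// Plain-Scala sketch of the reduceByKey + countByKey approach.
// In Spark, roughly:
//   val sums   = input.reduceByKey(_ + _)          // per-key sums
//   val counts = input.countByKey()                // driver-side Map of per-key counts
//   val avgs   = sums.map { case (k, s) => (k, s / counts(k).toDouble) }
val list = List((1, 1), (1, 2), (1, 3), (2, 2), (2, 1), (3, 1), (3, 3))

// Per-key sum of values (what reduceByKey(_ + _) would compute).
val sums = list.groupBy(_._1).map { case (k, kvs) => (k, kvs.map(_._2).sum) }

// Per-key count of values (what countByKey would return).
val counts = list.groupBy(_._1).map { case (k, kvs) => (k, kvs.size) }

// Average = sum / count, using toDouble to avoid integer division.
val averages = sums.map { case (k, s) => (k, s / counts(k).toDouble) }
// averages: Map(1 -> 2.0, 2 -> 1.5, 3 -> 2.0)
```

Note that because countByKey collects the counts to the driver, this pattern is only reasonable when the number of distinct keys is small; for large key spaces the combineByKey or reduceByKey-on-pairs approaches above keep everything distributed.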