Aggregate by key without reducing rows

Date: 2016-10-06 11:29:16

Tags: scala apache-spark

I have an RDD with the following structure:

(lang, id, name, max, min)

I want to add another column, total, that for each unique lang contains the maximum of column max minus the minimum of column min, without reducing the number of rows. So I would get something like this:

rdd:
+----+--+----+---+---+
|lang|id|name|max|min|
+----+--+----+---+---+
|  en|  |    |  5|  1|
|  en|  |    |  2|  0|
|  de|  |    |  9|  2|
|  en|  |    |  7|  1|
|  nl|  |    |  3|  0|
|  nl|  |    |  5|  1|
+----+--+----+---+---+

rdd:
+----+--+----+---+---+-----+
|lang|id|name|max|min|total|
+----+--+----+---+---+-----+
|  en|  |    |  5|  1|    7|
|  en|  |    |  2|  0|    7|
|  de|  |    |  9|  2|    7|
|  en|  |    |  7|  1|    7|
|  nl|  |    |  3|  0|    5|
|  nl|  |    |  5|  1|    5|
+----+--+----+---+---+-----+
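
For example, for lang=en the maximum of max is 7 and the minimum of min is 0, so total = 7 - 0 = 7; likewise de gives 9 - 2 = 7 and nl gives 5 - 0 = 5.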

For compatibility reasons, I would like to achieve this without using DataFrames / Spark SQL.

Any suggestions are greatly appreciated!

2 answers:

Answer 0 (score: 1)

You can aggregate by key:

// Sample data: (lang, id, name, max, min)
val rdd = sc.parallelize(Seq(
  ("en", "id1", "name1", 5,  1), ("en", "id2", "name2", 2,  0), 
  ("de", "id3", "name3", 9,  2), ("en", "id4", "name4", 7,  1),
  ("nl", "id5", "name5", 3,  0), ("nl", "id6", "name6", 5,  1)
))

// Per lang, track (running max, running min), starting from the neutral
// element (Long.MinValue, Long.MaxValue), then subtract to get the total.
val totals = rdd.keyBy(_._1).aggregateByKey((Long.MinValue, Long.MaxValue))(
  // seqOp: fold one record into the accumulator
  { case ((maxA, minA), (_, _, _, maxX, minX)) => 
    (Math.max(maxA, maxX), Math.min(minA, minX)) }, 
  // combOp: merge two partial accumulators
  { case ((maxA1, minA1), (maxA2, minA2)) => 
    (Math.max(maxA1, maxA2), Math.min(minA1, minA2))}
).mapValues { case (max, min) => max - min }

Join with the original data:

// join yields (lang, (record, total)); .values drops the key
val vals = rdd.keyBy(_._1).join(totals).values

and flatten (using Shapeless):

import shapeless.syntax.std.tuple._

// Shapeless :+ appends total to each record, producing Tuple6 rows
val result = vals.map { case (x, y) => x :+ y }
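
As an aside, if pulling in Shapeless only for the tuple append feels heavy, a plain pattern match achieves the same flattening for this fixed arity (a minimal sketch assuming the five-field tuples above):

// Destructure the five-field record and rebuild it with total appended
val flattened = vals.map { case ((lang, id, name, mx, mn), total) =>
  (lang, id, name, mx, mn, total)
}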

result.toDF.show

Output:

+---+---+-----+---+---+---+ 
| _1| _2|   _3| _4| _5| _6|
+---+---+-----+---+---+---+
| en|id1|name1|  5|  1|  7|
| en|id2|name2|  2|  0|  7|
| en|id4|name4|  7|  1|  7|
| de|id3|name3|  9|  2|  7|
| nl|id5|name5|  3|  0|  5|
| nl|id6|name6|  5|  1|  5|
+---+---+-----+---+---+---+
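
The default _1 ... _6 headers come from calling toDF without arguments; passing explicit names restores the original schema:

result.toDF("lang", "id", "name", "max", "min", "total").show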

But with complex aggregations this quickly becomes tedious, inefficient, and hard to manage.
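
For reference, the less tedious route would be DataFrame window functions, which the question appears to rule out for compatibility reasons; a sketch, assuming a Spark session with implicits in scope for toDF:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, max, min}

// total = per-lang max(max) - per-lang min(min), computed over a window
// so no rows are lost
val df = rdd.toDF("lang", "id", "name", "max", "min")
val byLang = Window.partitionBy("lang")
df.withColumn("total", max(col("max")).over(byLang) - min(col("min")).over(byLang)).show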

Answer 1 (score: 1)

You have to perform two operations on the RDD:

1. reduceByKey

2. join

// originalRDD is presumably a DataFrame (hence the .rdd call); key each
// row by lang, carrying (min, max): with the schema (lang, id, name,
// max, min), min is row(4) and max is row(3)
val pairRdd = originalRDD.rdd.map(row =>
  (row(0).toString, (row(4).toString.toLong, row(3).toString.toLong))
)

Apply reduceByKey to get the minimum and maximum values for each lang, then subtract:

val filterRDD = pairRdd.reduceByKey(minMax).map {
  case (lang, (min, max)) => (lang, max - min)
}

// Merge two (min, max) pairs, keeping the overall min and max;
// e.g. minMax((1L, 5L), (0L, 2L)) == (0L, 5L)
def minMax(a: (Long, Long), b: (Long, Long)): (Long, Long) = {
  val min = if (a._1 < b._1) a._1 else b._1
  val max = if (a._2 > b._2) a._2 else b._2
  (min, max)
}

Apply the join:

// (lang, (min, max)) joined with (lang, total) gives (lang, min, max, total)
pairRdd.join(filterRDD).map {
  case (lang, ((min, max), total)) => (lang, min, max, total)
}
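
Note that pairRdd carries only lang, min and max, so id and name are dropped by this join. A sketch (variable names are mine) that instead keys the full rows by lang and keeps every column:

val keyedRows = originalRDD.rdd.map(row => (row(0).toString, row))
val withTotal = keyedRows.join(filterRDD).map {
  case (lang, (row, total)) =>
    (lang, row(1).toString, row(2).toString, row(3), row(4), total)
}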