Spark / Scala: Creating a nested structure with reduceByKey using only RDDs

Date: 2017-08-29 11:15:39

Tags: scala apache-spark rdd

I want to create a nested structure using RDDs only. I am able to do this with the groupBy function, but it performs poorly on large data, so I would like to do it with reduceByKey instead. However, I cannot get the result I want. Any help would be appreciated.

Input data:

val sales = sc.parallelize(List(
  ("West",  "Apple",  2.0, 10),
  ("West",  "Apple",  3.0, 15),
  ("West",  "Orange", 5.0, 15),
  ("South", "Orange", 3.0, 9),
  ("South", "Orange", 6.0, 18),
  ("East",  "Milk",   5.0, 5)))

The required output is a list of structs per key. I can do this with groupBy, as shown below:

sales.map(value => (value._1, (value._2, value._3, value._4)))
  .groupBy(_._1)
  .map { case(k,v) => (k, v.map(_._2)) }
  .collect()
  .foreach(println)

// (South,List((Orange,3.0,9), (Orange,6.0,18)))
// (East,List((Milk,5.0,5)))
// (West,List((Apple,2.0,10), (Apple,3.0,15), (Orange,5.0,15)))
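
For reference, since the RDD is already keyed after the map, the same grouping can be written with groupByKey directly. A minimal sketch (the field names region/product/price/qty are assumed for readability; the shuffle behavior is the same as groupBy here):

sales.map { case (region, product, price, qty) => (region, (product, price, qty)) }
  .groupByKey()
  .mapValues(_.toList) // one Iterable per key, converted to a List
  .collect()
  .foreach(println)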

But I would like to achieve the same result with reduceByKey. I cannot get a List[Struct]; instead I get a List[List]. Is there a way to get a List[Struct]?

sales.map(value => (value._1, List(value._2, value._3, value._4)))
  .reduceByKey((a, b) => a ++ b) // concatenating the per-record lists flattens all fields into one list per key
  .collect()
  .foreach(println)

// (South,List(Orange, 3.0, 9, Orange, 6.0, 18))
// (East,List(Milk, 5.0, 5))
// (West,List(Apple, 2.0, 10, Apple, 3.0, 15, Orange, 5.0, 15))

sales.map(value => (value._1, List(value._2, value._3, value._4)))
  .reduceByKey((a, b) => List(a) ++ List(b)) // re-wraps already-merged lists, so the nesting grows unevenly
  .collect()
  .foreach(println)

// (South,List(List(Orange, 3.0, 9), List(Orange, 6.0, 18)))
// (East,List(Milk, 5.0, 5))
// (West,List(List(List(Apple, 2.0, 10), List(Apple, 3.0, 15)), List(Orange, 5.0, 15)))
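
For reference, a minimal sketch of one way to get the desired list of tuples with reduceByKey: wrap each record in a single-element list before reducing, so that concatenation joins lists of intact tuples instead of flattening their fields (the field names in the pattern match are assumed for readability):

sales.map { case (region, product, price, qty) => (region, List((product, price, qty))) }
  .reduceByKey(_ ++ _) // concatenates lists of tuples, keeping each tuple intact
  .collect()
  .foreach(println)

// expected output (key order may vary):
// (South,List((Orange,3.0,9), (Orange,6.0,18)))
// (East,List((Milk,5.0,5)))
// (West,List((Apple,2.0,10), (Apple,3.0,15), (Orange,5.0,15)))

Note that concatenating lists in reduceByKey still sends every element across the shuffle, so for this shape of computation it is not dramatically cheaper than groupByKey; the map-side combine mainly reduces the number of records, not the data volume.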

1 Answer:

Answer 0 (score: 3)
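
Since the motivation is performance on large data, a sketch of an aggregateByKey variant may also be worth considering: it builds each per-key list in a mutable buffer, avoiding an intermediate immutable list allocation per record (the ListBuffer accumulator and the field names are choices made for this sketch, not from the original post):

import scala.collection.mutable.ListBuffer

sales.map { case (region, product, price, qty) => (region, (product, price, qty)) }
  .aggregateByKey(ListBuffer.empty[(String, Double, Int)])(
    (buf, v) => buf += v,  // fold one record into the partition-local buffer
    (b1, b2) => b1 ++= b2  // merge buffers from different partitions
  )
  .mapValues(_.toList)
  .collect()
  .foreach(println)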