Creating an RDD from a List in Scala & Spark

Date: 2016-03-08 00:36:44

Tags: scala apache-spark rdd

Original data:

ID, NAME, SEQ, NUMBER
A, John, 1, 3
A, Bob, 2, 5
A, Sam, 3, 1
B, Kim, 1, 4
B, John, 2, 3
B, Ria, 3, 5

To build a list of rows grouped by ID, I did the following:

val MapRDD = originDF.map { x => (x.getAs[String](colMap.ID), List(x)) }
val ListRDD = MapRDD.reduceByKey { (a: List[Row], b: List[Row]) => List(a, b).flatten }
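(As an aside, concatenating lists inside reduceByKey rebuilds the intermediate lists over and over; below is a sketch of the same grouping done via groupBy, assuming the ID column is literally named "ID":)

// Equivalent grouping without the per-element List wrappers:
val ListRDD = originDF.rdd
  .groupBy(_.getAs[String]("ID"))   // RDD[(String, Iterable[Row])]
  .mapValues(_.toList)              // RDD[(String, List[Row])]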

My goal is to build this RDD (the aim is, within each ID group, to find the NAME at SEQ-1 and the NUMBER diff):

ID, NAME, SEQ, NUMBER, PRE_NAME, DIFF
A, John, 1, 3, NULL, NULL
A, Bob, 2, 5, John, 2
A, Sam, 3, 1, Bob, -4
B, Kim, 1, 4, NULL, NULL
B, John, 2, 3, Kim, -1
B, Ria, 3, 5, John, 2

Currently, ListRDD looks like:

A, ([A,John,1,3], [A,Bob,2,5], ..)
B, ([B,Kim,1,4], [B,John,2,3], ..)

Here is the code where I tried to build my target RDD from ListRDD (it doesn't work as I want):

  def myFunction(ListRDD: RDD[(String, List[Row])]) = {
    var rows: List[Row] = Nil
    ListRDD.foreach { row =>
      rows ::: make(row._2) // ::: only builds a new list; rows itself is never updated
    }
    // rows stays empty here, and it is a plain List on the driver, not an RDD
  }

  def make(eachList: List[Row]): List[Row] = {
    eachList.map { x => ??? } // ... build PRE_NAME and DIFF into a new List
  }
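For what it's worth, the attempt above cannot work as written: rows lives on the driver, while the foreach closure runs on the executors, so the driver-side list is never modified (and ::: returns a new list rather than mutating rows in place). What is needed is a transformation that returns a new RDD, such as flatMap. Below is a minimal sketch under these assumptions: columns 0..3 of each Row are ID, NAME, SEQ, NUMBER (with SEQ and NUMBER read as Int), and withPrev is a name I made up:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// Hypothetical helper: expands each ID group into tuples carrying PRE_NAME and DIFF.
def withPrev(listRDD: RDD[(String, List[Row])]): RDD[(String, String, Int, Int, Option[String], Option[Int])] =
  listRDD.flatMap { case (_, group) =>
    val sorted  = group.sortBy(_.getInt(2))        // order each group by SEQ
    val shifted = None +: sorted.map(Option(_))    // previous row, or None for the first SEQ
    sorted.zip(shifted).map { case (cur, pre) =>
      (cur.getString(0), cur.getString(1), cur.getInt(2), cur.getInt(3),
       pre.map(_.getString(1)),                    // PRE_NAME
       pre.map(p => cur.getInt(3) - p.getInt(3)))  // DIFF = NUMBER - previous NUMBER
    }
  }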

My final goal is to save this RDD as CSV (RDD.saveAsTextFile...). How can I build this RDD (not a List) from this data?
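Given such an RDD, saving as CSV could look like this sketch (reusing the hypothetical withPrev helper above; "output" is a placeholder path and "NULL" is my choice of placeholder for missing values):

withPrev(ListRDD)
  .map { case (id, name, seq, num, preName, diff) =>
    Seq(id, name, seq, num,
        preName.getOrElse("NULL"),
        diff.map(_.toString).getOrElse("NULL")).mkString(",")
  }
  .saveAsTextFile("output")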

1 Answer:

Answer 0 (score: 1)

Window functions look like a perfect fit here:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag
import sqlContext.implicits._ // needed for $"..." and toDF (assumes an existing SQLContext)

val df = sc.parallelize(Seq(
    ("A", "John", 1, 3),
    ("A", "Bob", 2, 5),
    ("A", "Sam", 3, 1),
    ("B", "Kim", 1, 4),
    ("B", "John", 2, 3),
    ("B", "Ria", 3, 5))).toDF("ID", "NAME", "SEQ", "NUMBER")

val w = Window.partitionBy($"ID").orderBy($"SEQ")

df.select($"*",
  lag($"NAME", 1).over(w).alias("PRE_NAME"),
  ($"NUMBER" - lag($"NUMBER", 1).over(w)).alias("DIFF"))