Mapping an RDD to a case (schema) in Spark using Scala

Asked: 2016-08-31 09:45:31

Tags: scala spark-dataframe

I'm new to Scala and Spark, and I have a small problem. I have an RDD with the following schema:

    RDD[((String, String), (Int, Timestamp, String, Int))]

I need to map this RDD to transform it into:

    RDD[(Int, String, String, String, Timestamp, Int)]

I wrote the following code for this:

    map { case ((pid, name), (id, date, code, level)) => (id, name, code, pid, date, level) }
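For reference, here is the full context in which I run this (sc is my SparkContext; the sample record is made up purely to illustrate the shapes):

    import java.sql.Timestamp
    import org.apache.spark.rdd.RDD

    // Hypothetical one-record RDD matching the original schema.
    val rdd: RDD[((String, String), (Int, Timestamp, String, Int))] = sc.parallelize(Seq(
      (("p1", "alice"), (1, Timestamp.valueOf("2016-08-31 09:45:31"), "A", 3))
    ))

    // Reorder the nested tuple fields into one flat tuple.
    val mapped: RDD[(Int, String, String, String, Timestamp, Int)] =
      rdd.map { case ((pid, name), (id, date, code, level)) => (id, name, code, pid, date, level) }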

This works fine. Now I have another RDD:

    RDD[((String, String), List[(Int, Timestamp, String, Int)])]

I want to transform it, as above, into:

    RDD[(Int, String, String, String, Timestamp, Int)]

How can I do this? I have tried the following code, but it doesn't work:

    map {
      case ((pid, name), List(id, date, code, level)) => (id, name, code, pid, date, level)
    }

How can this be achieved?

1 Answer:

Answer 0 (score: 1)

Is this what you're looking for?

    val input: RDD[((String, String), List[(Int, Timestamp, String, Int)])] = ...
    val output: RDD[(Int, String, String, String, Timestamp, Int)] = input.flatMap { case ((pid, name), list) =>
      list.map { case (id, date, code, level) =>
        (id, name, code, pid, date, level)
      }
    }

Or, as a for-comprehension:

    val output: RDD[(Int, String, String, String, Timestamp, Int)] = for {
      ((pid, name), list)     <- input
      (id, date, code, level) <- list
    } yield (id, name, code, pid, date, level)
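Note that flatMap (which the for-comprehension above desugars to) is the key difference from your attempt: map would produce an RDD of Lists, while flatMap emits one output row per list element. The pattern List(id, date, code, level) also only matches lists of exactly four elements and binds each whole tuple to a single name, so the result has the wrong shape. As a quick sanity check, here is a runnable sketch (assuming an existing SparkContext sc; the input data is invented):

    import java.sql.Timestamp

    // Invented input: one key with a two-element list of values.
    val input = sc.parallelize(Seq(
      (("p1", "alice"), List(
        (1, Timestamp.valueOf("2016-08-31 09:45:31"), "A", 3),
        (2, Timestamp.valueOf("2016-08-31 10:00:00"), "B", 5)
      ))
    ))

    // Each list element becomes its own flat row.
    val output = input.flatMap { case ((pid, name), list) =>
      list.map { case (id, date, code, level) => (id, name, code, pid, date, level) }
    }

    output.collect().foreach(println)
    // (1,alice,A,p1,2016-08-31 09:45:31.0,3)
    // (2,alice,B,p1,2016-08-31 10:00:00.0,5)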
