Reshape a Spark DataFrame of key-value pairs with the keys as new columns

Time: 2016-09-01 08:49:27

Tags: scala apache-spark spark-dataframe

I am new to Spark and Scala. Suppose I have a DataFrame whose rows hold lists forming key-value pairs: a list of ids and a parallel list of values. Is there a way to map each id in the ids column to a new column of its own?

df.show()
+-------------+----------------------+
|          ids|                  vals|
+-------------+----------------------+
|[id1,id2,id3]|                  null|
|[id2,id5,id6]| [WrappedArray(0,2,4)]|
|[id2,id4,id7]|[WrappedArray(6,8,10)]|
+-------------+----------------------+

Expected output:

+----+----+
| id1| id2| ...
+----+----+
|null|   0| ...
|null|   6| ...

1 answer:

Answer 0: (score: 3)

One possible way is to first compute the columns of the new DataFrame, and then use those columns to construct the rows.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import sqlContext.implicits._

val data = List(
  (Seq("id1", "id2", "id3"), None),
  (Seq("id2", "id4", "id5"), Some(Seq(2, 4, 5))),
  (Seq("id3", "id5", "id6"), Some(Seq(3, 5, 6)))
)

val df = sparkContext.parallelize(data).toDF("ids", "values")

// extract one Map(id -> value) per row; rows with a null values column yield nothing
val values = df.rdd.flatMap {
  case Row(_, null) => None
  case Row(ids: Seq[String @unchecked], vals: Seq[Int @unchecked]) =>
    Some((ids zip vals).toMap)
}

// get the unique names of the columns across the original data
val ids = df.select(explode($"ids")).distinct.collect.map(_.getString(0))

// build one Row per map, filling the ids missing from that row with null
val transposed = values.map(entry =>
  Row.fromSeq(ids.map(id => entry.get(id).map(Int.box).orNull)))

// programmatically recreate the target schema with the columns found in the data
val schema = StructType(ids.map(id => StructField(id, IntegerType, nullable = true)))

// Create the new DataFrame
val transposedDf = sqlContext.createDataFrame(transposed, schema)
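Each distinct id then becomes a column of the result, with null wherever a row had no value for that id, which can be checked with:

transposedDf.show()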

This process makes two passes over the data, but depending on the backing data source, computing the column names can be quite cheap.
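If recomputing the source is expensive, it may be worth caching the input before those two passes; a minimal sketch:

// optional: keep df in memory so the two passes (column discovery and row
// construction) do not both recompute it from the backing source
df.cache()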

Also, this goes back and forth between DataFrames and RDDs. I'd be interested to see a "pure" DataFrame process.
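For reference, one way to stay inside the DataFrame API is to explode the paired arrays and pivot on the id. This is only a sketch, assuming Spark 2.1+ (for posexplode) and that each id occurs at most once per input row; the names withRowId and pivoted are illustrative:

import org.apache.spark.sql.functions._

// tag each input row so the pivot can group the exploded entries back together
val withRowId = df.withColumn("row_id", monotonically_increasing_id())

// one row per (row_id, position, id); look up the matching value by position,
// which stays null when the values column itself is null
val exploded = withRowId
  .select($"row_id", posexplode($"ids").as(Seq("pos", "id")), $"values")
  .withColumn("value", $"values".getItem($"pos"))

// pivot the distinct ids into columns, one value per (row, id) pair
val pivoted = exploded
  .groupBy($"row_id")
  .pivot("id")
  .agg(first($"value"))
  .drop("row_id")

Note that pivot without an explicit list of values still scans the data once to collect the distinct ids, so this variant does not save a pass either.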