Spark DataFrame: exploding a map with the key as a member

Time: 2017-05-26 03:10:57

Tags: apache-spark exploded

I found an example of exploding a map in Databricks' blog:

// input
{
  "a": {
    "b": 1,
    "c": 2
  }
}

Python: events.select(explode("a").alias("x", "y"))
 Scala: events.select(explode('a) as Seq("x", "y"))
   SQL: select explode(a) as (x, y) from events

// output
[{ "x": "b", "y": 1 }, { "x": "c", "y": 2 }]
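What explode does to a map can be modeled without Spark at all: each (key, value) entry of the map becomes its own row. The following is a hypothetical plain-Scala sketch of that behavior (the object and method names are mine, not Spark's):

```scala
object ExplodeModel {
  // explode("a").alias("x", "y"): one (x, y) row per map entry;
  // sorting only makes the output order deterministic for display
  def explodeMap(a: Map[String, Int]): Seq[(String, Int)] =
    a.toSeq.sortBy(_._1)

  def main(args: Array[String]): Unit =
    explodeMap(Map("b" -> 1, "c" -> 2)).foreach(println)
    // prints:
    // (b,1)
    // (c,2)
}
```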

However, I can't see how this approach extends to my case: a map whose values are structs, where I want the map exploded so that each key becomes a member of the resulting row, with the struct's fields flattened alongside it:

// input
{
  "id": 0,
  "a": {
    "b": {"d": 1, "e": 2},
    "c": {"d": 3, "e": 4}
  }
}
// Schema
struct<id:bigint,a:map<string,struct<d:bigint,e:bigint>>>
root
 |-- id: long (nullable = true)
 |-- a: map (nullable = true)
 |    |-- key: string
 |    |-- value: struct (valueContainsNull = true)
 |    |    |-- d: long (nullable = true)
 |    |    |-- e: long (nullable = true)


// Imagined process
Python: …
 Scala: events.select('id, explode('a) as Seq("x", "*")) //? "*" ?
   SQL: …

// Desired output
[{ "id": 0, "x": "b", "d": 1, "e": 2 }, { "id": 0, "x": "c", "d": 3, "e": 4 }]

Is there some obvious way to take input like this and produce the following table:

id | x | d | e
---|---|---|---
 0 | b | 1 | 2
 0 | c | 3 | 4
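As an aside: if `a` really is a MapType column, `explode($"a")` yields `key`/`value` columns, so one could follow it with something like `select($"id", $"key".as("x"), $"value.d", $"value.e")` — that route is not from the original post, just a hedged suggestion. To pin down the target shape itself, here is the desired transformation modeled with plain Scala collections (all names below are mine, for illustration only):

```scala
object DesiredShape {
  final case class Inner(d: Long, e: Long)                      // the struct<d,e>
  final case class OutRow(id: Long, x: String, d: Long, e: Long) // one output row

  // flatten the map: one row per key, id repeated, struct fields spread out
  def flatten(id: Long, a: Map[String, Inner]): Seq[OutRow] =
    a.toSeq.sortBy(_._1).map { case (x, Inner(d, e)) => OutRow(id, x, d, e) }

  def main(args: Array[String]): Unit =
    flatten(0L, Map("b" -> Inner(1, 2), "c" -> Inner(3, 4))).foreach(println)
    // prints:
    // OutRow(0,b,1,2)
    // OutRow(0,c,3,4)
}
```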

1 Answer:

Answer 0 (score: 2)

While I don't know whether the map can be exploded this way with a single explode, there is a way to do it with a UDF. The trick is to use Row#schema.fields(i).name to get the name of the "key":

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{explode, udf}

// turn the outer struct into an array of (fieldName, d, e) tuples,
// reading the field names from the Row's schema at runtime
def mapStructs = udf((r: Row) => {
  r.schema.fields.map(f => (
    f.name,
    r.getAs[Row](f.name).getAs[Long]("d"),
    r.getAs[Row](f.name).getAs[Long]("e"))
  )
})

df
  .withColumn("udfResult", explode(mapStructs($"a")))
  .withColumn("x", $"udfResult._1")
  .withColumn("d", $"udfResult._2")
  .withColumn("e", $"udfResult._3")
  .drop($"udfResult")
  .drop($"a")
  .show

which gives

+---+---+---+---+
| id|  x|  d|  e|
+---+---+---+---+
|  0|  b|  1|  2|
|  0|  c|  3|  4|
+---+---+---+---+
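Why the Row#schema trick works can be shown without a Spark session: the field names ("b", "c") are not known at compile time, so the UDF reads them from the schema and feeds each one back into getAs. Below is a plain-Scala model of that logic; FakeRow is a hypothetical stand-in for org.apache.spark.sql.Row, not a real Spark type:

```scala
object SchemaTrickModel {
  // hypothetical stand-in for Row: field names come from the "schema",
  // values are fetched by name, as Row#getAs does
  final case class FakeRow(values: Map[String, Any]) {
    def fieldNames: Seq[String] = values.keys.toSeq.sorted
    def getAs[T](name: String): T = values(name).asInstanceOf[T]
  }

  // the UDF body: one (name, d, e) tuple per field of the outer struct
  def mapStructs(r: FakeRow): Seq[(String, Long, Long)] =
    r.fieldNames.map { f =>
      val v = r.getAs[FakeRow](f)
      (f, v.getAs[Long]("d"), v.getAs[Long]("e"))
    }

  def main(args: Array[String]): Unit = {
    val r = FakeRow(Map(
      "b" -> FakeRow(Map("d" -> 1L, "e" -> 2L)),
      "c" -> FakeRow(Map("d" -> 3L, "e" -> 4L))))
    mapStructs(r).foreach(println)
    // prints:
    // (b,1,2)
    // (c,3,4)
  }
}
```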