How to create a struct from an array of struct types?

Date: 2019-07-24 21:58:00

Tags: apache-spark-sql

How can I merge all the structs in an array and produce a single merged struct?

For example, using Spark SQL I can read a source JSON file and get a column of array type, where each element of the array holds one key:value pair. Say we have an ArrayType column named col whose value is [{a: 1}, {b: 2}, {c: 3}].
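For context, a minimal sketch of how such a column might be produced (the file path is illustrative, not from the original question):

val df = spark.read.json("path/to/source.json")
df.printSchema()
// col: array<struct<a:long,b:long,c:long>> -- Spark merges the keys it sees
// across elements into one struct schema, with nulls for absent keys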

I need to convert this array-type column into a struct-type column whose value is {a: 1, b: 2, c: 3}.

Since I get the schema by reading the JSON file, I can use the derived ordinals to produce the result (note that getItem is 0-based), e.g.:

df.select(
  $"col.a".getItem(0) as "a",
  $"col.b".getItem(1) as "b",
  $"col.c".getItem(2) as "c")

The problem with this approach is that if the elements of the array arrive in a different order, I get wrong results. Is there a clean way to merge all the key/value structs and produce a single struct? In my case the keys are never duplicated, so I don't have to worry about losing data to overwritten key/value pairs.

1 Answer:

Answer 0: (score: 0)

If I understand you correctly, you can combine explode and pivot:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val df = Seq(
  (1, Array(("a", 1), ("b", 2), ("c", 3))),
  (2, Array(("b", 5), ("c", 6), ("a", 4)))
).toDF("id", "col")

df.show(10, false)

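// Explode the array so each (key, value) struct becomes its own row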
val explodedDF = df.withColumn("col2", explode(df.col("col"))).select("id", "col2")

explodedDF.show(10, false)

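// Pull the struct fields out into flat key and value columns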
val flattenedDF = explodedDF.withColumn("k", $"col2._1").withColumn("v", $"col2._2").select("id", "k", "v")

flattenedDF.show(10, false)

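// Pivot the keys into columns; first() takes the single value per (id, key)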
val pivotedDF = flattenedDF.groupBy("id").pivot("k").agg(first(col("v")))

pivotedDF.show(10, false)

import scala.util.parsing.json.JSONObject

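// Collect and print each row as a JSON object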
pivotedDF.select("a", "b", "c").collect().map{row => JSONObject(row.getValuesMap(row.schema.fieldNames))}.map(println)

// Exiting paste mode, now interpreting.

+---+------------------------+
|id |col                     |
+---+------------------------+
|1  |[[a, 1], [b, 2], [c, 3]]|
|2  |[[b, 5], [c, 6], [a, 4]]|
+---+------------------------+

+---+------+
|id |col2  |
+---+------+
|1  |[a, 1]|
|1  |[b, 2]|
|1  |[c, 3]|
|2  |[b, 5]|
|2  |[c, 6]|
|2  |[a, 4]|
+---+------+

+---+---+---+
|id |k  |v  |
+---+---+---+
|1  |a  |1  |
|1  |b  |2  |
|1  |c  |3  |
|2  |b  |5  |
|2  |c  |6  |
|2  |a  |4  |
+---+---+---+

+---+---+---+---+
|id |a  |b  |c  |
+---+---+---+---+
|1  |1  |2  |3  |
|2  |4  |5  |6  |
+---+---+---+---+

{"a" : 1, "b" : 2, "c" : 3}
{"a" : 4, "b" : 5, "c" : 6}
df: org.apache.spark.sql.DataFrame = [id: int, col: array<struct<_1:string,_2:int>>]
explodedDF: org.apache.spark.sql.DataFrame = [id: int, col2: struct<_1: string, _2: int>]
flattenedDF: org.apache.spark.sql.DataFrame = [id: int, k: string ... 1 more field]
pivotedDF: org.apache.spark.sql.DataFrame = [id: int, a: int ... 2 more fields]
import scala.util.parsing.json.JSONObject
res24: Array[Unit] = Array((), ())

scala>
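As an aside, if you are on Spark 2.4 or later, map_from_entries may give a shorter route: it builds a MapType column directly from an array of two-field structs, which to_json can then render, skipping the explode/pivot round trip. A rough sketch against the same df (not from the original answer):

import org.apache.spark.sql.functions.{map_from_entries, to_json}

// Turn the array of (key, value) structs into a single map per row
val mapped = df.withColumn("m", map_from_entries($"col"))
mapped.select($"id", to_json($"m") as "json").show(10, false)
// Each row renders as one merged JSON object, e.g. {"a":1,"b":2,"c":3}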