Spark Scala - Splitting an Array of Structs into DataFrame Columns

Time: 2021-05-07 02:59:36

Tags: json scala apache-spark

I have a nested source JSON file that contains an array of structs. The number of structs varies from row to row, and I would like to use Spark (Scala) to dynamically create new dataframe columns from the structs' key/value pairs, where the key becomes the column name and the value becomes the column value.

Sample minimized JSON record

{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}

Dataframe schema

scala> val df = spark.read.json("file:///tmp/nested_test.json")
root
 |-- key1: struct (nullable = true)
 |    |-- key2: struct (nullable = true)
 |    |    |-- key3: string (nullable = true)
 |    |    |-- key4: string (nullable = true)
 |    |    |-- key5: struct (nullable = true)
 |    |    |    |-- key6: string (nullable = true)
 |    |    |    |-- key7: string (nullable = true)
 |    |    |    |-- values: array (nullable = true)
 |    |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |    |-- name: string (nullable = true)
 |    |    |    |    |    |-- value: string (nullable = true)

What I have done so far

df.select(
    ($"key1.key2.key3").as("key3"),
    ($"key1.key2.key4").as("key4"),
    ($"key1.key2.key5.key6").as("key6"),
    ($"key1.key2.key5.key7").as("key7"),
    ($"key1.key2.key5.values").as("values")).
    show(truncate=false)

+----+----+----+----+----------------------------------------------------------------------------+
|key3|key4|key6|key7|values                                                                      |
+----+----+----+----+----------------------------------------------------------------------------+
|AK  |EU  |001 |N   |[[valuesColumn1, 9.876], [valuesColumn2, 1.2345], [valuesColumn3, 8.675309]]|
+----+----+----+----+----------------------------------------------------------------------------+

Here there is an array with 3 structs in it, but the 3 structs need to be spilled out into 3 separate columns dynamically (the count of 3 can vary widely), and I don't see how to do that.

Sample desired output

Notice that 3 new columns were generated, one for each array element in the values array.

+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK  |EU  |001 |N   |9.876        |1.2345       |8.675309     |
+----+----+----+----+-------------+-------------+-------------+

Reference

I believe the required solution is something similar to what was discussed in this SO post, but with two major differences:

  1. The number of columns was hard-coded to 3 in the SO post, but in my case the number of array elements is unknown.
  2. The column names need to be driven by the name column, and the column values by value.
...
 |    |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |    |-- name: string (nullable = true)
 |    |    |    |    |    |-- value: string (nullable = true)

2 Answers:

Answer 0 (score: 1)

You could do it like this:

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val sac = new SparkContext("local[*]", "first Program")
val sqlc = new SQLContext(sac)
import sqlc.implicits._

val json = """{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}"""

val df1 = sqlc.read.json(Seq(json).toDS())

val df2 = df1.select(
    ($"key1.key2.key3").as("key3"),
    ($"key1.key2.key4").as("key4"),
    ($"key1.key2.key5.key6").as("key6"),
    ($"key1.key2.key5.key7").as("key7"),
    ($"key1.key2.key5.values").as("values")
)

val numColsVal = df2
    .withColumn("values_size", size($"values"))
    .agg(max($"values_size"))
    .head()
    .getInt(0)

// Column names for the final output: df2's existing columns followed by the
// distinct "name" values found inside the values array
val finalDFColumns = df2.select(explode($"values").as("values")).select("values.*").select("name").distinct.map(_.getAs[String](0)).orderBy($"value".asc).collect.foldLeft(df2.limit(0))((cdf, c) => cdf.withColumn(c, lit(null))).columns
// Pull the i-th array element's value out into its own column, for every index
val finalDF = df2.select($"*" +: (0 until numColsVal).map(i => $"values".getItem(i)("value").as($"values".getItem(i)("name").toString)): _*)
// Rename the generated columns to the collected names, then drop the original array column
finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).show(false)
finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).drop($"values").show(false)

The final output is:

+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK  |EU  |001 |N   |9.876        |1.2345       |8.675309     |
+----+----+----+----+-------------+-------------+-------------+

Hope I understood your question correctly!

----------- EDIT with Explanation -----------

This block gets the number of columns that need to be created for the array of structs.

val numColsVal = df2
        .withColumn("values_size", size($"values"))
        .agg(max($"values_size"))
        .head()
        .getInt(0)

finalDFColumns holds the column names of a DF built with all of the expected output columns, initialized to null values.
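
For the sample record, printing it confirms what ends up in there (a quick check added here for illustration, not part of the original code):

// finalDFColumns is df2's columns followed by the distinct names from the array
println(finalDFColumns.mkString(", "))
// key3, key4, key6, key7, values, valuesColumn1, valuesColumn2, valuesColumn3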

The block below returns the distinct columns that need to be created from the array of structs.

df2.select(explode($"values").as("values")).select("values.*").select("name").distinct.map(_.getAs[String](0)).orderBy($"value".asc).collect

The block below combines those new columns with the remaining columns of df2, initialized with empty/null values.

foldLeft(df2.limit(0))((cdf, c) => cdf.withColumn(c, lit(null)))

Combining these two blocks, if you print the output you get:

+----+----+----+----+------+-------------+-------------+-------------+
|key3|key4|key6|key7|values|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+------+-------------+-------------+-------------+
+----+----+----+----+------+-------------+-------------+-------------+

Now we have the structure ready; what we still need are the values for the corresponding columns. The block below gives us those values:

df2.select($"*" +: (0 until numColsVal).map(i => $"values".getItem(i)("value").as($"values".getItem(i)("name").toString)): _*)

The result looks like this:

+----+----+----+----+--------------------+---------------+---------------+---------------+
|key3|key4|key6|key7|              values|values[0][name]|values[1][name]|values[2][name]|
+----+----+----+----+--------------------+---------------+---------------+---------------+
|  AK|  EU| 001|   N|[[valuesColumn1, ...|          9.876|         1.2345|       8.675309|
+----+----+----+----+--------------------+---------------+---------------+---------------+

Now we need to rename the columns as in the first block above. So we use the zip function to pair up the columns and then the foldLeft method to rename the output columns, as shown below:

finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).show(false)

This results in the following structure:

+----+----+----+----+--------------------+-------------+-------------+-------------+
|key3|key4|key6|key7|              values|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+--------------------+-------------+-------------+-------------+
|  AK|  EU| 001|   N|[[valuesColumn1, ...|        9.876|       1.2345|     8.675309|
+----+----+----+----+--------------------+-------------+-------------+-------------+

We're almost there. Now we just need to drop the unneeded values column, like this:

finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).drop($"values").show(false)

Which produces the expected output below:

+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK  |EU  |001 |N   |9.876        |1.2345       |8.675309     |
+----+----+----+----+-------------+-------------+-------------+

I'm not sure I've managed to explain this clearly, but if you break the statements/code above apart and print the intermediate results, you will see how we arrive at the output. You can find worked explanations of the individual functions used in this logic on the internet.
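
For example, the first long chain can be split into smaller pieces and printed step by step (the intermediate names arrayColumnNames and skeleton below are only for this sketch, they do not appear in the answer above):

// 1) the distinct "name" values found inside the values array, sorted
val arrayColumnNames = df2
    .select(explode($"values").as("values"))
    .select($"values.name")
    .distinct
    .as[String]
    .collect
    .sorted
arrayColumnNames.foreach(println)   // prints valuesColumn1, valuesColumn2, valuesColumn3

// 2) an empty "skeleton" DF: df2's columns plus one null column per collected name
val skeleton = arrayColumnNames.foldLeft(df2.limit(0))((cdf, c) => cdf.withColumn(c, lit(null)))
skeleton.printSchema()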

Answer 1 (score: 0)

I find this approach works better and is easier to follow, using explode and pivot:

val json = """{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}"""

val df = spark.read.json(Seq(json).toDS())

// schema
df.printSchema
root
 |-- key1: struct (nullable = true)
 |    |-- key2: struct (nullable = true)
 |    |    |-- key3: string (nullable = true)
 |    |    |-- key4: string (nullable = true)
 |    |    |-- key5: struct (nullable = true)
 |    |    |    |-- key6: string (nullable = true)
 |    |    |    |-- key7: string (nullable = true)
 |    |    |    |-- values: array (nullable = true)
 |    |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |    |-- name: string (nullable = true)
 |    |    |    |    |    |-- value: string (nullable = true)

// create final df
val finalDf = df.
    select(
      $"key1.key2.key3".as("key3"),
      $"key1.key2.key4".as("key4"),
      $"key1.key2.key5.key6".as("key6"),
      $"key1.key2.key5.key7".as("key7"),
      explode($"key1.key2.key5.values").as("values")
    ).
    groupBy(
      $"key3", $"key4", $"key6", $"key7"
    ).
    pivot("values.name").
    agg(min("values.value"))

// result
finalDf.show
+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|  AK|  EU| 001|   N|        9.876|       1.2345|     8.675309|
+----+----+----+----+-------------+-------------+-------------+
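
A small follow-up note on the pivot: Spark normally runs an extra job just to compute the distinct values of the pivot column. If the possible name values are already known up front (the knownNames list below simply assumes the three names from the sample record), they can be passed to pivot explicitly to skip that step. A minimal sketch:

val knownNames = Seq("valuesColumn1", "valuesColumn2", "valuesColumn3")

val finalDfKnown = df.
    select(
      $"key1.key2.key3".as("key3"),
      $"key1.key2.key4".as("key4"),
      $"key1.key2.key5.key6".as("key6"),
      $"key1.key2.key5.key7".as("key7"),
      explode($"key1.key2.key5.values").as("values")
    ).
    groupBy($"key3", $"key4", $"key6", $"key7").
    pivot("values.name", knownNames).   // no extra pass to discover pivot values
    agg(min("values.value"))

finalDfKnown.show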