Reordering StructTypes and nested ArrayTypes

Asked: 2018-05-03 09:54:53

Tags: python apache-spark pyspark spark-dataframe

I have a dataframe with the following schema:

root
 |-- col2: integer (nullable = true)
 |-- col1: integer (nullable = true)
 |-- structCol3: struct (nullable = true)
 |    |-- structField2: boolean (nullable = true)
 |    |-- structField1: string (nullable = true)
 |-- structCol4: struct (nullable = true)
 |    |-- nestedArray: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- elem3: double (nullable = true)
 |    |    |    |-- elem2: string (nullable = true)
 |    |    |    |-- elem1: string (nullable = true)
 |    |-- structField2: integer (nullable = true)

For compatibility reasons I'm trying to write it out in Parquet format, but with the fields ordered like this:

root
 |-- col1: integer (nullable = true) 
 |-- col2: integer (nullable = true)
 |-- structCol3: struct (nullable = true)
 |    |-- structField1: string (nullable = true)
 |    |-- structField2: boolean (nullable = true)
 |-- structCol4: struct (nullable = true)
 |    |-- nestedArray: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- elem1: string (nullable = true)
 |    |    |    |-- elem2: string (nullable = true)
 |    |    |    |-- elem3: double (nullable = true)
 |    |-- structField2: integer (nullable = true)

So far I've managed to reorder the top-level columns and the fields inside the structs like this:

from pyspark.sql.functions import struct, col

dfParquetOutput = df.select(
    "col1",
    "col2",
    struct(
        col("structCol3.structField1"), 
        col("structCol3.structField2")
    ).alias("structCol3"),
    struct(
        col("structCol4.nestedArray"),
        col("structCol4.structField2")
    ).alias("structCol4")
)

Unfortunately, I'm struggling to find a way to reorder the elements inside the StructType within the Array. I thought about using a udf, but since I'm fairly new to Spark I haven't had any success. I also tried creating a new dataframe with a predefined schema, but from my testing the columns get assigned by position rather than by name.

Is there a simple way to reorder a Struct inside an Array?

1 Answer:

Answer 0 (score: 1)

You can't really avoid a udf (or RDD) here. If you define the data as

from pyspark.sql.functions import udf, struct, col
from collections import namedtuple

Outer = namedtuple("Outer", ["structCol4"])
Inner = namedtuple("Inner", ["nestedArray", "structField2"])
Element = namedtuple("Element", ["col3", "col2", "col1"])

df = spark.createDataFrame([Outer(Inner([Element("3", "2", "1")], 1))])

you can do

@udf("array<struct<col1: string, col2: string, col3: string>>")
def reorder(arr):
    return [(col1, col2, col3) for col3, col2, col1 in arr]

result = df.withColumn(
    "structCol4",
    struct(
        reorder("structCol4.nestedArray").alias("nestedArray"),
        col("structCol4.structField2")
    )
)

result.printSchema()
# root
#  |-- structCol4: struct (nullable = false)
#  |    |-- nestedArray: array (nullable = true)
#  |    |    |-- element: struct (containsNull = true)
#  |    |    |    |-- col1: string (nullable = true)
#  |    |    |    |-- col2: string (nullable = true)
#  |    |    |    |-- col3: string (nullable = true)
#  |    |-- structField2: long (nullable = true)
# 


result.show()
# +----------------+
# |      structCol4|
# +----------------+
# |[[[1, 2, 3]], 1]|
# +----------------+

With a deeply nested schema you would have to rebuild the full tree inside the udf, but that isn't needed here.
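To illustrate what "rebuilding the full tree" could look like, here is a plain-Python sketch. The structure and field order are hypothetical (a top-level array of structs, each holding a child array of three-field structs plus a tag), not the question's exact schema; in practice you would wrap `reorder_tree` in `@udf(...)` with a matching `array<struct<...>>` return type:

```python
# Hypothetical nested shape: [ (childArray, tag), ... ] where
# childArray is [ (c, b, a), ... ] and we want each leaf as (a, b, c).

def reorder_leaf(leaf):
    # Leaf struct arrives as (c, b, a); emit it in the target order (a, b, c).
    c, b, a = leaf
    return (a, b, c)

def reorder_node(node):
    # Node struct is (childArray, tag); rebuild every leaf, keep tag as-is.
    child_array, tag = node
    return ([reorder_leaf(x) for x in child_array], tag)

def reorder_tree(arr):
    # Top-level array of node structs.
    return [reorder_node(n) for n in arr]
```

The idea is the same as in the answer's `reorder` udf, just applied recursively at every level where field order matters.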