How to explode each row of a dataframe containing a list of structs into columns - pyspark

Time: 2018-10-17 19:39:51

Tags: pyspark pyspark-sql

My dataframe looks like this:

+--------------------+
|                pas1|
+--------------------+
|[[[[H, 5, 16, 201...|
|[, 1956-09-22, AD...|
|[, 1961-03-19, AD...|
|[, 1962-02-09, AD...|
+--------------------+

I want to extract a few columns from each of the 4 rows above and create a dataframe like the one below. The column names should come from the schema, not be hardcoded names such as column1 and column2.

+--------+-----------+
| gender | givenName |
+--------+-----------+
|      a |         b |
|      a |         b |
|      a |         b |
|      a |         b |
+--------+-----------+

pas1 - schema
root
|-- pas1: struct (nullable = true)
|    |-- contactList: struct (nullable = true)
|    |    |-- contact: array (nullable = true)
|    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |-- contactTypeCode: string (nullable = true)
|    |    |    |    |-- contactMediumTypeCode: string (nullable = true)
|    |    |    |    |-- contactTypeID: string (nullable = true)
|    |    |    |    |-- lastUpdateTimestamp: string (nullable = true)
|    |    |    |    |-- contactInformation: string (nullable = true)
|    |-- dateOfBirth: string (nullable = true)
|    |-- farePassengerTypeCode: string (nullable = true)
|    |-- gender: string (nullable = true)
|    |-- givenName: string (nullable = true)
|    |-- groupDepositIndicator: string (nullable = true)
|    |-- infantIndicator: string (nullable = true)
|    |-- lastUpdateTimestamp: string (nullable = true)
|    |-- passengerFOPList: struct (nullable = true)
|    |    |-- passengerFOP: struct (nullable = true)
|    |    |    |-- fopID: string (nullable = true)
|    |    |    |-- lastUpdateTimestamp: string (nullable = true)
|    |    |    |-- fopFreeText: string (nullable = true)
|    |    |    |-- fopSupplementaryInfoList: struct (nullable = true)
|    |    |    |    |-- fopSupplementaryInfo: array (nullable = true)
|    |    |    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |    |    |-- type: string (nullable = true)
|    |    |    |    |    |    |-- value: string (nullable = true)

Thanks for your help.

1 answer:

Answer 0: (score: 0)

If you want to extract a few columns from a dataframe that contains structs, you can do the following:

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName('Test').getOrCreate()
df = spark.sparkContext.parallelize([Row(pas1=Row(gender='a', givenName='b'))]).toDF()

# Nested struct fields can be selected with dot notation.
df.select('pas1.gender', 'pas1.givenName').show()

If instead you want to flatten the dataframe, this question will help you: How to unwrap nested Struct column into multiple columns?