PySpark: convert a struct field inside an array to a string

Date: 2019-01-24 09:47:07

Tags: python pyspark

I have a DataFrame with the following schema:

 |-- order: string (nullable = true)
 |-- travel: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- place: struct (nullable = true)
 |    |    |    |-- name: string (nullable = true)
 |    |    |    |-- address: string (nullable = true)
 |    |    |    |-- latitude: double (nullable = true)
 |    |    |    |-- longitude: double (nullable = true)
 |    |    |-- distance_in_kms: float (nullable = true)
 |    |    |-- estimated_time: struct (nullable = true)
 |    |    |    |-- seconds: long (nullable = true)
 |    |    |    |-- nanos: integer (nullable = true)

I want to take the seconds value from estimated_time, cast it to a string, append "s", and replace estimated_time with that new string value. For example, { "seconds": 988, "nanos": 102 } would become "988s", so the schema would change to

 |-- order: string (nullable = true)
 |-- travel: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- place: struct (nullable = true)
 |    |    |    |-- name: string (nullable = true)
 |    |    |    |-- address: string (nullable = true)
 |    |    |    |-- latitude: double (nullable = true)
 |    |    |    |-- longitude: double (nullable = true)
 |    |    |-- distance_in_kms: float (nullable = true)
 |    |    |-- estimated_time: string (nullable = true)

How can I do this in PySpark?

As a more concrete example, I want to transform this DF (shown here as JSON)

{
    "order": "c-331",
    "travel": [
        {
            "place": {
                "name": "A place",
                "address": "The address",
                "latitude": 0.0,
                "longitude": 0.0
            },
            "distance_in_kms": 1.0,
            "estimated_time": {
                "seconds": 988,
                "nanos": 102
            }
        }
    ]
}

into

{
    "order": "c-331",
    "travel": [
        {
            "place": {
                "name": "A place",
                "address": "The address",
                "latitude": 0.0,
                "longitude": 0.0
            },
            "distance_in_kms": 1.0,
            "estimated_time": "988s"
        }
    ]
}

1 Answer:

Answer (score: 2)

You can do this with the following PySpark functions, composed as shown in the sketch right after this list:

  • withColumn lets you create a new column; we will use it to extract estimated_time
  • concat concatenates string columns
  • lit creates a column from a given string literal
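
A minimal sketch of how the three compose, assuming a running SparkSession named spark (the DataFrame and the column names secs and label are made up for illustration):

from pyspark.sql import functions as F

# hypothetical single-column DataFrame; "secs" is an assumed name
demo = spark.createDataFrame([(988,)], ["secs"])

# cast the number to a string and append the literal "s" -> "988s"
demo.withColumn("label", F.concat(demo.secs.cast("string"), F.lit("s"))).show()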

Have a look at the full example on the question's data:

from pyspark.sql import functions as F

j = '{"order":"c-331","travel":[{"place":{"name":"A place","address":"The address","latitude":0.0,"longitude":0.0},"distance_in_kms":1.0,"estimated_time":{"seconds":988,"nanos":102}}]}'
df = spark.read.json(sc.parallelize([j]))

# Create a new column, estimated_time2, containing the value of
# travel[0].estimated_time.seconds concatenated with an "s".
bla = df.withColumn(
    'estimated_time2',
    F.concat(df.travel.estimated_time.seconds[0].cast("string"), F.lit("s"))
)

# It is currently not possible to use withColumn to add a new member to a
# struct, so the following replaces travel.estimated_time with the column
# created above by rebuilding the array of structs.
bla = bla.select(
    "order",
    F.array(
        F.struct(
            bla.travel.distance_in_kms[0].alias("distance_in_kms"),
            bla.travel.place[0].alias("place"),
            bla.estimated_time2.alias('estimated_time')
        )
    ).alias("travel")
)

bla.show(truncate=False)
bla.printSchema()

This is the output:

+-----+------------------------------------------+ 
|order|travel                                    | 
+-----+------------------------------------------+ 
|c-331|[[1.0,[The address,0.0,0.0,A place],988s]]| 
+-----+------------------------------------------+


root
 |-- order: string (nullable = true)
 |-- travel: array (nullable = false)
 |    |-- element: struct (containsNull = false)
 |    |    |-- distance_in_kms: double (nullable = true)
 |    |    |-- place: struct (nullable = true)
 |    |    |    |-- address: string (nullable = true)
 |    |    |    |-- latitude: double (nullable = true)
 |    |    |    |-- longitude: double (nullable = true)
 |    |    |    |-- name: string (nullable = true)
 |    |    |-- estimated_time: string (nullable = true)
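
Note that the select above rebuilds the array from its first element only (the [0] indexing), which matches the sample data but would silently drop any additional travel entries. If the array can hold more than one element, one alternative sketch, assuming Spark 2.4 or later, is the transform higher-order function; it had no Python API at the time, so it goes through F.expr:

from pyspark.sql import functions as F

# Sketch assuming Spark >= 2.4: rewrite every element of the travel
# array, replacing the estimated_time struct with a "<seconds>s" string.
df2 = df.withColumn(
    "travel",
    F.expr("""
        transform(travel, t -> named_struct(
            'place', t.place,
            'distance_in_kms', t.distance_in_kms,
            'estimated_time', concat(cast(t.estimated_time.seconds as string), 's')
        ))
    """)
)
df2.printSchema()

named_struct also lets you keep the original field order, so the resulting schema stays close to the input.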