Converting nested JSON to a DataFrame in PySpark

Date: 2021-05-07 14:20:57

Tags: json dataframe pyspark

I am trying to create a DataFrame from a JSON that has nested fields and date fields, which I want to concatenate:

root
 |-- MODEL: string (nullable = true)
 |-- CODE: string (nullable = true)
 |-- START_Time: struct (nullable = true)
 |    |-- day: string (nullable = true)
 |    |-- hour: string (nullable = true)
 |    |-- minute: string (nullable = true)
 |    |-- month: string (nullable = true)
 |    |-- second: string (nullable = true)
 |    |-- year: string (nullable = true)
 |-- WEIGHT: string (nullable = true)
 |-- REGISTED: struct (nullable = true)
 |    |-- day: string (nullable = true)
 |    |-- hour: string (nullable = true)
 |    |-- minute: string (nullable = true)
 |    |-- month: string (nullable = true)
 |    |-- second: string (nullable = true)
 |    |-- year: string (nullable = true)
 |-- TOTAL: string (nullable = true)
 |-- SCHEDULED: struct (nullable = true)
 |    |-- day: long (nullable = true)
 |    |-- hour: long (nullable = true)
 |    |-- minute: long (nullable = true)
 |    |-- month: long (nullable = true)
 |    |-- second: long (nullable = true)
 |    |-- year: long (nullable = true)
 |-- PACKAGE: string (nullable = true)

The goal is to get something more like:

+---------+------------------+----------+-----------------+---------+-----------------+
|MODEL    |   START_Time     | WEIGHT   |REGISTED         |TOTAL    |SCHEDULED        |   
+---------+------------------+----------+-----------------+---------+-----------------+
|.........| yy-mm-dd-hh-mm-ss| WEIGHT   |yy-mm-dd-hh-mm-ss|TOTAL    |yy-mm-dd-hh-mm-ss| 

where yy-mm-dd-hh-mm-ss is built from the day, hour, minute, ... fields in the JSON, i.e. each date column is a struct like:

 |-- example: struct (nullable = true)
 |    |-- day: string (nullable = true)
 |    |-- hour: string (nullable = true)
 |    |-- minute: string (nullable = true)
 |    |-- month: string (nullable = true)
 |    |-- second: string (nullable = true)
 |    |-- year: string (nullable = true)

I tried the explode function (maybe not using it as intended), but it didn't work. Can anyone point me toward a solution? Thanks.

1 answer:

Answer 0: (score: 0)

You can do this with the following simple steps.

  1. Suppose we have the following data in a data.json file:

{"MODEL": "abc", "CODE": "CODE1", "START_Time": {"day": "05", "hour": "08", "minute": "30", "月”:“08”,“秒”:“30”,“年”:“21”},“重量”:“231”,“注册”:{“日”:“05”,“小时”:“ 08", "minute": "30", "month": "08", "second": "30", "year": "21"}, "TOTAL": "1", "SCHEDULED": {"日”:“05”,“小时”:“08”,“分”:“30”,“月”:“08”,“秒”:“30”,“年”:“21”},“PACKAGE” ": "汽车"}

This data has the same schema as the one you shared.

  2. Read this JSON file in PySpark as shown below.

    from pyspark.sql.functions import *
    
    df = spark.read.json('data.json')
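    # Optional check (assuming the data.json sample above): print the inferred
    # schema to verify that the nested date structs were read as expected.
    df.printSchema()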
    
  3. Now you can read the nested values and rewrite the column values as shown below.

    df.withColumn('START_Time', concat(col('START_Time.year'), lit('-'), col('START_Time.month'), lit('-'),
                                       col('START_Time.day'), lit('-'), col('START_Time.hour'), lit('-'),
                                       col('START_Time.minute'), lit('-'), col('START_Time.second'))) \
      .withColumn('REGISTED', concat(col('REGISTED.year'), lit('-'), col('REGISTED.month'), lit('-'),
                                     col('REGISTED.day'), lit('-'), col('REGISTED.hour'), lit('-'),
                                     col('REGISTED.minute'), lit('-'), col('REGISTED.second'))) \
      .withColumn('SCHEDULED', concat(col('SCHEDULED.year'), lit('-'), col('SCHEDULED.month'), lit('-'),
                                      col('SCHEDULED.day'), lit('-'), col('SCHEDULED.hour'), lit('-'),
                                      col('SCHEDULED.minute'), lit('-'), col('SCHEDULED.second'))) \
      .show()
    

The output is:

+-----+-----+-------+-----------------+-----------------+-----------------+-----+------+
| CODE|MODEL|PACKAGE|         REGISTED|        SCHEDULED|       START_Time|TOTAL|WEIGHT|
+-----+-----+-------+-----------------+-----------------+-----------------+-----+------+
|CODE1|  abc|    car|21-08-05-08-30-30|21-08-05-08-30-30|21-08-05-08-30-30|    1|   231|
+-----+-----+-------+-----------------+-----------------+-----------------+-----+------+
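
One thing to watch: in the schema in your question the SCHEDULED fields are long rather than string, so on that data a plain concat would drop leading zeros (e.g. 8 instead of 08). Below is a minimal sketch, assuming the same column names, of a more compact variant: a hypothetical flatten_date helper casts and left-pads each part with lpad and joins them with concat_ws, so you don't have to repeat lit('-').

    from pyspark.sql.functions import col, concat_ws, lpad

    # Hypothetical helper: build a "yy-mm-dd-hh-mm-ss" string from one of the
    # date structs, padding each part to two digits so long-typed fields
    # (like SCHEDULED) keep their leading zeros.
    def flatten_date(name):
        parts = ['year', 'month', 'day', 'hour', 'minute', 'second']
        return concat_ws('-', *[lpad(col(name + '.' + p).cast('string'), 2, '0') for p in parts])

    df2 = (df
           .withColumn('START_Time', flatten_date('START_Time'))
           .withColumn('REGISTED', flatten_date('REGISTED'))
           .withColumn('SCHEDULED', flatten_date('SCHEDULED')))
    df2.show(truncate=False)

If you eventually need real timestamps rather than strings, the concatenated value can be parsed further, for example with to_timestamp and a matching format pattern.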