PySpark: How do I create nested JSON from a Spark DataFrame?

Time: 2018-11-26 09:06:17

Tags: python-3.x apache-spark pyspark pyspark-sql

I am trying to create nested JSON from my Spark dataframe, which has data in the structure shown below. The code below only produces a flat JSON of keys and values. Could you please help?

df.coalesce(1).write.format('json').save(data_output_file+"createjson.json", overwrite=True)
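
(Side note: the JSON writer does not document an overwrite option passed to save(); the usual way to overwrite an existing output path is the writer's save mode, e.g.:)

df.coalesce(1).write.mode('overwrite').format('json').save(data_output_file + "createjson.json")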

Update 1: Following @MaxU's answer, I converted the Spark dataframe to pandas and used groupby. It puts only the last two fields into a nested array. How can I first put the category and its count into a nested array, and then, inside that array, put the subcategory and its count?

Sample text data:

Vendor_Name,count,Categories,Category_Count,Subcategory,Subcategory_Count
Vendor1,10,Category 1,4,Sub Category 1,1
Vendor1,10,Category 1,4,Sub Category 2,2
Vendor1,10,Category 1,4,Sub Category 3,3
Vendor1,10,Category 1,4,Sub Category 4,4

j = (data_pd.groupby(['Vendor_Name','count','Categories','Category_Count'], as_index=False)
             .apply(lambda x: x[['Subcategory','Subcategory_Count']].to_dict('records'))
             .reset_index()
             .rename(columns={0:'subcategories'})
             .to_json(orient='records'))


[{
        "vendor_name": "Vendor 1",
        "count": 10,
        "categories": [{
            "name": "Category 1",
            "count": 4,
            "subCategories": [{
                    "name": "Sub Category 1",
                    "count": 1
                },
                {
                    "name": "Sub Category 2",
                    "count": 1
                },
                {
                    "name": "Sub Category 3",
                    "count": 1
                },
                {
                    "name": "Sub Category 4",
                    "count": 1
                }
            ]
        }]
}]

2 Answers:

Answer 0 (score: 2):

You need to restructure the whole dataframe for that.

"subCategories" is a struct type.

from pyspark.sql import functions as F

# Wrap each subcategory name and its count into a single struct column.
df = df.withColumn(
  "subCategories",
  F.struct(
    F.col("Subcategory").alias("name"),
    F.col("Subcategory_Count").alias("count")
  )
)

Then, groupBy and use F.collect_list to build the arrays.

Finally, you just need to end up with a single record per vendor in the dataframe to get the result you expect.
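
Putting those steps together, here is a minimal, untested sketch that starts from the raw dataframe and assumes the column names from the sample data above (Vendor_Name, count, Categories, Category_Count, Subcategory, Subcategory_Count); adjust them to your actual schema:

from pyspark.sql import functions as F

# Nest subcategories under each category, then categories under each vendor.
nested = (
    df
    .withColumn(
        "subCategory",
        F.struct(
            F.col("Subcategory").alias("name"),
            F.col("Subcategory_Count").alias("count"),
        ),
    )
    .groupBy("Vendor_Name", "count", "Categories", "Category_Count")
    .agg(F.collect_list("subCategory").alias("subCategories"))
    .withColumn(
        "category",
        F.struct(
            F.col("Categories").alias("name"),
            F.col("Category_Count").alias("count"),
            F.col("subCategories"),
        ),
    )
    .groupBy("Vendor_Name", "count")
    .agg(F.collect_list("category").alias("categories"))
    .select(
        F.col("Vendor_Name").alias("vendor_name"),
        "count",
        "categories",
    )
)

nested.coalesce(1).write.mode("overwrite").json(data_output_file + "createjson.json")

Note that Spark writes line-delimited JSON (one object per line), not a single top-level array, so some post-processing would be needed if the output must be wrapped in an array like the expected result above.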

Answer 1 (score: 1):

在python / pandas中执行此操作的最简单方法是使用我认为的groupby使用一系列嵌套生成器:

def split_df(df):
    for (vendor, count), df_vendor in df.groupby(["Vendor_Name", "count"]):
        yield {
            "vendor_name": vendor,
            "count": count,
            "categories": list(split_category(df_vendor))
        }

def split_category(df_vendor):
    for (category, count), df_category in df_vendor.groupby(
        ["Categories", "Category_Count"]
    ):
        yield {
            "name": category,
            "count": count,
            "subCategories": list(split_subcategory(df_category)),
        }

def split_subcategory(df_category):
    for row in df_category.itertuples():
        yield {"name": row.Subcategory, "count": row.Subcategory_Count}

list(split_df(df))
[
    {
        "vendor_name": "Vendor1",
        "count": 10,
        "categories": [
            {
                "name": "Category 1",
                "count": 4,
                "subCategories": [
                    {"name": "Sub Category 1", "count": 1},
                    {"name": "Sub Category 2", "count": 2},
                    {"name": "Sub Category 3", "count": 3},
                    {"name": "Sub Category 4", "count": 4},
                ],
            }
        ],
    }
]

To export this to json, you'll need a way to serialize the np.int64 values.
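
For example, a small default handler (a sketch; any equivalent converter works) can cast NumPy integers to plain Python ints before serialization:

import json
import numpy as np

def np_encoder(obj):
    # json.dumps cannot handle NumPy scalar types, so cast them to plain ints.
    if isinstance(obj, np.integer):
        return int(obj)
    raise TypeError(f"Object of type {type(obj)} is not JSON serializable")

print(json.dumps(list(split_df(df)), default=np_encoder, indent=4))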