Pyspark: Converting CSV to nested JSON

Asked: 2019-12-09 16:23:22

Tags: python json csv hadoop pyspark

I am new to pyspark. I have a requirement to convert a large CSV file at an HDFS location into multiple nested JSON files, one per distinct PrimaryId.

Sample input: data.csv

PrimaryId,FirstName,LastName,City,CarName,DogName
100,John,Smith,NewYork,Toyota,Spike
100,John,Smith,NewYork,BMW,Spike
100,John,Smith,NewYork,Toyota,Rusty
100,John,Smith,NewYork,BMW,Rusty
101,Ben,Swan,Sydney,Volkswagen,Buddy
101,Ben,Swan,Sydney,Ford,Buddy
101,Ben,Swan,Sydney,Audi,Buddy
101,Ben,Swan,Sydney,Volkswagen,Max
101,Ben,Swan,Sydney,Ford,Max
101,Ben,Swan,Sydney,Audi,Max
102,Julia,Brown,London,Mini,Lucy

Sample output files:

File 1: Output_100.json

{
    "100": [
        {
            "City": "NewYork", 
            "FirstName": "John", 
            "LastName": "Smith", 
            "CarName": [
                "Toyota", 
                "BMW"
            ], 
            "DogName": [
                "Spike", 
                "Rusty"
            ]
        }
    ]
}

File 2: Output_101.json

{
    "101": [
        {
            "City": "Sydney", 
            "FirstName": "Ben", 
            "LastName": "Swan", 
            "CarName": [
                "Volkswagen", 
                "Ford", 
                "Audi"
            ], 
            "DogName": [
                "Buddy", 
                "Max"
            ]
        }
    ]
}

File 3: Output_102.json

{
    "102": [
        {
            "City": "London", 
            "FirstName": "Julia", 
            "LastName": "Brown", 
            "CarName": [
                "Mini"
            ], 
            "DogName": [
                "Lucy"
            ]
        }
    ]
}

Any quick help would be greatly appreciated.

2 Answers:

Answer 0 (score: 0)

It seems you need to group by the id and collect the cars and dogs as sets.

from pyspark.sql.functions import collect_set

df = spark.read.format("csv").option("header", "true").load("cars.csv")
df2 = (
    df
    .groupBy("PrimaryId","FirstName","LastName")
    .agg(collect_set('CarName').alias('CarName'), collect_set('DogName').alias('DogName'))
)
df2.write.format("json").save("cars.json", mode="overwrite")

Resulting records:

{"PrimaryId":"100","FirstName":"John","LastName":"Smith","CarName":["Toyota","BMW"],"DogName":["Spike","Rusty"]}

{"PrimaryId":"101","FirstName":"Ben","LastName":"Swan","CarName":["Ford","Volkswagen","Audi"],"DogName":["Max","Buddy"]}

{"PrimaryId":"102","FirstName":"Julia","LastName":"Brown","CarName":["Mini"],"DogName":["Lucy"]}

Let me know if this is what you were looking for.
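Note that the output above is a set of JSON Lines records keyed by column name, not the one-file-per-id layout the question asks for. As a minimal sketch (assuming the aggregated output is small enough to post-process locally), the records can be re-keyed into per-id files with plain Python:

```python
import json

# Aggregated records as produced by the groupBy above (JSON Lines)
lines = [
    '{"PrimaryId":"100","FirstName":"John","LastName":"Smith","CarName":["Toyota","BMW"],"DogName":["Spike","Rusty"]}',
    '{"PrimaryId":"101","FirstName":"Ben","LastName":"Swan","CarName":["Ford","Volkswagen","Audi"],"DogName":["Max","Buddy"]}',
    '{"PrimaryId":"102","FirstName":"Julia","LastName":"Brown","CarName":["Mini"],"DogName":["Lucy"]}',
]

def rekey(line):
    """Turn one aggregated record into the {id: [record]} shape."""
    record = json.loads(line)
    primary_id = record.pop("PrimaryId")
    return primary_id, {primary_id: [record]}

for line in lines:
    primary_id, data = rekey(line)
    # One file per id, as in the question's expected output
    with open("Output_{}.json".format(primary_id), "w") as fh:
        json.dump(data, fh, indent=4)
```

This keeps the Spark job purely for the aggregation and does the file splitting on the driver side.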

Answer 1 (score: 0)

You can use pandas.groupby() to group by the id, then iterate over the DataFrameGroupBy object to build each record and write it to a file.

You will need pandas installed in your virtualenv: $ pip install pandas.

# coding: utf-8
import json
import pandas as pd


def group_csv_columns(csv_file):
    df = pd.read_csv(csv_file)
    # Group by a scalar key so each iteration yields (id, sub-frame)
    group_frame = df.groupby('PrimaryId')

    for primary_id, data_frame in group_frame:
        data = {
            str(primary_id): [{
                "City": data_frame['City'].unique().tolist()[0],
                "FirstName": data_frame['FirstName'].unique().tolist()[0],
                "CarName": data_frame['CarName'].unique().tolist(),
                "DogName": data_frame['DogName'].unique().tolist(),
                "LastName": data_frame['LastName'].unique().tolist()[0],
            }]
        }
        # Write one file per PrimaryId
        file_name = 'Output_' + str(primary_id) + '.json'
        with open(file_name, 'w') as fh:
            fh.write(json.dumps(data))


group_csv_columns('/tmp/sample.csv')

Call group_csv_columns() with the file name of the CSV.

Check the pandas docs.
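The same grouping can also be expressed with a single agg call instead of building the dictionary by hand. A sketch (column names from the question; an inline sample stands in for the real CSV file):

```python
import json
from io import StringIO

import pandas as pd

# Inline subset of the question's sample data
csv_text = """PrimaryId,FirstName,LastName,City,CarName,DogName
100,John,Smith,NewYork,Toyota,Spike
100,John,Smith,NewYork,BMW,Rusty
102,Julia,Brown,London,Mini,Lucy
"""
df = pd.read_csv(StringIO(csv_text))

# 'first' for the per-person columns, unique lists for cars and dogs
agg = df.groupby('PrimaryId').agg({
    'City': 'first',
    'FirstName': 'first',
    'LastName': 'first',
    'CarName': lambda s: s.unique().tolist(),
    'DogName': lambda s: s.unique().tolist(),
})

for primary_id, row in agg.iterrows():
    data = {str(primary_id): [row.to_dict()]}
    with open('Output_{}.json'.format(primary_id), 'w') as fh:
        json.dump(data, fh, indent=4)
```

This produces the same Output_<id>.json files while letting pandas do the column-wise aggregation.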