Reading nested JSON with pandas

Date: 2016-11-14 12:31:38

Tags: python json parsing pandas

I am curious how to use pandas to read nested JSON of the following structure:

{
    "number": "",
    "date": "01.10.2016",
    "name": "R 3932",
    "locations": [
        {
            "depTimeDiffMin": "0",
            "name": "Spital am Pyhrn Bahnhof",
            "arrTime": "",
            "depTime": "06:32",
            "platform": "2",
            "stationIdx": "0",
            "arrTimeDiffMin": "",
            "track": "R 3932"
        },
        {
            "depTimeDiffMin": "0",
            "name": "Windischgarsten Bahnhof",
            "arrTime": "06:37",
            "depTime": "06:40",
            "platform": "2",
            "stationIdx": "1",
            "arrTimeDiffMin": "1",
            "track": ""
        },
        {
            "depTimeDiffMin": "",
            "name": "Linz/Donau Hbf",
            "arrTime": "08:24",
            "depTime": "",
            "platform": "1A-B",
            "stationIdx": "22",
            "arrTimeDiffMin": "1",
            "track": ""
        }
    ]
}

Here the array is kept as JSON. I would rather have it expanded into columns.

pd.read_json("/myJson.json", orient='records')

Edit

Thanks for the first answers. I should refine my question: flattening the nested attributes inside the array is not required. A simple concatenation of df.locations['name'], like [A, B, C], would be enough.

My file contains multiple JSON objects (one per line). I want to keep the number, date, name, and locations columns. However, I need to join the locations.

allLocations = ""
isFirst = True
for location in result.locations:
    if isFirst:
        isFirst = False
        allLocations = location['name']
    else:
        allLocations += "; " + location['name']
allLocations

My approach does not seem efficient or pandas-style.
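The loop above can be collapsed into a single str.join over a generator; a minimal sketch, assuming `result` is the parsed JSON dict from the question:

```python
# Sample stand-in for the parsed JSON object from the question.
result = {
    "locations": [
        {"name": "Spital am Pyhrn Bahnhof"},
        {"name": "Windischgarsten Bahnhof"},
        {"name": "Linz/Donau Hbf"},
    ]
}

# Join all location names in one pass instead of the manual first/rest loop.
allLocations = "; ".join(loc["name"] for loc in result["locations"])
print(allLocations)
# → Spital am Pyhrn Bahnhof; Windischgarsten Bahnhof; Linz/Donau Hbf
```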

2 Answers:

Answer 0 (score: 21)

You can use json_normalize:

import json
from pandas.io.json import json_normalize  # in pandas >= 1.0, use pd.json_normalize instead

with open('myJson.json') as data_file:    
    data = json.load(data_file)  

df = json_normalize(data, 'locations', ['date', 'number', 'name'], 
                    record_prefix='locations_')
print (df)
  locations_arrTime locations_arrTimeDiffMin locations_depTime  \
0                                                        06:32   
1             06:37                        1             06:40   
2             08:24                        1                     

  locations_depTimeDiffMin           locations_name locations_platform  \
0                        0  Spital am Pyhrn Bahnhof                  2   
1                        0  Windischgarsten Bahnhof                  2   
2                                    Linz/Donau Hbf               1A-B   

  locations_stationIdx locations_track number    name        date  
0                    0          R 3932         R 3932  01.10.2016  
1                    1                         R 3932  01.10.2016  
2                   22                         R 3932  01.10.2016 

Edit:

You can also use read_json, extract name from the locations column via the DataFrame constructor, and then join the names per group with groupby:

df = pd.read_json("myJson.json")
df.locations = pd.DataFrame(df.locations.values.tolist())['name']
df = df.groupby(['date','name','number'])['locations'].apply(','.join).reset_index()
print (df)
        date    name number                                          locations
0 2016-01-10  R 3932         Spital am Pyhrn Bahnhof,Windischgarsten Bahnho... 
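For the case mentioned in the question edit, where the file holds one JSON object per line, read_json accepts lines=True; a sketch with a small inline sample standing in for the real file:

```python
import io
import pandas as pd

# One JSON object per line (NDJSON); io.StringIO stands in for the real file.
ndjson = io.StringIO(
    '{"number": "", "date": "01.10.2016", "name": "R 3932", '
    '"locations": [{"name": "Spital am Pyhrn Bahnhof"}, {"name": "Linz/Donau Hbf"}]}\n'
)

# lines=True parses each line as a separate record.
df = pd.read_json(ndjson, lines=True)

# Collapse each list of location dicts into a single joined string.
df["locations"] = df["locations"].apply(
    lambda locs: "; ".join(loc["name"] for loc in locs)
)
print(df[["date", "name", "locations"]])
```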

Answer 1 (score: 0)

A possible alternative to pandas.json_normalize is to build your own dataframe by extracting only the selected keys and values from the nested dictionary. The main reason for doing this is that json_normalize gets slow for very large JSON files (and may not always produce the output you want).

So here is another way to flatten a nested dictionary in pandas, using glom. The aim is to extract selected keys and values from the nested dictionary and save them in a separate column of the pandas dataframe:

Here is a step-by-step guide: https://medium.com/@enrico.alemani/flatten-nested-dictionaries-in-pandas-using-glom-7948345c88f5

import pandas as pd
from glom import glom
from ast import literal_eval


target = {
    "number": "",
    "date": "01.10.2016",
    "name": "R 3932",
    "locations":
        {
            "depTimeDiffMin": "0",
            "name": "Spital am Pyhrn Bahnhof",
            "arrTime": "",
            "depTime": "06:32",
            "platform": "2",
            "stationIdx": "0",
            "arrTimeDiffMin": "",
            "track": "R 3932"
        }
}   



# Import data
df = pd.DataFrame([str(target)], columns=['target'])

# Extract id keys and save value into a separate pandas column
df['id'] = df['target'].apply(lambda row: glom(literal_eval(row), 'locations.name'))
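If glom is not available, the same dotted-path lookup can be sketched with the standard library alone; `deep_get` is a hypothetical helper name introduced here for illustration:

```python
from functools import reduce

def deep_get(d, path):
    # Follow a dotted path such as "locations.name" through nested dicts,
    # mirroring what glom(target, "locations.name") does for this simple case.
    return reduce(lambda acc, key: acc[key], path.split("."), d)

target = {
    "number": "",
    "date": "01.10.2016",
    "name": "R 3932",
    "locations": {
        "name": "Spital am Pyhrn Bahnhof",
        "platform": "2",
    },
}

print(deep_get(target, "locations.name"))
# → Spital am Pyhrn Bahnhof
```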