Pandas: retrieving nested data from JSON files

Time: 2016-04-04 11:22:17

Tags: python json pandas dataframe

I'm parsing nested JSON data from here. Some of the files have more than one committee_id associated with them, and I need all of the committees associated with each file. I'm not sure, but I think this means writing a new row for each committee_id. My code is below:

import os.path
import csv
import json

path = '/home/jayaramdas/anaconda3/Thesis/govtrack/bills109/hr'
dirs = os.listdir(path)
outputfile = open('df/h109_s_b', 'w', newline='')                            
outputwriter = csv.writer(outputfile)

for dir in dirs:
    with open(path + "/" + dir + "/data.json", "r") as f:
        data = json.load(f)

        a = data['introduced_at']
        b = data['bill_id']
        c = data['sponsor']['thomas_id']
        d = data['sponsor']['state']
        e = data['sponsor']['name']
        f = data['sponsor']['type']
        i = data['subjects_top_term']   
        j = data['official_title']               

        if data['committees']:
            g = data['committees'][0]['committee_id']
        else:
            g = "None"                      
    outputwriter.writerow([a, b, c, d, e, f, g, i, j])
outputfile.close()       
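(For reference, a minimal sketch of the "one row per committee_id" idea mentioned above. The helper name write_committee_rows and the row_prefix argument are illustrative, not part of the original code; it assumes the bill-level fields have already been collected into a list.)

# Illustrative sketch only: emit one row per committee entry instead of just the first,
# carrying subcommittee_id when the bill was referred to a subcommittee.
def write_committee_rows(writer, data, row_prefix):
    for committee in data.get('committees', []) or [{}]:
        writer.writerow(row_prefix + [
            committee.get('committee_id', 'None'),
            committee.get('subcommittee_id', 'na'),
        ])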

The problem I'm running into is that my code only collects the first committee_id listed. For example, the file hr145 looks like this:

 "committees": [
{
  "activity": [
    "referral", 
    "in committee"
  ], 
  "committee": "House Transportation and Infrastructure", 
  "committee_id": "HSPW"
}, 
{
  "activity": [
    "referral"
  ], 
  "committee": "House Transportation and Infrastructure", 
  "committee_id": "HSPW", 
  "subcommittee": "Subcommittee on Economic Development, Public Buildings and Emergency Management", 
  "subcommittee_id": "13"
}, 
{
  "activity": [
    "referral", 
    "in committee"
  ], 
  "committee": "House Financial Services", 
  "committee_id": "HSBA"
}, 
{
  "activity": [
    "referral"
  ], 
  "committee": "House Financial Services", 


  "committee_id": "HSBA", 
  "subcommittee": "Subcommittee on Domestic and International Monetary Policy, Trade, and Technology", 
  "subcommittee_id": "19"
}

Here is where it gets a bit tricky: when a bill is referred to a subcommittee, I also want the subcommittee_id associated with its committee_id:

bill_id      committee   subcommittee   introduced_at   thomas_id   state   name
hr145-109    HSPW        na             "2005-01-4"     73          NY      "McHugh, John M."
hr145-109    HSPW        13             "2005-01-4"     73          NY      "McHugh, John M."
hr145-109    HSBA        na             "2005-01-4"     73          NY      "McHugh, John M."
hr145-109    HSBA        19             "2005-01-4"     73          NY      "McHugh, John M."

Any ideas?

1 Answer:

Answer 0 (score: 2):

You can do it this way:

In [111]: with open(fn) as f:
   .....:     data = ujson.load(f)
   .....:

In [112]: committees = pd.io.json.json_normalize(data, 'committees')

In [113]: committees
Out[113]:
             activity                                committee committee_id                            subcommittee subcommittee_id
0          [referral]                House Energy and Commerce         HSIF                                     NaN             NaN
1          [referral]                House Energy and Commerce         HSIF  Subcommittee on Energy and Air Quality              03
2          [referral]        House Education and the Workforce         HSED                                     NaN             NaN
3          [referral]                 House Financial Services         HSBA                                     NaN             NaN
4          [referral]                        House Agriculture         HSAG                                     NaN             NaN
5  [referral, markup]                          House Resources         HSII                                     NaN             NaN
6          [referral]                            House Science         HSSY                                     NaN             NaN
7          [referral]                     House Ways and Means         HSWM                                     NaN             NaN
8          [referral]  House Transportation and Infrastructure         HSPW                                     NaN             NaN
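If you also want the bill-level fields (bill_id, introduced_at, sponsor details) next to each committee row, as in the desired output above, json_normalize accepts a meta argument. A hedged sketch, assuming the same data dict loaded above (the meta fields chosen here come from the question, not from the original answer):

# One row per committee, with bill-level columns pulled in via meta
committees = pd.io.json.json_normalize(
    data,
    record_path='committees',
    meta=['bill_id', 'introduced_at',
          ['sponsor', 'thomas_id'], ['sponsor', 'state'], ['sponsor', 'name']],
)

The nested meta paths come out as dotted column names such as sponsor.thomas_id, which can be renamed afterwards if needed.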

Update:

If you want to get all the data into a single DF, you can do it like this:

import os
import ujson
import pandas as pd

start_path = '/home/jayaramdas/anaconda3/Thesis/govtrack/bills109/hr'

def get_merged_json(start_path):
    # walk the directory tree and load every .json file into a list of dicts
    return [ujson.load(open(os.path.join(path, f)))
            for path, _, files in os.walk(start_path)
            for f in files
            if f.endswith('.json')
           ]

df = pd.read_json(ujson.dumps(get_merged_json(start_path)))

P.S. It will put all the committees as JSON data in a single column, but …
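Regarding that P.S.: one hedged way to expand the nested committees afterwards, reusing the meta idea from above (the field choices are assumptions, not part of the original answer), is to normalize each loaded bill and concatenate the results:

# Sketch: one row per committee across all files; bills without committees are skipped
records = get_merged_json(start_path)
per_committee = pd.concat(
    [pd.io.json.json_normalize(rec, 'committees', meta=['bill_id', 'introduced_at'])
     for rec in records if rec.get('committees')],
    ignore_index=True)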