JSON-to-CSV conversion takes a very long time on large files in Python

Date: 2018-11-26 18:44:14

Tags: python json pandas csv

I am trying to convert a very large JSON file to CSV. The code works fine on smaller files, but the same code takes an enormous amount of time on larger ones. I first tested it on a 91 MB file containing 80,000 entries, which took about 45 minutes; after that, a larger file with 300,000 entries took about 5 hours. Is there some way to do this with multiprocessing? I am a Python beginner, so I don't know how to use multiprocessing or multithreading in Python. Here is my code:

import json
import time
import pandas as pd

csv_project = pd.DataFrame([], columns=['abstract', 'authors', 'n_citation',
                                        'references', 'title', 'venue', 'year', 'id'])

with open('test.json', 'r') as f:
    data = f.readlines()

j = 0
for k, i in enumerate(data):
    if '{' in i and '}' in i:
        j += 1
        dictionary = json.loads(i)
        csv_project = csv_project.append(dictionary, ignore_index=True)
    else:
        pass
    if j == 10000:
        print(str(k) + ' number of entries done')
        csv_project.to_csv('data.csv')
        j = 0
csv_project.to_csv('data.csv')

Any help would be greatly appreciated. Here is a sample of the JSON format:

    {"abstract": "AdaBoost algorithm based on Haar-like features can achieves high accuracy (above 95%) in object detection.", 
"authors": ["Zheng Xu", "Runbin Shi", "Zhihao Sun", "Yaqi Li", "Yuanjia Zhao", "Chenjian Wu"], 
"n_citation": 0,
 "references": ["0a11984c-ab6e-4b75-9291-e1b700c98d52", "1f4152a3-481f-4adf-a29a-2193a3d4303c", "3c2ddf0a-237b-4d17-8083-c90df5f3514b", "522ce553-29ea-4e0b-9ad3-0ed4eb9de065", "579e5f24-5b13-4e92-b255-0c46d066e306", "5d0b987d-eed9-42ce-9bf3-734d98824f1b", "80656b4d-b24c-4d92-8753-bdb965bcd50a", "d6e37fb1-5f7e-448e-847b-7d1f1271c574"],
 "title": "A Heterogeneous System for Real-Time Detection with AdaBoost",
 "venue": "high performance computing and communications",
 "year": 2016,
 "id": "001eef4f-1d00-4ae6-8b4f-7e66344bbc6e"}


{"abstract": "In this paper, a kind of novel jigsaw EBG structure is designed and applied into conformal antenna array",
 "authors": ["Yufei Liang", "Yan Zhang", "Tao Dong", "Shan-wei Lu"], 
"n_citation": 0, 
"references": [], 
"title": "A novel conformal jigsaw EBG structure design", 
"venue": "international conference on conceptual structures", 
"year": 2016, 
"id": "002e0b7e-d62f-4140-b015-1fe29a9acbaa"}

2 Answers:

Answer 0 (score: 1)

You are keeping all of the data in memory, once as the list of lines and once as the DataFrame, and DataFrame.append copies the entire frame on every call, so the cost grows with every row you add. That is what slows your processing down.

Using the csv module lets you process the file in streaming mode:

import json
import csv

fields = ['abstract', 'authors', 'n_citation', 'references',
          'title', 'venue', 'year', 'id']

with open('test.json') as lines, open('data.csv', 'w', newline='') as output:
    writer = csv.DictWriter(output, fields)
    writer.writeheader()
    for line in lines:
        line = line.strip()
        # startswith/endswith avoids an IndexError on blank lines
        if line.startswith('{') and line.endswith('}'):
            writer.writerow(json.loads(line))

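One caveat with DictWriter here: list-valued fields such as authors and references get written as their Python repr (e.g. `['Zheng Xu', ...]`). If you want plain strings in those cells instead, a possible variant (a sketch; the flatten helper, the '; ' separator, and the tiny demo input are my own choices, not part of the original answer) joins list values before writing:

```python
import json
import csv

# Tiny demo input standing in for the real test.json.
with open('test.json', 'w') as f:
    f.write('{"abstract": "demo", "authors": ["A", "B"], "n_citation": 0, '
            '"references": [], "title": "t", "venue": "v", "year": 2016, "id": "x"}\n')

FIELDS = ['abstract', 'authors', 'n_citation', 'references',
          'title', 'venue', 'year', 'id']

def flatten(record, sep='; '):
    # Join list values into one string so the CSV cell stays readable.
    return {key: sep.join(value) if isinstance(value, list) else value
            for key, value in record.items()}

with open('test.json') as lines, open('data.csv', 'w', newline='') as output:
    writer = csv.DictWriter(output, FIELDS)
    writer.writeheader()
    for line in lines:
        line = line.strip()
        if line.startswith('{') and line.endswith('}'):
            writer.writerow(flatten(json.loads(line)))
```

This keeps the streaming behaviour of the answer above; only the cell formatting changes.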
Answer 1 (score: 0)

It looks like you are reading a JSON Lines file, which might look something like this:

{key1: value1, key2: [value2, value3, value4], key3: value3}
{key1: value4, key2: [value5, value6], key3: value7}

Note that there is no comma at the end of each line, and that every line is valid JSON on its own.

Luckily, pandas can read a JSON Lines file directly, like this:

pd.read_json('test.json', lines=True)

Since the column names are exactly the same as the JSON keys, you don't need to set up an empty DataFrame in advance; read_json does all the parsing for you. Example:

df = pd.read_json('test.json', lines=True)
print(df)

                                            abstract  ...   year
0  AdaBoost algorithm based on Haar-like features...  ...   2016
1  In this paper, a kind of novel jigsaw EBG stru...  ...   2016

[2 rows x 8 columns]

Even more luckily, if you are constrained by memory, the chunksize parameter turns the .read_json method into a generator:

json_reader = pd.read_json('test.json', lines=True, chunksize=10000)

Now, as you iterate over json_reader, each iteration yields a DataFrame containing the next 10,000 lines of JSON from the file. Example:

for j in json_reader:
  print(j)

                                            abstract  ...   year
0  AdaBoost algorithm based on Haar-like features...  ...   2016
1  In this paper, a kind of novel jigsaw EBG stru...  ...   2016

[2 rows x 8 columns]
                                            abstract  ...   year
2  AdaBoost algorithm based on Haar-like features...  ...   2016
3  In this paper, a kind of novel jigsaw EBG stru...  ...   2016

[2 rows x 8 columns]
                                            abstract  ...   year
4  AdaBoost algorithm based on Haar-like features...  ...   2016
5  In this paper, a kind of novel jigsaw EBG stru...  ...   2016

[2 rows x 8 columns]

Combining all of this newfound knowledge, you can use chunksize=10000 and write each chunked DataFrame out as a separate CSV, like this:

for i, df in enumerate(json_reader):
  df.to_csv('my_csv_file_{}'.format(i))

Notice that I combined the enumerate() function, so that we get an auto-incrementing index number, with str.format() to append that index number to the generated CSV filenames.
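If you would rather end up with a single CSV than many, a small variant (a sketch; the output name 'data.csv' and the tiny demo input are my own stand-ins) appends each chunk to the same file and writes the header only for the first chunk:

```python
import pandas as pd

# Tiny demo input standing in for the real test.json.
with open('test.json', 'w') as f:
    f.write('{"title": "a", "year": 2016}\n{"title": "b", "year": 2017}\n')

json_reader = pd.read_json('test.json', lines=True, chunksize=1)

for i, df in enumerate(json_reader):
    # Write the header only once, then append subsequent chunks.
    df.to_csv('data.csv', mode='w' if i == 0 else 'a',
              header=(i == 0), index=False)
```

For the real file you would set chunksize back to 10000; the append pattern is the same regardless of chunk size.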

You can see an example here on Repl.it.