Reading a large JSON file with multiple objects in Python

Asked: 2018-04-12 17:55:32

Tags: json python-3.x performance pandas-datareader

I have a large GZ-compressed JSON file where each line is a JSON object (i.e. a Python dictionary).

Here are the first two lines as an example:

  {"ID_CLIENTE":"o+AKj6GUgHxcFuaRk6/GSvzEWRYPXDLjtJDI79c7ccE=","ORIGEN":"oaDdZDrQCwqvi1YhNkjIJulA8C0a4mMZ7ESVlEWGwAs=","DESTINO":"OOcb8QTlctDfYOwjBI02hUJ1o3Bro/ir6IsmZRigja0=","PRECIO":0.0023907284768211919,"RESERVA":"2015-05-20","SALIDA":"2015-07-26","LLEGADA":"2015-07-27","DISTANCIA":0.48962542317352847,"EDAD":"19","sexo":"F"}{"ID_CLIENTE":"WHDhaR12zCTCVnNC/sLYmN3PPR3+f3ViaqkCt6NC3mI=","ORIGEN":"gwhY9rjoMzkD3wObU5Ito98WDN/9AN5Xd5DZDFeTgZw=","DESTINO":"OOcb8QTlctDfYOwjBI02hUJ1o3Bro/ir6IsmZRigja0=","PRECIO":0.001103046357615894,"RESERVA":"2015-04-08","SALIDA":"2015-07-24","LLEGADA":"2015-07-24","DISTANCIA":0.21382548869717155,"EDAD":"13","sexo":"M"}

So I'm using the following code to read each line into a Pandas DataFrame:

import json
import gzip
import pandas as pd
import random

with gzip.GzipFile('data/000000000000.json.gz', 'r',) as fin:
    data_lan = pd.DataFrame()
    for line in fin:
        data_lan = pd.DataFrame([json.loads(line.decode('utf-8'))]).append(data_lan)

But this takes ages. Any suggestions on how to read the data faster?

EDIT: Here's what finally solved the problem:

import json
import gzip
import pandas as pd

with gzip.GzipFile('data/000000000000.json.gz', 'r',) as fin:
    data_lan = []
    for line in fin:
        data_lan.append(json.loads(line.decode('utf-8')))

data = pd.DataFrame(data_lan)
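For line-delimited JSON like this, a single pd.read_json call may also work; this is only a minimal sketch, assuming a pandas version whose read_json supports lines= and gzip compression (the path is the one used above):

import pandas as pd

# lines=True parses one JSON object per line;
# compression='gzip' unpacks the .gz wrapper before parsing.
data = pd.read_json('data/000000000000.json.gz', lines=True, compression='gzip')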

2 Answers:

Answer 0 (score: 0)

I've run into a similar problem myself; append() is rather slow. I usually load the JSON file into a list of dicts and then create the DataFrame in one go. That way you keep the flexibility a list gives you, and you only convert it to a DataFrame once you're sure about the data in the list. Here's an implementation of that idea:

import json
import gzip
import pandas as pd


def get_contents_from_json(file_path: str) -> list:
    """
    Reads the contents of the JSON file into a list of dicts.
    :param file_path: Path to the gzipped JSON file.
    :return: A list of all records in the file.
    """
    try:
        with gzip.open(file_path) as file:
            contents = file.read()
        return json.loads(contents.decode('UTF-8'))
    except json.JSONDecodeError:
        print('Error while reading json file')
    except FileNotFoundError:
        print(f'The JSON file was not found at the given path: \n{file_path}')


def main(file_path: str):
    file_contents = get_contents_from_json(file_path)
    if not isinstance(file_contents, list):
        # I've considered you have a JSON Array in your file
        # if not let me know in the comments
        raise TypeError("The file doesn't have a JSON Array!!!")
    all_columns = file_contents[0].keys()
    data_frame = pd.DataFrame(columns=all_columns, data=file_contents)
    print(f'Loaded {int(data_frame.size / len(all_columns))} Rows', 'Done!', sep='\n')


if __name__ == '__main__':
    main(r'C:\Users\carrot\Desktop\dummyData.json.gz')

Answer 1 (score: 0)

A pandas DataFrame sits in contiguous blocks of memory, which means pandas needs to know the size of the data set when it creates the frame. Since append changes that size, new memory has to be allocated and the original plus the new data copied over. As the set grows, each copy gets bigger and bigger.
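To make that copying cost concrete, here is a minimal timing sketch (the dummy rows and row count are made up for illustration). It grows a frame row by row with pd.concat, which reallocates and copies everything on each iteration just like append does, and compares that with collecting plain dicts and building the DataFrame once:

import time
import pandas as pd

# Dummy rows standing in for the parsed JSON lines (values are made up).
rows = [{"PRECIO": 0.01 * i, "EDAD": str(i % 90)} for i in range(2000)]

# Grow the frame one row at a time: every iteration reallocates and copies
# the whole frame (the same behaviour append showed in the question).
start = time.perf_counter()
df_slow = pd.DataFrame()
for row in rows:
    df_slow = pd.concat([df_slow, pd.DataFrame([row])], ignore_index=True)
print("row-by-row copies:", time.perf_counter() - start)

# Collect plain dicts first and build the DataFrame once: a single allocation.
start = time.perf_counter()
df_fast = pd.DataFrame(rows)
print("build once:       ", time.perf_counter() - start)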

You can avoid this problem with from_records. You first need to know the row count, which means scanning the file. You could cache that number if you do this often, but the scan runs relatively fast anyway. Once you have the size, pandas can allocate the memory efficiently.

import json
import gzip
import pandas as pd

file_to_test = 'data/000000000000.json.gz'

# count rows
with gzip.GzipFile(file_to_test, 'r') as fin:
    row_count = sum(1 for _ in fin)

# build dataframe from records, parsing each JSON line into a dict
with gzip.GzipFile(file_to_test, 'r') as fin:
    records = (json.loads(line.decode('utf-8')) for line in fin)
    data_lan = pd.DataFrame.from_records(records, nrows=row_count)