Pandas takes too long and uses too much memory when working with an Excel file

Asked: 2017-09-06 19:00:34

Tags: json excel python-3.x pandas jupyter-notebook

I am working with an Excel sheet that has fewer than 50k rows. What I want to do is: using a specific column, get all of its unique values, and then, for each unique value, collect all rows containing that value and put them in this format:

[{
"unique_field_value": [Array containing row data that match the unique value as dictionaries]
},]

The thing is, when I test with a small number of rows, say 1000, everything runs fine. As the row count grows, memory usage grows with it until it maxes out and my PC freezes. So, is there something wrong with pandas? Here are my platform details:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID_LIKE=debian
VERSION_ID="16.04"

Here is the code I run in a Jupyter Notebook:

import pandas as pd
import simplejson
import datetime

def datetime_handler(x):
    if isinstance(x, datetime.datetime):
        return x.isoformat()
    raise TypeError("Type not Known")

path = "/home/misachi/Downloads/new members/my_file.xls"
df = pd.read_excel(path, index_col=None, skiprows=[0])
df = df.dropna(thresh=5)
df2 = df.drop_duplicates(subset=['corporate'])

schemes = df2['corporate'].values

result_list = []
result_dict = {}

for count, name in enumerate(schemes):
    inner_dict = {}  # unused
    col_val = schemes[count]
    foo = df['corporate'] == col_val  # boolean mask for rows with this corporate
    data = df[foo].to_json(orient='records', date_format='iso')
    result_dict[name] = simplejson.loads(data)
    result_list.append(result_dict)  # appends the same dict object each time
#     print(result_list)
#     if count == 3:
#         break

dumped = simplejson.dumps(result_list, ignore_nan=True, default=datetime_handler)

with open('/home/misachi/Downloads/new members/members/folder/insurance.json', 'w') as json_f:
    json_f.write(dumped)
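
A likely culprit here, independent of pandas: result_dict is created once, outside the loop, and the same object is appended to result_list on every iteration. By the end, result_list holds one reference per unique corporate to a dict that has grown to contain every group, so simplejson.dumps serializes all of the data roughly N times over, and memory and output size blow up quadratically. A minimal sketch of the effect (the names are illustrative):

shared = {}
out = []
for i in range(3):
    shared[i] = "row data"
    out.append(shared)  # appends a reference, not a snapshot

# All three list entries are the same object, now holding all three keys:
print(out)
# [{0: 'row data', 1: 'row data', 2: 'row data'},
#  {0: 'row data', 1: 'row data', 2: 'row data'},
#  {0: 'row data', 1: 'row data', 2: 'row data'}]

Creating a fresh dict inside the loop, or grouping once as in the edit below, avoids this.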

EDIT:

Here is a sample of the expected output:

[{
    "TABBY MEMORIAL CATHEDRAL": [{
        "corp_id": 8494,
        "smart": null,
        "copay": null,
        "corporate": "TABBY MEMORIAL CATHEDRAL",
        "category": "CAT A",
        "member_names": "Brian Maombi",
        "member_no": "84984",
        "start_date": "2017-03-01T00:00:00.000Z",
        "end_date": "2018-02-28T00:00:00.000Z",
        "outpatient": "OUTPATIENT"
    }, {
        "corp_id": 8494,
        "smart": null,
        "copay": null,
        "corporate": "TABBY MEMORIAL CATHEDRAL",
        "category": "CAT A",
        "member_names": "Omula Peter",
        "member_no": "4784984",
        "start_date": "2017-03-01T00:00:00.000Z",
        "end_date": "2018-02-28T00:00:00.000Z",
        "outpatient": "OUTPATIENT"
    }],
    "CHECKIFY KENYA LTD": [{
        "corp_id": 7489,
        "smart": "SMART",
        "copay": null,
        "corporate": "CHECKIFY KENYA LTD",
        "category": "CAT A",
        "member_names": "BENARD KONYI",
        "member_no": "ABB/8439",
        "start_date": "2017-08-01T00:00:00.000Z",
        "end_date": "2018-07-31T00:00:00.000Z",
        "outpatient": "OUTPATIENT"
    }, {
        "corp_id": 7489,
        "smart": "SMART",
        "copay": null,
        "corporate": "CHECKIFY KENYA LTD",
        "category": "CAT A",
        "member_names": "KEVIN WACHAI",
        "member_no": "ABB/67484",
        "start_date": "2017-08-01T00:00:00.000Z",
        "end_date": "2018-07-31T00:00:00.000Z",
        "outpatient": "OUTPATIENT"
    }]
}]

The complete, cleaned-up code is:

import os
import pandas as pd
import simplejson
import datetime


def datetime_handler(x):
    if isinstance(x, datetime.datetime):
        return x.isoformat()
    raise TypeError("Unknown type")


def work_on_data(filename):
    if not os.path.isfile(filename):
        raise IOError("File not found: %s" % filename)
    df = pd.read_excel(filename, index_col=None, skiprows=[0])
    df = df.dropna(thresh=5)

    # Group once instead of filtering repeatedly: one single-key dict per
    # unique corporate, matching the sample output above.
    result_list = [{n: g.to_dict('records')} for n, g in df.groupby('corporate')]

    dumped = simplejson.dumps(result_list, ignore_nan=True, default=datetime_handler)
    return dumped


dumped = work_on_data('/home/misachi/Downloads/new members/my_file.xls')
with open('/home/misachi/Downloads/new members/members/folder/insurance.json', 'w') as json_f:
    json_f.write(dumped)
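
If peak memory is still a concern, the intermediate dumped string can be skipped: simplejson.dump writes straight to a file object. A minimal sketch under that assumption (result_list is still built in memory; only the full serialized string is avoided, and dump_to_file is a made-up helper name):

import simplejson

def dump_to_file(df, out_path):
    # Group once, then stream the JSON to disk instead of building
    # the whole string in memory first.
    result_list = [{n: g.to_dict('records')} for n, g in df.groupby('corporate')]
    with open(out_path, 'w') as json_f:
        simplejson.dump(result_list, json_f, ignore_nan=True,
                        default=datetime_handler)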

2 answers:

Answer 0 (score: 1)

Get the dictionary with:

result_dict = [{n: g.to_dict('records') for n, g in df.groupby('corporate')}]
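
On a toy frame (column names made up for illustration), this yields a single dict keyed by each unique corporate value, wrapped in a one-element list:

import pandas as pd

df = pd.DataFrame({
    'corporate': ['A', 'A', 'B'],
    'member_names': ['x', 'y', 'z'],
})

result_dict = [{n: g.to_dict('records') for n, g in df.groupby('corporate')}]
print(result_dict)
# [{'A': [{'corporate': 'A', 'member_names': 'x'},
#         {'corporate': 'A', 'member_names': 'y'}],
#   'B': [{'corporate': 'B', 'member_names': 'z'}]}]

Note the brace placement: this builds one dict holding all the groups, whereas the comprehension in the question's edit ([{n: ...} for n, g in ...]) builds a list of single-key dicts, which matches the sample output more closely.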

Answer 1 (score: 0)

Specify the chunksize=10000 parameter to read_excel() and loop through the file until you reach the end of the data. This will help you manage memory when working with large files. If you need to handle multiple sheets, follow this example:
    for chunk in pd.read_excel(path, index_col=None, skiprows=[0], chunksize=10000):
        df = chunk.dropna(thresh=5)
        df2 = df.drop_duplicates(subset=['corporate'])
        # rest of your code
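
One caveat worth hedging: pandas' read_excel does not accept a chunksize keyword (read_csv does), so the loop above may fail with a TypeError. A workaround sketch under that assumption is to convert the sheet to CSV once and then stream it with read_csv; the intermediate path below is made up for illustration:

import pandas as pd

xls_path = '/home/misachi/Downloads/new members/my_file.xls'
csv_path = '/tmp/my_file.csv'  # hypothetical intermediate file

# One-off conversion; this still loads the sheet once, but every later
# pass over the data can then stream from the CSV in fixed-size chunks.
pd.read_excel(xls_path, index_col=None, skiprows=[0]).to_csv(csv_path, index=False)

pieces = []
for chunk in pd.read_csv(csv_path, chunksize=10000):
    pieces.append(chunk.dropna(thresh=5))
df = pd.concat(pieces, ignore_index=True)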