How can I prevent Spyder from crashing if a list holds too much data?

Time: 2018-09-13 17:42:20

Tags: python python-3.x memory spyder

I am running the code below in Anaconda Spyder to fetch XML objects from an API. The XML objects appended to a list keep growing, and for some reason Spyder keeps crashing; I believe it is because the list gets too large.

I have to make roughly 300K API calls, and I think the volume of data they return is what crashes the application, since there is simply too much of it once it is all appended to the list.

Below is the code that calls the API and saves the results into a list. lst1 is the list of IDs I pass into the API URL. I need to account for an HTTP request timing out, or build a mechanism that clears the accumulated list, so that I can restart the requests from the ID in lst1 that was being passed into the API URL.

import requests
import pandas as pd
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup 
import time
from concurrent import futures

lst1=[1,2,3]

lst =[]

for i in lst1:
    url = 'urlId={}'.format(i)  # placeholder URL; the real endpoint is built from the ID
    while True:
        try:
            xml_data1 = requests.get(url).text
            print(xml_data1)
            break
        except requests.exceptions.RequestException as e:
            # keep retrying the same ID until the request succeeds
            print(e)
    lst.append(xml_data1)
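
One way to cover the timeout/restart concern above would be to stream each response to disk and checkpoint the last ID that succeeded, so nothing accumulates in lst and a crash only costs the current request. A minimal sketch of that idea (the CHECKPOINT_FILE and RAW_DIR names, the 30-second timeout, and the 5-second back-off are my own placeholders, not part of the original code):

import os
import time
import requests

CHECKPOINT_FILE = 'last_id.txt'   # hypothetical checkpoint file
RAW_DIR = 'xml_responses'         # hypothetical directory for raw responses

os.makedirs(RAW_DIR, exist_ok=True)

# Resume from the last successfully fetched ID, if a checkpoint exists
# (assumes the checkpointed ID is still present in lst1).
start_idx = 0
if os.path.exists(CHECKPOINT_FILE):
    with open(CHECKPOINT_FILE) as f:
        last_id = int(f.read().strip())
    start_idx = lst1.index(last_id) + 1

for i in lst1[start_idx:]:
    url = 'urlId={}'.format(i)  # placeholder URL, as in the question
    while True:
        try:
            # A timeout keeps a single call from hanging indefinitely.
            xml_data1 = requests.get(url, timeout=30).text
            break
        except requests.exceptions.RequestException as e:
            print(e)
            time.sleep(5)  # back off briefly before retrying
    # Write each response straight to disk instead of appending to a list.
    with open(os.path.join(RAW_DIR, '{}.xml'.format(i)), 'w') as f:
        f.write(xml_data1)
    # Record progress so a crash only loses the current request.
    with open(CHECKPOINT_FILE, 'w') as f:
        f.write(str(i))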

I was thinking that if I could apply the function below to extract the XML into dataframes and perform the dataframe operations I need while clearing the appended list lst, that would free up memory. If that is not the case, I am open to any suggestions that keep the code or the application from crashing under what I believe is too much XML data in the list:

def create_dataframe(xml):
    soup = BeautifulSoup(xml, "xml")
    # Get Attributes from all nodes
    attrs = []
    for elm in soup():  # soup() is equivalent to soup.find_all()
        attrs.append(elm.attrs)
    # Since you want the data in a dataframe, it makes sense for each field to be a new row consisting of all the other node attributes
    fields_attribute_list = [x for x in attrs if 'Id' in x.keys()]
    other_attribute_list = [x for x in attrs if 'Id' not in x.keys() and x != {}]
    # Make a single dictionary with the attributes of all nodes except for the `Field` nodes.
    attribute_dict = {}
    for d in other_attribute_list:
        for k, v in d.items():
            attribute_dict.setdefault(k, v)
    # Update each field row with attributes from all other nodes.
    full_list = []
    for field in fields_attribute_list:
        field.update(attribute_dict)
        full_list.append(field)
    # Make Dataframe
    df = pd.DataFrame(full_list)
    return df


with futures.ThreadPoolExecutor() as executor:  # Or use ProcessPoolExecutor
    df_list = executor.map(create_dataframe, lst)

full_df = pd.concat(df_list)
print(full_df)


#final pivoted dataframe
final_df = pd.pivot_table(full_df, index='Id', columns='FieldTitle', values='Value', aggfunc='first').reset_index()
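
To act on the idea of clearing lst and freeing memory, the conversion and pivot could also be done in chunks, appending each partial result to a file so that only one chunk of XML strings and dataframes is ever in memory. A rough sketch under those assumptions (CHUNK_SIZE and OUTPUT_CSV are made-up names, and it assumes every chunk yields the same set of FieldTitle columns):

CHUNK_SIZE = 1000                  # assumed batch size, tune to available memory
OUTPUT_CSV = 'pivoted_output.csv'  # assumed output file name

with futures.ThreadPoolExecutor() as executor:
    for start in range(0, len(lst), CHUNK_SIZE):
        chunk = lst[start:start + CHUNK_SIZE]
        chunk_df = pd.concat(executor.map(create_dataframe, chunk))
        chunk_pivot = pd.pivot_table(chunk_df, index='Id', columns='FieldTitle',
                                     values='Value', aggfunc='first').reset_index()
        # Append each chunk's pivot to disk; only the first chunk writes the header.
        chunk_pivot.to_csv(OUTPUT_CSV, mode='a', header=(start == 0), index=False)
        del chunk_df, chunk_pivot  # drop references so the memory can be reclaimed

# Once everything is on disk, the raw XML strings can be released as well.
lst.clear()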

0 Answers:

There are no answers