I have an application that keeps crashing, and I believe it is because there is too much data in a list.
I am pulling XML data from an API and appending each response to a list. I have to make roughly 300K API calls, and I think that is what crashes the application: the list simply accumulates too much data as I keep appending to it.
Below is the code that calls the API and saves the results to the list. lst1
is the list of IDs that I pass into the API. I need to account for the case where an HTTP request times out, or else build a mechanism that clears the appended data list, so that I can restart the requests from whichever ID in lst1
was last passed into the API URL.
import requests
import pandas as pd
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup
import time
from concurrent import futures

lst1 = [1, 2, 3]
lst = []
for i in lst1:
    url = 'urlId={}'.format(i)
    # retry the request until it succeeds
    while True:
        try:
            xml_data1 = requests.get(url).text
            print(xml_data1)
            break
        except requests.exceptions.RequestException as e:
            print(e)
    lst.append(xml_data1)
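To make the restart idea more concrete, here is a rough sketch of what I had in mind for resuming from the last successfully fetched ID. It is just an illustration, not code I have running: the checkpoint file name, the load_last_id/save_last_id helpers, and the 30-second timeout are all things I made up for the example.

import os

CHECKPOINT_FILE = 'last_id.txt'  # hypothetical file that remembers the last ID fetched

def load_last_id():
    # Return the last successfully processed ID, or None if starting fresh
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return int(f.read().strip())
    return None

def save_last_id(i):
    # Record the ID we just finished so a later run can resume after it
    with open(CHECKPOINT_FILE, 'w') as f:
        f.write(str(i))

last_id = load_last_id()
# Skip the IDs that were already fetched on a previous run
remaining = lst1 if last_id is None else lst1[lst1.index(last_id) + 1:]

for i in remaining:
    url = 'urlId={}'.format(i)
    try:
        xml_data1 = requests.get(url, timeout=30).text  # give up on a single request after 30s
    except requests.exceptions.RequestException as e:
        print(e)
        break  # stop here; the next run resumes from the checkpoint
    lst.append(xml_data1)
    save_last_id(i)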
I was thinking that if I could apply the function below to unpack the XML into a dataframe and perform the dataframe operations I need while clearing out the appended data list lst
as I go, it could free memory (I have sketched roughly what I mean after the code at the end of this post). If that is not the right approach, I am open to any suggestion that keeps the code or application from crashing from what I believe is too much XML data in the list:
def create_dataframe(xml):
    soup = BeautifulSoup(xml, "xml")

    # Get attributes from all nodes
    attrs = []
    for elm in soup():  # soup() is equivalent to soup.find_all()
        attrs.append(elm.attrs)

    # Since you want the data in a dataframe, it makes sense for each field to be a new row
    # consisting of all the other node attributes
    fields_attribute_list = [x for x in attrs if 'Id' in x.keys()]
    other_attribute_list = [x for x in attrs if 'Id' not in x.keys() and x != {}]

    # Make a single dictionary with the attributes of all nodes except for the `Field` nodes.
    attribute_dict = {}
    for d in other_attribute_list:
        for k, v in d.items():
            attribute_dict.setdefault(k, v)

    # Update each field row with attributes from all other nodes.
    full_list = []
    for field in fields_attribute_list:
        field.update(attribute_dict)
        full_list.append(field)

    # Make dataframe
    df = pd.DataFrame(full_list)
    return df
with futures.ThreadPoolExecutor() as executor:  # Or use ProcessPoolExecutor
    df_list = executor.map(create_dataframe, lst)

full_df = pd.concat(df_list)
print(full_df)

# final pivoted dataframe
final_df = pd.pivot_table(full_df, index='Id', columns='FieldTitle', values='Value', aggfunc='first').reset_index()
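And here is roughly what I meant by converting the XML to dataframes in chunks and clearing lst as I go, so the raw XML strings never all sit in memory at once. This is only a sketch under my assumptions: the chunk size of 1000 is arbitrary, the fetch loop is simplified (no retry), and I am not sure whether the concatenated dataframes themselves would still be too large, which is part of what I am asking.

chunk_dfs = []
chunk = []

for i in lst1:
    url = 'urlId={}'.format(i)
    try:
        chunk.append(requests.get(url, timeout=30).text)
    except requests.exceptions.RequestException as e:
        print(e)
        continue

    # Once a chunk is full, turn the raw XML into dataframes and drop the strings
    if len(chunk) >= 1000:
        with futures.ThreadPoolExecutor() as executor:
            chunk_dfs.append(pd.concat(executor.map(create_dataframe, chunk)))
        chunk.clear()  # free the XML strings belonging to this chunk

# Handle whatever is left over after the loop
if chunk:
    with futures.ThreadPoolExecutor() as executor:
        chunk_dfs.append(pd.concat(executor.map(create_dataframe, chunk)))
    chunk.clear()

full_df = pd.concat(chunk_dfs)
final_df = pd.pivot_table(full_df, index='Id', columns='FieldTitle',
                          values='Value', aggfunc='first').reset_index()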