The goal is to build one large dataframe on which I can then perform operations, such as averaging each column's rows, etc.
The problem is that as the dataframe grows, each iteration gets slower as well, so I can never finish the computation.
Note: my df has only two columns. col1 isn't needed for the computation itself, which is why I join on it; col1 is a string and col2 is a float. The row count is 3k. Here is an example:
folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769
Question: Any ideas on how to make this code more efficient, and where the bottleneck is? Also, I'm not sure whether merge is the right tool for this.
Current idea: One solution I'm considering is to preallocate the memory and specify the column types: col1 is a string and col2 is a float. (A sketch of this idea appears after the code below.)
import pandas as pd

df = pd.DataFrame()  # create an empty data frame
for i in range(1000):
    if i == 0:
        df = generate_new_df(arg1, arg2)
    else:
        # outer-merge the accumulated frame with each new one on col1
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
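For what it's worth, here is a minimal sketch of that preallocation idea, assuming the number of frames (1000, as in the loop above) and the 3k row count are known up front, and that every frame returns the same col1 keys in the same order; generate_new_df, arg1, and arg2 are the stand-ins from the question:

import numpy as np
import pandas as pd

n_rows, n_cols = 3000, 1000  # assumed final shape: 3k rows, one float column per frame

# Allocate the whole float block once, so nothing is copied per iteration.
values = np.empty((n_rows, n_cols), dtype='float64')

first = generate_new_df(arg1, arg2)
keys = first['col1']                     # string key column from the first frame
values[:, 0] = first['col2'].to_numpy()

for i in range(1, n_cols):
    # Assumes identical, identically ordered keys in every frame;
    # otherwise each frame would need a reindex on col1 first.
    values[:, i] = generate_new_df(arg1, arg2)['col2'].to_numpy()

df = pd.DataFrame(values, index=keys)

This writes each new column into preallocated memory instead of rebuilding the entire frame on every iteration, which is where the growing per-iteration cost comes from.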
I also tried using pd.concat, but the results were very similar: the time still increases after each iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)
Results using pd.concat:
run 1: 0.34s
run 2: 0.34s
run 3: 0.32s
run 4: 0.33s
run 5: 0.42s
run 6: 0.41s
run 7: 0.45s
run 8: 0.46s
run 9: 0.54s
run 10: 0.58s
run 11: 0.73s
run 12: 0.72s
run 13: 0.79s
run 14: 0.87s
run 15: 0.95s
run 16: 1.06s
run 17: 1.19s
run 18: 1.24s
run 19: 1.37s
run 20: 1.57s
run 21: 1.68s
run 22: 1.93s
run 23: 1.86s
run 24: 1.96s
run 25: 2.11s
run 26: 2.32s
run 27: 2.42s
run 28: 2.57s
Using a dfList with pd.concat produces similar results. Here is the code & results:

dfList = []
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))
df = pd.concat(dfList, axis=1)
run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.
Answer 0 (score: 1)
It's not entirely clear what your problem is exactly, but I'll assume the main bottleneck is that you're trying to load lots of dataframes into a list all at once and are running into memory/paging issues. With that in mind, here's an approach that might help, but you'll have to test it yourself since I have no access to your generate_new_df function or your data.
The approach is to use a variant of the merge_with_concat function from this answer, merging smaller numbers of dataframes together first, and then merging those all at once at the end.
For example, if you have 1000 dataframes, you could merge 100 at a time to get 10 big dataframes, and then merge those 10 as a final step. This should ensure that at no point do you have a list of dataframes that is too large.
You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:
import pandas as pd

def chunk_dfs(file_names, chunk_size):
    """Yield lists of dataframes, chunk_size at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:  # yield the final, partially filled chunk too
        yield dfs

def merge_with_concat(dfs, col):
    # Align every frame on the join column, then stitch them together
    # side by side in a single concat.
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)
col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

# First merge chunk_size frames at a time, then merge the resulting
# chunk frames in one final pass.
merged = merge_with_concat(
    (merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)),
    col_name,
)
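Once merged is built, the per-row averaging mentioned at the top of the question reduces to one vectorized call. A small sketch, assuming the join column should sit in the index rather than be averaged:

# Per-row mean across all value columns, skipping NaNs introduced
# by the outer join.
row_means = merged.set_index(col_name).mean(axis=1)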