I am multiprocessing a pandas dataframe by splitting it into several smaller dataframes, which are stored in a list. Using Pool.map() I pass each dataframe to a defined function. My input file is about 300 MB, so each small dataframe is roughly 75 MB. Yet while the multiprocessing runs, total memory consumption grows by 7 GB, with each local process consuming about 1 to 2 GB of memory. Why does this happen?

My results come out fine, but the memory consumption seems very high for 75 MB inputs. Why is that? Is it a leak? What are the possible remedies?
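My working assumption (which may be wrong, hence this question) is that Pool.map() pickles each dataframe to ship it to a worker, the worker deserializes its own copy, and the returned dataframe travels back the same way, so each chunk exists in memory more than once. The standalone sketch below, using a synthetic dataframe of roughly the same size as my real chunks (not my actual data), at least shows how large one serialized copy is:

import pickle

import numpy as np
import pandas as pd

# synthetic stand-in for one ~75 MB chunk (my real data comes from a file)
df = pd.DataFrame(np.random.rand(1_000_000, 10))

in_mem_mb = df.memory_usage(deep=True).sum() / 1024 ** 2
pickled_mb = len(pickle.dumps(df, protocol=pickle.HIGHEST_PROTOCOL)) / 1024 ** 2
print('in-memory: %.1f MB, pickled: %.1f MB' % (in_mem_mb, pickled_mb))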
Here is the code; it prints memory usage from both the parent process and the workers:
import resource
from multiprocessing import Pool

import pandas as pd


def main():
    my_df = pd.read_table("my_file.txt", sep="\t")
    my_df = my_df.groupby('someCol')

    my_df_list = []
    for colID, colData in my_df:
        my_df_list.append(colData)

    # now, multiprocess each small dataframe individually
    p = Pool(3)
    result = p.map(process_df, my_df_list)
    p.close()
    p.join()

    print('Global maximum memory usage: %.2f (mb)' % current_mem_usage())

    result_merged = pd.concat(result)
    # write merged data to file


def process_df(my_df):
    my_new_df = my_df  # placeholder: "do something with my_df"
    print('\tWorker maximum memory usage: %.2f (mb)' % current_mem_usage())
    del my_df
    return my_new_df


# to monitor memory usage (ru_maxrss is in kilobytes on Linux)
def current_mem_usage():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.


if __name__ == '__main__':
    main()
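One caveat about the measurement itself (an assumption on my part, since platforms differ): RUSAGE_SELF in the parent does not account for the pool workers, and ru_maxrss is a high-water mark, reported in kilobytes on Linux but bytes on macOS. The resource module also provides RUSAGE_CHILDREN, which covers children that have already been waited for, so after p.join() one could report both numbers, as in this sketch:

import resource

def mem_usage_mb(who=resource.RUSAGE_SELF):
    # ru_maxrss is a high-water mark: kilobytes on Linux, bytes on macOS
    return resource.getrusage(who).ru_maxrss / 1024.

# after p.close() and p.join() in main(), one could print both numbers:
# print('parent:   %.2f (mb)' % mem_usage_mb())
# print('children: %.2f (mb)' % mem_usage_mb(resource.RUSAGE_CHILDREN))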