I am running into a problem using Python's multiprocessing to compute LCA results for all functional units in the ecoinvent v3.2 database over multiple iterations. The code looks like this:
for worker_id in range(CPUS):
    # Create child processes that can work apart from parent process
    child = mp.Process(target=worker_process, args=(projects.current, output_dir, worker_id, activities, ITERATIONS, status))
    workers.append(child)
    child.start()
print(workers)

while any(i.is_alive() for i in workers):
    time.sleep(0.1)
    while not status.empty():
        # Flush queue of progress reports
        worker, completed = status.get()
        progress[worker] = completed
    progbar.update(sum(progress.values()))
progbar.finish()
The worker_process function is defined as follows:
def worker_process(project, output_dir, worker_id, activities, iterations, progress_queue):
    # project is a string: the project name in Brightway2
    # output_dir is a string
    # worker_id is an integer
    # activities is a list of dictionaries
    # iterations is an integer
    # progress_queue is a Queue where we can report progress to parent process
    projects.set_current(project, writable=False)
    lca = DirectSolvingPVLCA(activities[0])
    lca.load_data()
    samples = np.empty((iterations, lca.params.shape[0]))
    supply_arrays = np.empty((iterations, len(activities), len(lca.product_dict)))

    for index in range(iterations):
        lca.rebuild_all()
        samples[index, :] = lca.sample
        lca.decompose_technosphere()
        for act_index, fu in enumerate(activities):
            lca.build_demand_array(fu)
            supply_arrays[index, act_index, :] = lca.solve_linear_system()
        progress_queue.put((worker_id, index))
The problems I observe are:

With more than two workers, all but two of them die immediately with a MemoryError (see below).
For the two surviving workers, the code seems to work fine for 10, 100, or 5,000 functional units, but when we ask for all FUs, it breaks down and runs into the same MemoryError.

The MemoryError appears for each process X:
Process Process-X:
Traceback (most recent call last):
  File "C:\bw2-python\envs\bw2\lib\multiprocessing\process.py", line 254, in _bootstrap
    self.run()
  File "C:\bw2-python\envs\bw2\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\test\Do all the calculations.py", line 49, in worker_process
    supply_arrays = np.empty((iterations, len(activities), len(lca.product_dict)))
MemoryError
My questions are:

Why does this happen?
How can it be solved?
Answer (score: 1):
You are running out of memory because you are using too much of it.

When you allocate a new array with

np.empty((iterations, len(activities), len(lca.product_dict)))

and activities and lca.product_dict each have a length of 10,000, you use 10,000 * 10,000 * 8 bytes (the default float is 64-bit, i.e. 8 bytes) = 800 MB of RAM per iteration and per worker process.
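A quick back-of-the-envelope check of that figure (the 10,000 lengths are the assumption stated above, not values taken from the question):

import numpy as np

n_activities = 10000  # assumed len(activities)
n_products = 10000    # assumed len(lca.product_dict)
bytes_per_value = np.dtype(np.float64).itemsize  # 8 bytes for the default float

per_iteration_mb = n_activities * n_products * bytes_per_value / 1e6
print(per_iteration_mb)  # 800.0 MB per iteration, per worker process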
One simple solution would be to work on a server with a lot of RAM.

Alternatives to creating these large arrays in memory include memory-mapped arrays on disk (numpy's memmap). In either case, you will need to carefully test the most efficient way of writing and reading the data for your particular workflow and operating system.
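As a minimal sketch, the allocation inside worker_process could be replaced with a file-backed array of the same shape; the file name and the reuse of output_dir here are illustrative assumptions, not part of the original code:

import os
import numpy as np

# Back the results array with a file on disk instead of RAM.
supply_arrays = np.memmap(
    os.path.join(output_dir, "supply_{}.dat".format(worker_id)),  # hypothetical file name
    dtype=np.float64,
    mode="w+",  # create the file (or overwrite it) and open it read/write
    shape=(iterations, len(activities), len(lca.product_dict)),
)

# ... fill supply_arrays[index, act_index, :] exactly as in the question ...

supply_arrays.flush()  # make sure everything is actually written to disk

Writes to a memmap go through the operating system's page cache, so each worker only needs to keep a small working set in RAM at any one time, at the cost of disk I/O.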