I have to create and fill a huge array (e.g. 96 GB, 72,000 rows × 72,000 columns) with floats, each one coming from a mathematical formula. The array will be processed afterwards.
import itertools, operator, time, copy, os, sys
import numpy
from multiprocessing import Pool

def f2(x):  # more complex mathematical formulas that change according to values in *i* and *x*
    temp = []
    for i in combine:
        temp.append(0.2 * x[1] * i[1] / 64.23)
    return temp

def combinations_with_replacement_counts(n, r):  # provide all combinations of r balls in n boxes
    size = n + r - 1
    for indices in itertools.combinations(range(size), n - 1):
        starts = [0] + [index + 1 for index in indices]
        stops = indices + (size,)
        yield tuple(map(operator.sub, stops, starts))

global combine
combine = list(combinations_with_replacement_counts(3, 60))  # 60 here, but 350 is needed instead
print(len(combine))

if __name__ == '__main__':
    t1 = time.time()
    pool = Pool()  # start worker processes
    results = [pool.apply_async(f2, (x,)) for x in combine]
    roots = [r.get() for r in results]
    print(roots[0:3])
    pool.close()
    pool.join()
    print(time.time() - t1)
Answer 0 (score: 1)
I know you can create shared numpy arrays that can be changed from different processes (provided the regions being changed do not overlap). Here is a sketch of the code you can use (I saw the original idea on stackoverflow, edit: here it is https://stackoverflow.com/a/5550156/1269140)
import multiprocessing as mp, numpy as np, ctypes

def shared_zeros(n1, n2):
    # create a 2D numpy array which can then be changed from different processes
    shared_array_base = mp.Array(ctypes.c_double, n1 * n2)
    shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
    shared_array = shared_array.reshape(n1, n2)
    return shared_array

class singleton:
    arr = None

def dosomething(i):
    # do something with singleton.arr
    singleton.arr[i, :] = i
    return i

def main():
    singleton.arr = shared_zeros(1000, 1000)
    pool = mp.Pool(16)
    pool.map(dosomething, range(1000))

if __name__ == '__main__':
    main()
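Adapting this pattern to the question is straightforward; the following is only a hedged sketch, not part of the original answer. Each worker fills one row of the shared array with the question's placeholder formula, so nothing large has to be returned through the pool. The small combine list stands in for the real one, and the sketch assumes a fork-based start method (e.g. Linux) so that workers inherit singleton.arr.

import multiprocessing as mp, numpy as np, ctypes

def shared_zeros(n1, n2):
    shared_array_base = mp.Array(ctypes.c_double, n1 * n2)
    return np.ctypeslib.as_array(shared_array_base.get_obj()).reshape(n1, n2)

# stand-in for the question's combine list (all triples summing to 60)
combine = [(a, b, 60 - a - b) for a in range(61) for b in range(61 - a)]

class singleton:
    arr = None

def fill_row(row):
    x = combine[row]
    # each worker writes one full row; rows do not overlap, so no locking is needed
    singleton.arr[row, :] = [0.2 * x[1] * i[1] / 64.23 for i in combine]
    return row

if __name__ == '__main__':
    singleton.arr = shared_zeros(len(combine), len(combine))
    pool = mp.Pool()
    pool.map(fill_row, range(len(combine)))
    pool.close()
    pool.join()
    print(singleton.arr[0:3, 0:3])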
Answer 1 (score: 0)
You can create an empty numpy.memmap array with the desired shape and then fill its values with multiprocessing.Pool. Doing it correctly also keeps the memory footprint of each process in the pool relatively small.
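A minimal sketch of that approach follows; the file name results.dat, the reduced shape, and the per-row formula are assumptions for illustration, not from the answer. The file is created once with the full shape, then each worker reopens it in r+ mode and writes a single row, so only the pages it actually touches end up in that worker's memory.

import numpy as np
from multiprocessing import Pool

N = 1000                    # 72000 in the real case
FILENAME = 'results.dat'    # hypothetical file name

def fill_row(i):
    # each worker maps the same file on disk and writes one row
    arr = np.memmap(FILENAME, dtype=np.float64, mode='r+', shape=(N, N))
    arr[i, :] = 0.2 * i / 64.23   # placeholder for the real formula
    arr.flush()
    return i

if __name__ == '__main__':
    # create the file once with the full shape; mode='w+' allocates it on disk
    np.memmap(FILENAME, dtype=np.float64, mode='w+', shape=(N, N)).flush()
    pool = Pool()
    pool.map(fill_row, range(N))
    pool.close()
    pool.join()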