I want to integrate a system of differential equations with many different parameter combinations and store the final values of the variables for each combination. So I implemented a simple for loop in which random initial conditions and parameter combinations are created, the system is integrated, and the values of interest are stored in the respective arrays. Since I intend to do this for many parameter combinations of a fairly complicated system (here I only use a toy system for illustration), which can also become stiff, I would like to parallelize the simulations to speed things up, using Python's multiprocessing module.
However, when I run the simulations, the for loop is always faster than its parallelized version. The only approach I have found so far that beats the for loop is using apply_async instead of apply. For 10 different parameter combinations, I get the following output (with the code below):
The for loop took 0.11986207962 seconds!
[ 41.75971761 48.06034375 38.74134139 25.6022232 46.48436046
46.34952734 50.9073202 48.26035086 50.05026187 41.79483135]
Using apply took 0.180637836456 seconds!
41.7597176061
48.0603437545
38.7413413879
25.6022231983
46.4843604574
46.3495273394
50.9073202011
48.2603508573
50.0502618731
41.7948313502
Using apply_async took 0.000414133071899 seconds!
41.7597176061
48.0603437545
38.7413413879
25.6022231983
46.4843604574
46.3495273394
50.9073202011
48.2603508573
50.0502618731
41.7948313502
Although the results of apply and apply_async come out in the same order in this example, that does not seem to hold in general. So I would like to use apply_async because it is faster, but in that case I don't know how to match the results of a simulation to the parameters/initial conditions I used for that particular simulation.
My questions are:
1) Why is apply so much slower than a simple for loop in this case?
2) When I use apply_async instead of apply, the parallelized version becomes much faster than the for loop, but how can I match the simulation results to the parameters I used in the corresponding simulation?
3) In this case, the results of apply and apply_async have the same order. Why is that? Coincidence?
My code can be found below:
from pylab import *
import multiprocessing as mp
from scipy.integrate import odeint
import time
#my system of differential equations
def myODE(yn, tvec, allpara):
    (x, y, z) = yn
    a, b = allpara['para']
    dx = -x + a*y + x*x*y
    dy = b - a*y - x*x*y
    dz = x*y
    return (dx, dy, dz)
#for reproducibility
seed(0)
#time settings for integration
dt = 0.01
tmax = 50
tval = arange(0,tmax,dt)
numVar = 3 #number of variables (x, y, z)
numPar = 2 #number of parameters (a, b)
numComb = 10 #number of parameter combinations
INIT = zeros((numComb,numVar)) #initial conditions will be stored here
PARA = zeros((numComb,numPar)) #parameter combinations for a and b will be stored here
RES = zeros(numComb) #z(tmax) will be stored here
tic = time.time()
for combi in range(numComb):
    INIT[combi,:] = append(10*rand(2),0) #initial conditions for x and y are randomly chosen, z is 0
    PARA[combi,:] = 10*rand(2) #parameters a and b are chosen randomly
    allpara = {'para': PARA[combi,:]}
    results = transpose(odeint(myODE, INIT[combi,:], tval, args=(allpara,))) #integrate system
    RES[combi] = results[numVar - 1][-1] #store z
    #INIT[combi,:] = results[:,-1] #update initial conditions
    #INIT[combi,-1] = 0 #set z to 0
toc = time.time()
print 'The for loop took ', toc-tic, 'seconds!'
print RES
#function for the multi-processing part
def runMyODE(yn, tvec, allpara):
    return transpose(odeint(myODE, yn, tvec, args=(allpara,)))
tic = time.time()
pool = mp.Pool(processes=4)
results = [pool.apply(runMyODE, args=(INIT[combi,:],tval,{'para': PARA[combi,:]})) for combi in range(numComb)]
toc = time.time()
print 'Using apply took ', toc-tic, 'seconds!'
for sol in range(numComb):
    print results[sol][2,-1] #print final value of z
tic = time.time()
resultsAsync = [pool.apply_async(runMyODE, args=(INIT[combi,:],tval,{'para': PARA[combi,:]})) for combi in range(numComb)]
toc = time.time()
print 'Using apply_async took ', toc-tic, 'seconds!'
for sol in range(numComb):
    print resultsAsync[sol].get()[2,-1] #print final value of z
Answer 0 (score: 2)
Note that apply_async appears to be 289 times faster than the for loop, which is a bit suspicious! And right now, you are guaranteed to get the results in the order in which they were submitted, even though that is not what you want for maximum parallelism.
apply_async starts the task; it does not wait for it to complete. .get() is what does that. So this:
tic = time.time()
resultsAsync = [pool.apply_async(runMyODE, args=(INIT[combi,:],tval,{'para': PARA[combi,:]})) for combi in range(numComb)]
toc = time.time()
is not a very fair measurement; you have submitted all the tasks, but they have not completed yet.
On the other hand, once you .get() a result, you know the task has finished and you have the answer; so doing
for sol in range(numComb):
print resultsAsync[sol].get()[2,-1] #print final value of z
definitely means your results are in order (because you walk through the ApplyResult objects in order and .get() each of them); but you would probably rather have the results as soon as they are ready, instead of blocking on them one step at a time. That, however, means you need to label the results with their parameters in some way.
You can use a callback to save the results once a task completes, and return the parameters together with the result, to allow fully asynchronous returns:
def runMyODE(yn, tvec, allpara):
    return allpara['para'], transpose(odeint(myODE, yn, tvec, args=(allpara,)))

asyncResults = []

def saveResult(result):
    asyncResults.append((result[0], result[1][2,-1]))

tic = time.time()
for combi in range(numComb):
    pool.apply_async(runMyODE, args=(INIT[combi,:],tval,{'para': PARA[combi,:]}), callback=saveResult)
pool.close()
pool.join()
toc = time.time()
print 'Using apply_async took ', toc-tic, 'seconds!'
for res in asyncResults:
    print res[0], res[1]
This gives you a more reasonable time; the results are still almost always in order, because the tasks take very similar amounts of time:
Using apply took 0.0847041606903 seconds!
[ 6.02763376 5.44883183] 41.7597176061
[ 4.37587211 8.91773001] 48.0603437545
[ 7.91725038 5.2889492 ] 38.7413413879
[ 0.71036058 0.871293 ] 25.6022231983
[ 7.78156751 8.70012148] 46.4843604574
[ 4.61479362 7.80529176] 46.3495273394
[ 1.43353287 9.44668917] 50.9073202011
[ 2.64555612 7.74233689] 48.2603508573
[ 0.187898 6.17635497] 50.0502618731
[ 9.43748079 6.81820299] 41.7948313502
Using apply_async took 0.0259671211243 seconds!
[ 4.37587211 8.91773001] 48.0603437545
[ 0.71036058 0.871293 ] 25.6022231983
[ 6.02763376 5.44883183] 41.7597176061
[ 7.91725038 5.2889492 ] 38.7413413879
[ 7.78156751 8.70012148] 46.4843604574
[ 4.61479362 7.80529176] 46.3495273394
[ 1.43353287 9.44668917] 50.9073202011
[ 2.64555612 7.74233689] 48.2603508573
[ 0.187898 6.17635497] 50.0502618731
[ 9.43748079 6.81820299] 41.7948313502
Note that instead of looping over apply, you could use map:
pool.map_async(lambda combi: runMyODE(INIT[combi,:], tval, {'para': PARA[combi,:]}), range(numComb), callback=saveResult)