Using multiprocessing, I am trying to parallelise a function, but I see no performance improvement:
from MMTK import *
from MMTK.Trajectory import Trajectory, TrajectoryOutput, SnapshotGenerator
from MMTK.Proteins import Protein, PeptideChain
import numpy as np
import time

filename = 'traj_prot_nojump.nc'
trajectory = Trajectory(None, filename)

def calpha_2dmap_mult(trajectory=trajectory, t=range(0, len(trajectory))):
    dist = []
    universe = trajectory.universe
    proteins = universe.objectList(Protein)
    chain = proteins[0][0]
    traj = trajectory[t]
    dt = 1000  # calculate the distances every 1000 steps
    for n, step in enumerate(traj):
        if n % dt == 0:
            universe.setConfiguration(step['configuration'])
            for i in np.arange(len(chain)-1):
                for j in np.arange(len(chain)-1):
                    dist.append(universe.distance(chain[i].peptide.C_alpha,
                                                  chain[j].peptide.C_alpha))
    return dist
c0 = time.time()
dist1 = calpha_2dmap_mult(trajectory, range(0,11001))
c1 = time.time() - c0
print(c1)
# Multiprocessing
from multiprocessing import Pool, cpu_count

pool = Pool(processes=4)
c0 = time.time()
dist_pool = [pool.apply(calpha_2dmap_mult, args=(trajectory, t))
             for t in [range(0, 2001), range(3000, 5001),
                       range(6000, 8001), range(9000, 11001)]]
c1 = time.time() - c0
print(c1)
The time spent calculating the distances is "the same" without (70.1 s) and with multiprocessing (70.2 s)! I was perhaps not expecting a factor-of-4 improvement, but I was at least expecting some improvement! Does anyone know what I am doing wrong?
Answer 0 (score: 4)
Pool.apply is a blocking operation:

    Pool.apply is equivalent to the apply() built-in function. It blocks until the result is ready, so apply_async() is better suited for performing work in parallel.

In this case, Pool.map is probably better suited to collecting the results; the map itself blocks, but the sequence elements/transformations are processed in parallel.
Besides using partial application (or a manual implementation of the same), consider expanding the data itself; it's the same cat in a different skin.
data = ((trajectory, r) for r in [range(0,2001), ..])
result = pool.map(.., data)
This in turn can be expanded:
def apply_data(d):
    return calpha_2dmap_mult(*d)

result = pool.map(apply_data, data)
The function (or a similar trivial argument-expanding proxy) needs to be written to accept a single argument, but all of the data is now mapped over as a single unit.