Using Python Process

Time: 2017-04-10 08:04:07

Tags: linux windows python-3.x process multiprocessing

I am trying to speed up some code that should run quickly on both Linux and Windows. However, the same code takes 131 seconds on Fedora 25 but only 90 seconds on Windows 7 (the two machines have 8 GB of RAM each, with an i7 and an i5 processor respectively). I am using Python 3.5 on Fedora and Python 3.6 on Windows.

The code is as follows:

from math import ceil
from multiprocessing import Process, Queue, cpu_count

import numpy as np

nprocs = cpu_count()
chunksize = ceil(nrFrames / nprocs)
queue = Queue()
jobs = []

# Split the frames into one contiguous chunk per process and start a worker for each.
for i in range(nprocs):
    start = chunksize * i
    if i == nprocs - 1:
        end = nrFrames
    else:
        end = chunksize * (i + 1)
    trjCoordsProcess = DAH_Coords[start:end]
    p = Process(target=is_hbond, args=(queue, trjCoordsProcess, distCutOff,
                                       angleCutOff, AList, DList, HList))
    p.start()
    jobs.append(p)

# Sum the per-process frequency matrices as they arrive on the queue.
HbondFreqMatrix = queue.get()
for k in range(nprocs - 1):
    HbondFreqMatrix = np.add(HbondFreqMatrix, queue.get())

# All results have been drained from the queue, so the children can be joined safely.
for proc in jobs:
    proc.join()


def is_hbond(queue, processCoords, distCutOff, angleCutOff,
             possibleAPosList, donorsList, HCovBoundPosList):

    for frame in range(len(processCoords)):
        # do stuff (accumulates HbondProcessFreqMatrix for this chunk of frames)

    # Send this process's partial frequency matrix back to the parent.
    queue.put(HbondProcessFreqMatrix)
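
For comparison, the same split-and-sum pattern can also be written with multiprocessing.Pool. The snippet below is only a rough sketch with a dummy worker standing in for is_hbond, to show the structure I have in mind:

from math import ceil
from multiprocessing import Pool, cpu_count

import numpy as np

def chunk_worker(chunk):
    # Placeholder for the per-chunk work; returns a partial frequency matrix.
    return np.zeros((3, 3)) + len(chunk)

if __name__ == "__main__":
    frames = list(range(100))                # stand-in for DAH_Coords
    nprocs = cpu_count()
    chunksize = ceil(len(frames) / nprocs)
    chunks = [frames[i:i + chunksize] for i in range(0, len(frames), chunksize)]

    with Pool(nprocs) as pool:
        partials = pool.map(chunk_worker, chunks)

    total = np.sum(partials, axis=0)         # plays the role of HbondFreqMatrix
    print(total)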

Starting each process is actually much faster on Linux than on Windows. However, each iteration inside the is_hbond function takes about 2.5 times as long on Linux (0.5 s vs 0.2 s).
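
A minimal sketch of how a per-frame average like the 0.5 s / 0.2 s figures can be measured with time.perf_counter (not my actual measurement code; the hydrogen-bond work itself is elided):

import time

def mean_frame_time(processCoords):
    # Times each loop iteration and returns the average seconds per frame.
    frame_times = []
    for frame in range(len(processCoords)):
        t0 = time.perf_counter()
        # ... per-frame hydrogen-bond work goes here ...
        frame_times.append(time.perf_counter() - t0)
    return sum(frame_times) / len(frame_times)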

The profiler gives the following information, first on Windows:

Ordered by: cumulative time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.167    0.167   84.139   84.139  calculateHbonds
    4    0.000    0.000   52.039   13.010  \Python36\lib\multiprocessing\queues.py:91(get)
    4    0.000    0.000   51.928   12.982  \Python36\lib\multiprocessing\connection.py:208(recv_bytes)
    4    0.018    0.004   51.928   12.982  \Python36\lib\multiprocessing\connection.py:294(_recv_bytes)
    4   51.713   12.928   51.713   12.928  {built-in method _winapi.WaitForMultipleObjects}
    4    0.000    0.000   30.811    7.703  \Python36\lib\multiprocessing\process.py:95(start)
    4    0.000    0.000   30.811    7.703  \Python36\lib\multiprocessing\context.py:221(_Popen)
    4    0.000    0.000   30.811    7.703  \Python36\lib\multiprocessing\context.py:319(_Popen)
    4    0.000    0.000   30.809    7.702  popen_spawn_win32.py:32(__init__)
    8    1.958    0.245   30.804    3.851  \Python36\lib\multiprocessing\reduction.py:58(dump)
    8   28.846    3.606   28.846    3.606   {method 'dump' of '_pickle.Pickler' objects}

and on Linux:

Ordered by: cumulative time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.203    0.203  123.169  123.169  calculateHbonds
    4    0.000    0.000  121.450   30.362  /python3.5/multiprocessing/queues.py:91(get)
    4    0.000    0.000  121.300   30.325  /python3.5/multiprocessing/connection.py:208(recv_bytes)
    4    0.019    0.005  121.300   30.325  /python3.5/multiprocessing/connection.py:406(_recv_bytes)
    8    0.000    0.000  121.281   15.160  /python3.5/multiprocessing/connection.py:374(_recv)
    8  121.088   15.136  121.088   15.136  {built-in method posix.read}

    1    0.000    0.000    0.082    0.082  /python3.5/multiprocessing/context.py:98(Queue)
 17/4    0.000    0.000    0.082    0.021  <frozen importlib._bootstrap>:939(_find_and_load_unlocked)
 16/4    0.000    0.000    0.082    0.020  <frozen importlib._bootstrap>:659(_load_unlocked)
    4    0.000    0.000    0.052    0.013  /python3.5/multiprocessing/process.py:95(start)
    4    0.000    0.000    0.052    0.013  /python3.5/multiprocessing/context.py:210(_Popen)
    4    0.000    0.000    0.052    0.013  /python3.5/multiprocessing/context.py:264(_Popen)
    4    0.000    0.000    0.051    0.013  /python3.5/multiprocessing/popen_fork.py:16(__init__)
    4    0.000    0.000    0.051    0.013  /python3.5/multiprocessing/popen_fork.py:64(_launch)
    4    0.050    0.013    0.050    0.013  {built-in method posix.fork}
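
(For reference, output like the two listings above can be collected and sorted with cProfile and pstats; a minimal sketch around the calculateHbonds entry point that appears in the profiles:)

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
calculateHbonds()   # the parent-side driver that appears in the profiles above
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)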

Is there a likely reason for this? I know that the multiprocessing module works differently on Linux and Windows because Windows lacks os.fork, but I would have expected Linux to be faster. Any ideas on how to speed it up on Linux? Thanks!
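
Since Linux defaults to the fork start method and Windows uses spawn, one experiment would be to force spawn on Linux as well and compare timings; a sketch, assuming the Process/Queue code above is wrapped in a hypothetical main() function:

import multiprocessing as mp

if __name__ == "__main__":
    # Force the Windows-style start method on Linux to see whether the
    # fork/spawn difference accounts for the timing gap.
    mp.set_start_method("spawn")
    main()  # hypothetical wrapper around the Process/Queue code above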

0 Answers:

No answers yet.