Python3 multiprocessing shared objects

Time: 2014-07-28 12:52:55

Tags: python multiprocessing shared-memory python-3.2

While using the multiprocessing module in Python 3.2.3 (on Debian 7.5) I stumbled upon a synchronization problem with shared objects. I put together this simple example to illustrate the problem; it works roughly like multiprocessing.Pool.map (the simplest thing I could come up with). I use multiprocessing.Manager because my original code uses it (for synchronization over a network), but the behavior is the same if I use a plain multiprocessing.Value for the counter variable.

import os as os
import sys as sys
import multiprocessing as mp

def mp_map(function, obj_list, num_workers):
    """ 
    """
    mang = mp.Manager()
    jobq = mang.Queue()
    resq = mang.Queue()
    counter = mp.Value('i', num_workers, lock=True)
    finished = mang.Event()
    processes = []
    try:
        for i in range(num_workers):
            p = mp.Process(target=_parallel_execute, kwargs={'execfun':function, 'jobq':jobq, 'resq':resq, 'counter':counter, 'finished':finished})
            p.start()
            p.join(0)
            processes.append(p)
        for item in obj_list:
            jobq.put(item)
        for i in range(len(processes)):
            jobq.put('SENTINEL')
        finished.wait()
        for p in processes:
            if p.is_alive():
                p.join(1)
                p.terminate()
    except Exception as e:
        for p in processes:
            p.terminate()
        raise e
    results = []
    for item in iter(resq.get, 'DONE'):
        results.append(item)
    return results

def _parallel_execute(execfun, jobq, resq, counter, finished):
    """
    """
    for item in iter(jobq.get, 'SENTINEL'):
        item = execfun(item)
        resq.put(item)
    counter.value -= 1
    print('C: {}'.format(counter.value))
    if counter.value <= 0:
        resq.put('DONE')
        finished.set()
    return


if __name__ == '__main__':
    l = list(range(50))
    l = mp_map(id, l, 2)
    print('done')
    sys.exit(0)

Running the above code a few times produces the following output:

wks:~$ python3 mpmap.py 
C: 1
C: 0
done
wks:~$ python3 mpmap.py 
C: 1
C: 0
done
wks:~$ python3 mpmap.py 
C: 1
C: 1
Traceback (most recent call last):
  File "mpmap.py", line 55, in <module>
    l = mp_map(id, l, 2)
  File "mpmap.py", line 25, in mp_map
    finished.wait()
  File "/usr/lib/python3.2/multiprocessing/managers.py", line 1013, in wait
    return self._callmethod('wait', (timeout,))
  File "/usr/lib/python3.2/multiprocessing/managers.py", line 762, in _callmethod
    kind, result = conn.recv()
KeyboardInterrupt

From the documentation of the multiprocessing module I don't understand why the counter is not process-safe, since it is clearly initialized with lock=True via multiprocessing.[Manager].Value. Since the deadlock only occurs occasionally, I'm not sure how to interpret this behavior. Any helpful insight is greatly appreciated, thanks.

Edit: After some googling I found an explanation; I'll share it here in case anyone else is interested. According to the blog entry linked below [1], the locking that Python performs when lock=True is used does not make operations on the shared value atomic, such as the counter decrement in my example. The solution is a separate lock, shared between the processes, that is used to guard access to the shared object.

[1] http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing/
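
A minimal sketch of that separate-lock approach (my own example, with hypothetical names such as _worker; not part of the original code): the read-modify-write on the shared value is wrapped in an explicit multiprocessing.Lock that all workers share.

import multiprocessing as mp

def _worker(counter, lock):
    for _ in range(1000):
        # lock=True on the Value alone does not make "counter.value += 1" atomic;
        # the whole read-modify-write has to happen while holding the shared lock.
        with lock:
            counter.value += 1

if __name__ == '__main__':
    counter = mp.Value('i', 0)   # lock=True is the default
    lock = mp.Lock()             # separate lock shared by all workers
    procs = [mp.Process(target=_worker, args=(counter, lock)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)         # reliably 4000 when the explicit lock is used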

1 Answer:

Answer 0 (score: 0):

As Ross said, I repeat the answer here: in short, lock=True on a multiprocessing.Value does not make, for example, an increment (or decrement) of the value an atomic operation; a separate lock that wraps the whole operation is needed. For a code example, see this answer https://stackoverflow.com/a/1233363/3826372 or the blog entry mentioned above, http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing
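
For the counter in the question (a plain multiprocessing.Value created with lock=True), the Value's own internal lock can also be reused via get_lock() instead of creating a second Lock; note that this does not apply to a Manager().Value proxy, which has no get_lock(). A small self-contained sketch (names like _decrement are mine, not from the question):

import multiprocessing as mp

def _decrement(counter, finished):
    # get_lock() returns the lock that backs the Value (created because lock=True);
    # holding it makes the read-decrement-test sequence atomic across processes.
    with counter.get_lock():
        counter.value -= 1
        last = counter.value <= 0
    if last:
        finished.set()

if __name__ == '__main__':
    num_workers = 2
    counter = mp.Value('i', num_workers, lock=True)
    finished = mp.Event()
    procs = [mp.Process(target=_decrement, args=(counter, finished))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    finished.wait()   # guaranteed to be set by exactly one (the last) worker
    for p in procs:
        p.join()
    print('counter ended at', counter.value)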