Python multiprocessing lock mechanism fails when acquiring the lock

Date: 2014-01-29 01:33:26

Tags: python concurrency locking multiprocessing python-multithreading

I am trying to implement a multiprocessing application whose worker processes access a shared data resource, using a lock to make sure the resource is accessed safely. However, I am running into errors. Surprisingly, if process 1 acquires the lock first, it services its request, and then the next process that tries to acquire the lock fails. But if some process other than 1 tries to acquire the lock first, it fails on its very first attempt. I am new to Python and followed the documentation to implement this, so I don't know whether I am missing some basic safety mechanism. Any pointers as to why I am seeing this would be very helpful.

The program:

#!/usr/bin/python
from multiprocessing import Process, Manager, Lock
import os
import Queue
import time
lock = Lock()
def launch_worker(d,l,index):
    global lock
    lock.acquire()
    d[index] = "new"
    print "in process"+str(index)
    print d
    lock.release()
    return None

def dispatcher():
    i=1
    d={}
    mp = Manager()
    d = mp.dict()
    d[1] = "a"
    d[2] = "b"
    d[3] = "c"
    d[4] = "d"
    d[5] = "e"
    l = mp.list(range(10))
    for i in range(4):
        p = Process(target=launch_worker, args=(d,l,i))
        i = i+1
        p.start()
    return None

if __name__ == '__main__':
    dispatcher()

Error when process 1 runs first:

in process0
{0: 'new', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}
Process Process-3:
Traceback (most recent call last):
  File "/usr/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/usr/lib/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "dispatcher.py", line 10, in launch_worker
    d[index] = "new"
  File "<string>", line 2, in __setitem__
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 722, in _callmethod
    self._connect()
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 709, in _connect
    conn = self._Client(self._token.address, authkey=self._authkey)
  File "/usr/lib/python2.6/multiprocessing/connection.py", line 143, in Client
    c = SocketClient(address)
  File "/usr/lib/python2.6/multiprocessing/connection.py", line 263, in SocketClient
    s.connect(address)
  File "<string>", line 1, in connect
error: [Errno 2] No such file or directory

Error when process 2 runs first:

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/usr/lib/python2.6/multiprocessing/process.py", line 88, in run
    self._target(*self._args, **self._kwargs)
  File "dispatcher.py", line 10, in launch_worker
    d[index] = "new"
  File "<string>", line 2, in __setitem__
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 722, in _callmethod
    self._connect()
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 709, in _connect
    conn = self._Client(self._token.address, authkey=self._authkey)
  File "/usr/lib/python2.6/multiprocessing/connection.py", line 150, in Client
    deliver_challenge(c, authkey)
  File "/usr/lib/python2.6/multiprocessing/connection.py", line 373, in deliver_challenge
    response = connection.recv_bytes(256)        # reject large message
IOError: [Errno 104] Connection reset by peer

1 Answer:

Answer 0 (score: 1)

The dictionary your workers modify is a shared object managed by the dispatching process; any modification a worker makes to that object requires it to communicate with the dispatching process. The errors you see come from the fact that your dispatcher does not wait for the worker processes after starting them: it exits too quickly, so the manager may no longer exist when the workers need to talk to it.

The first worker or two that try to update the shared dictionary may succeed, because the process holding the Manager instance may still be alive when they make their modification (for example, it may still be busy creating further workers). That is why you see some successful output in your example. But the manager process exits soon afterwards, and the next worker that attempts a modification fails. (The error messages you are seeing are typical of failed interprocess communication attempts; if you run it a few more times you will probably also see EOF errors.)
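The failure mode can be reproduced directly, without any workers. The minimal sketch below (Python 3; the original question uses Python 2.6, but the mechanism is the same) creates a managed dict, then calls `shutdown()` on the manager to simulate the dispatcher exiting early, and shows that the next proxy call raises an interprocess-communication error. The helper name `probe_after_shutdown` is invented for this illustration.

```python
import multiprocessing as mp

def probe_after_shutdown():
    """Return the name of the exception raised when a dict proxy is
    used after its manager process has gone away (None if no error)."""
    m = mp.Manager()
    d = m.dict()
    d[1] = "a"        # works: the manager process is still alive
    m.shutdown()      # simulate the dispatcher exiting before the workers finish
    try:
        d[2] = "b"    # the proxy must talk to the now-dead manager process
    except Exception as exc:
        return type(exc).__name__
    return None

if __name__ == "__main__":
    # The exact exception varies by platform (e.g. BrokenPipeError, EOFError),
    # just as the question saw both Errno 2 and Errno 104 depending on timing.
    print(probe_after_shutdown())
```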

What you need to do is call the join method on the Process objects, to wait for each of them to exit. The following modification of dispatcher shows the basic idea:

def dispatcher():
    mp = Manager()
    d = mp.dict()
    d[1] = "a"
    d[2] = "b"
    d[3] = "c"
    d[4] = "d"
    d[5] = "e"
    l = mp.list(range(10))
    procs = []
    for i in range(4):
        p = Process(target=launch_worker, args=(d,l,i))
        procs.append(p)
        p.start()
    for p in procs:
        p.join()    # wait for every worker before the manager goes away
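For completeness, here is a self-contained Python 3 sketch of the whole corrected program. Two adjustments beyond the answer above are assumptions of this sketch, not part of the original: the unused list argument is dropped, and the lock is created by the manager and passed to the worker explicitly instead of being a module-level global (which is more portable across start methods).

```python
import multiprocessing as mp

def launch_worker(d, lock, index):
    # Serialize access to the shared dict with the lock passed in as an argument.
    with lock:
        d[index] = "new"
        print("in process", index)

def dispatcher():
    manager = mp.Manager()
    d = manager.dict({1: "a", 2: "b", 3: "c", 4: "d", 5: "e"})
    lock = manager.Lock()
    procs = [mp.Process(target=launch_worker, args=(d, lock, i))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()          # keep the manager alive until every worker is done
    return dict(d)        # copy out of the proxy before the manager exits

if __name__ == "__main__":
    print(dispatcher())
```

Because the dispatcher now joins every worker before returning, each of the four workers (indices 0 through 3) reliably writes its "new" entry, and keys 4 and 5 keep their initial values.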