Python multiprocessing pickling / manager / misc errors (from PMOTW)

Date: 2017-09-14 20:55:49

Tags: python multiprocessing pickle

I'm having some trouble running the following code in Eclipse on Windows. The code is from Doug Hellman:

import random
import multiprocessing
import time


class ActivePool:

    def __init__(self):
        super(ActivePool, self).__init__()
        self.mgr = multiprocessing.Manager()
        self.active = self.mgr.list()
        self.lock = multiprocessing.Lock()

    def makeActive(self, name):
        with self.lock:
            self.active.append(name)

    def makeInactive(self, name):
        with self.lock:
            self.active.remove(name)

    def __str__(self):
        with self.lock:
            return str(self.active)


def worker(s, pool):
    name = multiprocessing.current_process().name
    with s:
        pool.makeActive(name)
        print('Activating {} now running {}'.format(
            name, pool))
        time.sleep(random.random())
        pool.makeInactive(name)


if __name__ == '__main__':
    pool = ActivePool()
    s = multiprocessing.Semaphore(3)
    jobs = [
        multiprocessing.Process(
            target=worker,
            name=str(i),
            args=(s, pool),
        )
        for i in range(10)
    ]

    for j in jobs:
        j.start()

    for j in jobs:
        j.join()
        print('Now running: %s' % str(pool))

I get the following error, which I believe is due to some pickling problem when passing pool as an argument to Process.

Traceback (most recent call last):
  File "E:\Eclipse_Workspace\CodeExamples\FromCodes\CodeTest.py", line 50, in <module>
    j.start()
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 939, in reduce_pipe_connection
    dh = reduction.DupHandle(conn.fileno(), access)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 170, in fileno
    self._check_closed()
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\reduction.py", line 87, in steal_handle
    _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
PermissionError: [WinError 5] Access is denied

The answer to a similar question seems to suggest that I initialize pool at the top level with a function call, but I don't see how to apply that to this example. Should I initialize ActivePool inside worker? That seems to defeat the spirit of Hellman's example.

Another answer suggests using __getstate__ and __setstate__ to remove the unpicklable objects and rebuild them when unpickling, but I don't know a good way to do that with the Manager's proxy objects, and I don't actually know which object is unpicklable.
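
(For what it's worth, I imagine the __getstate__/__setstate__ approach would look roughly like the sketch below: drop the Manager itself from the pickled state and keep only the list proxy and the lock, which multiprocessing knows how to send to a spawned child. I haven't verified this variant on Windows, and the class name is just a placeholder.)

import multiprocessing


class PicklableActivePool:  # hypothetical variant of ActivePool
    def __init__(self):
        self.mgr = multiprocessing.Manager()
        self.active = self.mgr.list()
        self.lock = multiprocessing.Lock()

    def __getstate__(self):
        # Exclude the Manager from pickling; the list proxy reconnects to
        # the manager server on its own when it is unpickled in the child.
        state = self.__dict__.copy()
        del state['mgr']
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)

    # makeActive, makeInactive and __str__ as in the original class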

Is there a way to make this example work with minimal changes? I'd really like to understand what's going on behind the scenes. Thanks!

Edit - Solution:

In hindsight, the pickling problem is pretty obvious. ActivePool's __init__ held a Manager() object, which apparently isn't picklable. If we remove self.mgr and initialize the list proxy object in one line, the code runs correctly, just as in Hellman's example:

    def __init__(self):
        super(ActivePool, self).__init__()
        self.active = multiprocessing.Manager().list()
        self.lock = multiprocessing.Lock()
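
A quick way to confirm that the Manager object itself is what refuses to pickle, while the list proxy it hands out does not, is to try pickling each of them directly (a sketch; the exact exception type and message vary between Python versions):

import pickle
import multiprocessing

if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    for label, obj in [('Manager', mgr), ('list proxy', mgr.list())]:
        try:
            pickle.dumps(obj)
            print(label, 'pickles fine')
        except Exception as exc:
            print(label, 'failed to pickle:', type(exc).__name__, exc)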

1 Answer:

Answer 0 (score: 0)

  

Comment: 'join()' is in Hellman's example, but I forgot to add it to the snippet. Any other ideas?

I'm running Linux and it works as expected; Windows behaves differently. For background, read understanding-multiprocessing-shared-memory-management-locks-and-queues-in-pyt
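
The difference comes down to the default start method: Linux forks the worker processes, while Windows spawns them and therefore pickles every argument handed to a Process. A quick check of which method is in effect (Python 3.4+):

import multiprocessing

if __name__ == '__main__':
    # Prints 'fork' on Linux and 'spawn' on Windows by default
    print(multiprocessing.get_start_method())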

To determine which of the arguments in args=(s, pool) raises the error, remove one of them and use it as a global instead (a fuller sketch follows the snippet). Change:

def worker(s):
    ...

        args=(s,),
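
A fuller version of that change might look like the sketch below. It assumes the ActivePool class from the question and a fork start method (as on the Linux machine this was tested on); under Windows' spawn a module-level global is re-imported rather than inherited, so there it only helps to show that pool in args was what failed to pickle.

import random
import time
import multiprocessing

pool = None  # set in __main__; inherited by the workers under fork


def worker(s):
    name = multiprocessing.current_process().name
    with s:
        pool.makeActive(name)
        time.sleep(random.random())
        pool.makeInactive(name)


if __name__ == '__main__':
    pool = ActivePool()  # the class defined in the question
    s = multiprocessing.Semaphore(3)
    jobs = [
        multiprocessing.Process(target=worker, name=str(i), args=(s,))
        for i in range(10)
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()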
  

Note: It's not necessary to wrap the multiprocessing.Manager().list() operations with a Lock().
That is not the culprit of your error, either.

  

Question: Is there a way to make this example work with minimal changes?

Your __main__ process terminates, and so all of the started processes die at unpredictable points of execution. Add a simple .join() at the end of __main__ to wait for all processes to finish:

    for j in jobs:
        j.join()

    print('EXIT __main__')

Tested with Python: 3.4.2