Python multiprocessing - AssertionError: can only join a child process

Date: 2016-06-08 02:22:51

Tags: python python-2.7 unix python-multiprocessing

This is my first attempt at using Python's multiprocessing module, and I've run into some problems. I'm very familiar with the threading module, but I need to make sure the processes I'm executing run in parallel.

Here's an outline of what I'm trying to do. Please ignore things like undeclared variables and functions, because I can't paste my code in full.

import multiprocessing
import time

def wrap_func_to_run(host, args, output):
    output.append(do_something(host, args))
    return

def func_to_run(host, args):
    return do_something(host, args)

def do_work(server, client, server_args, client_args):
    server_output = func_to_run(server, server_args)
    client_output = func_to_run(client, client_args)
    #handle this output and return a result
    return result

def run_server_client(server, client, server_args, client_args, server_output, client_output):
    server_process = multiprocessing.Process(target=wrap_func_to_run, args=(server, server_args, server_output))
    server_process.start()  
    client_process = multiprocessing.Process(target=wrap_func_to_run, args=(client, client_args, client_output))
    client_process.start()
    server_process.join()
    client_process.join()
    #handle the output and return some result    

def run_in_parallel(server, client):
    #set up commands for first process
    # note: both names bind the same list here, and appends made in a
    # child process won't propagate back to the parent anyway
    server_output = client_output = []
    server_cmd = "cmd"
    client_cmd = "cmd"
    process_one = multiprocessing.Process(target=run_server_client, args=(server, client, server_cmd, client_cmd, server_output, client_output))
    process_one.start()
    #set up second process to run - but this one can run here
    result = do_work(server, client, "some server args", "some client args")
    process_one.join()
    #use outputs above and the result to determine result
    return final_result

def main():
    #grab client
    client = client()
    #grab server
    server = server()
    return run_in_parallel(server, client)

if __name__ == "__main__":
    main()

Here is the error I'm getting:

Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib64/python2.7/multiprocessing/util.py", line 319, in _exit_function
    p.join()
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 143, in join
    assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process

I've tried a lot of different things to fix this, but my feeling is that there's something wrong with the way I'm using this module.

EDIT:

So I've created a file that reproduces this by simulating the client/server and the work they do - I also missed an important point, which is that I'm running this on Unix. Another important piece of information is that do_work in my actual case involves using os.fork(). I was unable to reproduce the error without using os.fork(), so I'm assuming the problem lies there. In my real-world case, that part of the code isn't mine, so I was treating it as a black box (likely my mistake). Anyways, here's the code to reproduce it -

#!/usr/bin/python

import multiprocessing
import time
import os
import signal
import sys

class Host():
    def __init__(self):
        self.name = "host"

    def work(self):
        #override - use to simulate work
        pass

class Server(Host):
    def __init__(self):
        self.name = "server"

    def work(self):
        x = 0
        for i in range(10000):
            x+=1
        print x
        time.sleep(1)

class Client(Host):
    def __init__(self):
        self.name = "client"

    def work(self):
        x = 0
        for i in range(5000):
            x+=1
        print x
        time.sleep(1)

def func_to_run(host, args):
    print host.name + " is working"
    host.work()
    print host.name + ": " + args
    return "done"

def do_work(server, client, server_args, client_args):
    print "in do_work"
    server_output = client_output = ""
    child_pid = os.fork()
    if child_pid == 0:
        server_output = func_to_run(server, server_args)
        sys.exit(server_output)
    time.sleep(1)

    client_output = func_to_run(client, client_args)
    # kill and wait for server to finish
    os.kill(child_pid, signal.SIGTERM)
    (pid, status) = os.waitpid(child_pid, 0)

    return (server_output == "done" and client_output =="done")

def run_server_client(server, client, server_args, client_args):
    server_process = multiprocessing.Process(target=func_to_run, args=(server, server_args))
    print "Starting server process"
    server_process.start()
    client_process = multiprocessing.Process(target=func_to_run, args=(client, client_args))
    print "Starting client process"
    client_process.start()
    print "joining processes"
    server_process.join()
    client_process.join()
    print "processes joined and done"

def run_in_parallel(server, client):
    #set up commands for first process
    server_cmd = "server command for run_server_client"
    client_cmd = "client command for run_server_client"
    process_one = multiprocessing.Process(target=run_server_client, args=(server, client, server_cmd, client_cmd))
    print "Starting process one"
    process_one.start()
    #set up second process to run - but this one can run here
    print "About to do work"
    result = do_work(server, client, "server args from do work", "client args from do work")
    print "Joining process one"
    process_one.join()
    #use outputs above and the result to determine result
    print "Process one has joined"
    return result

def main():
    #grab client
    client = Client()
    #grab server
    server = Server()
    return run_in_parallel(server, client)

if __name__ == "__main__":
    main()

If I remove the use of os.fork() in do_work I don't get the error, and the code behaves as I would have expected it to before (except for the passing of outputs, which I've accepted as my own mistake/misunderstanding). I can change the old code to not use os.fork(), but I'd also like to know why this caused the problem and whether there's a workable solution.
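For reference, a minimal fork-free rewrite of do_work might look like the sketch below (do_work_no_fork is an illustrative name, and the server output is not captured here - EDIT 2 below handles that properly with a Queue):

def do_work_no_fork(server, client, server_args, client_args):
    # let multiprocessing own the second process instead of a raw
    # os.fork(), so its atexit handler only joins children it started
    server_process = multiprocessing.Process(target=func_to_run,
                                             args=(server, server_args))
    server_process.start()
    client_output = func_to_run(client, client_args)
    server_process.join()
    return client_output == "done"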

编辑2:

I had started working on a solution that omits os.fork() before the accepted answer was posted. Here's what I came up with, with some adjustment to how much simulated work can be done -

#!/usr/bin/python

import multiprocessing
import time
import os
import signal
import sys
from Queue import Empty

class Host():
    def __init__(self):
        self.name = "host"

    def work(self, w):
        #override - use to simulate work
        pass

class Server(Host):
    def __init__(self):
        self.name = "server"

    def work(self, w):
        x = 0
        for i in range(w):
            x+=1
        print x
        time.sleep(1)

class Client(Host):
    def __init__(self):
        self.name = "client"

    def work(self, w):
        x = 0
        for i in range(w):
            x+=1
        print x
        time.sleep(1)

def func_to_run(host, args, w, q):
    print host.name + " is working"
    host.work(w)
    print host.name + ": " + args
    q.put("ZERO")
    return "done"

def handle_queue(queue):
    done = False
    results = []
    return_val = 0
    while not done:
        #try to grab item from Queue
        tr = None
        try:
            tr = queue.get_nowait()
            print "found element in queue"
            print tr
        except Empty:
            done = True
        if tr is not None:
            results.append(tr)
    for el in results:
        if el != "ZERO":
            return_val = 1
    return return_val

def do_work(server, client, server_args, client_args):
    # old fork()-based version, no longer called (run_in_parallel below
    # now calls run_server_client directly); note this still uses
    # func_to_run's old two-argument signature
    print "in do_work"
    server_output = client_output = ""
    child_pid = os.fork()
    if child_pid == 0:
        server_output = func_to_run(server, server_args)
        sys.exit(server_output)
    time.sleep(1)

    client_output = func_to_run(client, client_args)
    # kill and wait for server to finish
    os.kill(child_pid, signal.SIGTERM)
    (pid, status) = os.waitpid(child_pid, 0)

    return (server_output == "done" and client_output =="done")



def run_server_client(server, client, server_args, client_args, w, mq):
    local_queue = multiprocessing.Queue()
    server_process = multiprocessing.Process(target=func_to_run, args=(server, server_args, w, local_queue))
    print "Starting server process"
    server_process.start()
    client_process = multiprocessing.Process(target=func_to_run, args=(client, client_args, w, local_queue))
    print "Starting client process"
    client_process.start()
    print "joining processes"
    server_process.join()
    client_process.join()
    print "processes joined and done"
    if handle_queue(local_queue) == 0:
        mq.put("ZERO")

def run_in_parallel(server, client):
    #set up commands for first process
    master_queue = multiprocessing.Queue()
    server_cmd = "server command for run_server_client"
    client_cmd = "client command for run_server_client"
    process_one = multiprocessing.Process(target=run_server_client, args=(server, client, server_cmd, client_cmd, 400000000, master_queue))
    print "Starting process one"
    process_one.start()
    #set up second process to run - but this one can run here
    print "About to do work"
    #result = do_work(server, client, "server args from do work", "client args from do work")
    run_server_client(server, client, "server args from do work", "client args from do work", 5000, master_queue)
    print "Joining process one"
    process_one.join()
    #use outputs above and the result to determine result
    print "Process one has joined"
    return_val = handle_queue(master_queue)
    print return_val
    return return_val

def main():
    #grab client
    client = Client()
    #grab server
    server = Server()
    val = run_in_parallel(server, client)
    if val:
        print "failed"
    else:
        print "passed"
    return val

if __name__ == "__main__":
    main()

This code has some tweaked printouts just to see exactly what's happening. I used a multiprocessing.Queue to store and share the outputs across the processes and back into my main thread to be handled. I think this solves the Python portion of my problem, but there are still some issues in the code I'm actually working on. The only other thing I can say is that the equivalent of func_to_run involves sending a command over ssh and grabbing any errors along with the output. For some reason this works perfectly fine for commands with a low execution time, but not so well for commands with a much larger execution time/output. I tried simulating this with drastically different work values in my code here, but wasn't able to reproduce similar results.

编辑3 我正在使用的库代码(再次不是我的)使用Popen.wait()作为ssh命令,我只读了这个:

> Popen.wait()
> Wait for child process to terminate. Set and return returncode attribute.
>
> Warning: This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.

I adjusted the code to not buffer, and just print the output as it is received, and everything works fine.
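For illustration, the unbuffered approach looks roughly like the sketch below (the ssh command is a placeholder, not the library's actual invocation):

import subprocess

# Read the child's output as it arrives so the OS pipe buffer never
# fills and blocks the child; the command here is just a placeholder.
proc = subprocess.Popen(["ssh", "host", "some_command"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
for line in iter(proc.stdout.readline, ''):
    print(line.rstrip())   # consume and print immediately, no buffering
proc.stdout.close()
return_code = proc.wait()  # safe now that the pipe has been drained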

2 Answers:

Answer 0 (score: 3)

> I can change the old code to not use os.fork() but I'd also like to know why this caused the problem and if there's a workable solution.

The key to understanding the problem is knowing exactly what fork() does. The CPython docs state "Fork a child process." but this presumes you understand the C library call fork().

Here's what glibc's manual says about it:

> fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent, except for the following points: ...

It's basically as if you took your program and made a copy of its program state (heap, stack, instruction pointer, etc.), with small differences, and let it execute independently of the original. When this child process exits naturally, it will use exit(), and that will trigger the atexit() handlers registered by the multiprocessing module.
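To see the failure concretely, here is a minimal sketch (not from the original post) that triggers the same assertion - the forked child inherits the parent's Process object, and the child's atexit handler tries to join it:

import multiprocessing
import os
import sys
import time

def worker():
    time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    if os.fork() == 0:
        # The child inherits `p`. sys.exit() runs atexit handlers, and
        # multiprocessing's handler calls p.join(), whose assertion
        # self._parent_pid == os.getpid() is false in the child, giving
        # "AssertionError: can only join a child process".
        sys.exit(0)
    os.wait()   # reap the forked child
    p.join()    # fine here: we are the process that started p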

What can you do to avoid it?

  • omit os.fork(): use multiprocessing instead, like you are exploring now
  • probably effective: import multiprocessing after performing the fork(), only in the child or parent as necessary
  • use _exit() in the child (the CPython docs state: "Note: The standard way to exit is sys.exit(n). _exit() should normally only be used in the child process after a fork()."); see the sketch after the link below

https://docs.python.org/2/library/os.html#os._exit
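A sketch of that last option, applied to a fork-based child like the one in the question's do_work (names taken from the question's code):

child_pid = os.fork()
if child_pid == 0:
    try:
        func_to_run(server, server_args)
    finally:
        # os._exit() skips atexit handlers, so multiprocessing's
        # exit function never tries to join the parent's processes
        os._exit(0)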

Answer 1 (score: 0)

It seems to me that you're spawning one level of processes too many. I wouldn't start a process from run_in_parallel, but simply call run_server_client with the proper arguments, since it spawns processes internally.
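One reading of this suggestion, as a sketch (note it drops the top-level parallelism the question asks for, which may or may not be acceptable):

def run_in_parallel(server, client):
    # no wrapper process around run_server_client - each call already
    # starts and joins its own server and client processes
    run_server_client(server, client, "first server cmd", "first client cmd")
    run_server_client(server, client, "second server cmd", "second client cmd")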