Cannot stop/kill all processes spawned by multiprocessing.Pool

Date: 2016-04-04 13:20:46

Tags: python python-3.x multiprocessing

I need to stop/kill all processes whenever any error/exception occurs. I found a Stack Overflow solution that uses psutil to kill all processes, but from time to time I run into a problem: while psutil is killing the child processes and the main process, new processes may start and the code keeps executing.

import os
import multiprocessing

import psutil

class MyClass:
    parent_pid = 0
    ids_list = range(300)

    def main(self):
        self.parent_pid = os.getpid()
        pool = multiprocessing.Pool(3)

        for osm_id in self.ids_list:
            pool.apply_async(self.handle_country_or_region,  
                             kwds=dict(country_id=osm_id),
                             error_callback=self.kill_proc_tree)

        pool.close()
        pool.join()

    def kill_proc_tree(self, err=None, including_parent=True):
        # error_callback passes the exception as the first argument
        parent = psutil.Process(self.parent_pid)
        children = parent.children(recursive=True)

        for child in children:
            child.kill()
        psutil.wait_procs(children, timeout=5)

        if including_parent:
            parent.kill()
            parent.wait(5)

    def handle_country_or_region(self, country_id=None, queue=None):
        pass
        # here I do some task

It seems I need to terminate the pool rather than kill the individual processes, but in that case, if I do

pool.close()
pool.terminate()
pool.join()

my terminal stops doing anything: the new line is completely empty (i.e. there is no ">>>") and nothing happens.

Ideally, I would like the following flow: on any error/exception, stop/kill all code execution and return to the interactive prompt in the terminal.

Can anyone help me get this working properly? I am using Python 3.5 and Ubuntu 15.10.

1 answer:

Answer 0 (score: 1)

The solution is very simple: put the "killer" function inside main.

The full code looks like this:

import multiprocessing

class MyClass:
    ids_list = range(300)

    def main(self):
        pool = multiprocessing.Pool(3)

        def kill_pool(err_msg):
            print(err_msg)
            pool.terminate()

        for osm_id in self.ids_list:
            pool.apply_async(self.handle_country_or_region,     
                             kwds=dict(country_id=osm_id),
                             error_callback=kill_pool)

        pool.close()
        pool.join()

    def handle_country_or_region(self, country_id=None, queue=None):
        pass  # here I do some task
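For comparison, a pattern that avoids relying on the pool's callback thread altogether is to collect the AsyncResult handles and call .get() on them in the main process; the first exception is then re-raised there, where pool.terminate() is unambiguously safe to call. A minimal self-contained sketch (the task function and its failure condition are made up for illustration):

```python
import multiprocessing

def task(n):
    # hypothetical worker: fails for one specific input
    if n == 5:
        raise ValueError('boom on {}'.format(n))
    return n * 2

def run_until_first_error():
    pool = multiprocessing.Pool(3)
    results = [pool.apply_async(task, (n,)) for n in range(10)]
    try:
        values = [r.get(timeout=30) for r in results]  # re-raises worker exceptions here
        pool.close()
        return values
    except ValueError as err:
        pool.terminate()  # safe: we are in the main process, not a callback thread
        return str(err)
    finally:
        pool.join()

if __name__ == '__main__':
    print(run_until_first_error())  # -> boom on 5
```

The trade-off is that .get() checks the results in submission order, so the pool is only torn down once the main process reaches the failed result, rather than the instant the error occurs.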

If anyone needs to use a queue, below is an extended variant of the code that shows how to handle the queue the right way, avoiding zombie processes:

import pickle
import os
import multiprocessing

class MyClass:
    ids_list = list(range(300))  # a list, so that .remove() works below
    folder = os.path.join(os.getcwd(), 'app_geo')
    STOP_TOKEN = 'stop queue'

    def main(self):

        # >>> Queue part shared between processes <<<
        manager = multiprocessing.Manager()
        remove_id_queue = manager.Queue()

        remove_id_process = multiprocessing.Process(target=self.remove_id_from_file,
                                                    args=(remove_id_queue,))
        remove_id_process.start()
        # >>> End of queue part <<<

        pool = multiprocessing.Pool(3)

        def kill_pool(err_msg):
            print(err_msg)
            pool.terminate()

        for osm_id in self.ids_list:
            pool.apply_async(self.handle_country_or_region,
                             kwds=dict(country_id=osm_id, queue=remove_id_queue),
                             error_callback=kill_pool)

        pool.close()
        pool.join()

        # >>> Anti-zombie processes queue part <<<
        remove_id_queue.put(self.STOP_TOKEN)
        remove_id_process.join()
        manager.shutdown()
        # >>> End

    def handle_country_or_region(self, country_id=None, queue=None):
        # here I do some task
        queue.put(country_id)

    def remove_id_from_file(self, some_queue):
        while True:
            osm_id = some_queue.get()
            if osm_id == self.STOP_TOKEN:
                return
            self.ids_list.remove(osm_id)
            with open(self.folder + '/ids_list.pickle', 'wb') as f:
                pickle.dump(self.ids_list, f)
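The STOP_TOKEN trick above is the standard sentinel pattern: the consumer process blocks on queue.get() until the producer pushes the sentinel, after which it returns and can be join()ed cleanly instead of lingering as a zombie. A stripped-down sketch of the same idea (the consumer function and the second result queue are illustrative, not part of the answer's code):

```python
import multiprocessing

STOP_TOKEN = 'stop queue'

def consumer(in_queue, out_queue):
    # drain the queue until the sentinel arrives, then report and exit
    collected = []
    while True:
        item = in_queue.get()
        if item == STOP_TOKEN:
            out_queue.put(collected)
            return
        collected.append(item)

def run():
    manager = multiprocessing.Manager()
    in_queue = manager.Queue()
    out_queue = manager.Queue()

    worker = multiprocessing.Process(target=consumer,
                                     args=(in_queue, out_queue))
    worker.start()

    for i in range(5):
        in_queue.put(i)
    in_queue.put(STOP_TOKEN)  # tell the consumer it is done

    worker.join()             # returns promptly thanks to the sentinel
    result = out_queue.get()
    manager.shutdown()
    return result

if __name__ == '__main__':
    print(run())
```

Without the sentinel, the consumer would block forever on queue.get() and join() would hang, which is exactly the zombie-process situation the answer is guarding against.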