Python multiprocessing: run code in the child process when it is terminated by the parent

Time: 2014-03-12 09:59:32

Tags: exception python-2.7 return multiprocessing pysnmp

I would like to know whether there is a way to run some code in the child process when the parent process tries to terminate it. Is there some kind of exception handler I can write for this?

My code looks something like this:

main_process.py

from multiprocessing import Process
from time import sleep

def main():
    p1 = Process(target=child, args=(arg1,))  # arg1 defined elsewhere
    p1.daemon = True   # must be set before start(), otherwise an AssertionError is raised
    p1.start()
    #blah blah blah code here
    sleep(5)
    p1.terminate()

def child(arg1):
    #blah blah blah
    itemToSend = {}
    #more blah blah
    snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    except:
        snmpEngine.transportDispatcher.closeDispatcher()
        raise

Since the job never finishes, the child process keeps running forever. I have to terminate it from the parent process, because it will never terminate on its own. However, I would like itemToSend to be sent back to the parent process before the child is terminated. Can I somehow return it to the parent process?
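
For completeness (this is not from either answer below): on Unix, p1.terminate() delivers SIGTERM to the child, so one way to run cleanup code at that point is to install a signal handler inside the child. The following is only a minimal sketch under that assumption; on_terminate, the Pipe wiring and the itemToSend contents are illustrative, and it does not apply on Windows, where terminate() kills the process outright.

import signal
import sys
from multiprocessing import Process, Pipe
from time import sleep

def child(conn):
    itemToSend = {'partial': 'results'}      # built up during normal work

    def on_terminate(signum, frame):
        conn.send(itemToSend)                # hand the data to the parent first
        conn.close()
        sys.exit(0)                          # then exit cleanly

    signal.signal(signal.SIGTERM, on_terminate)   # runs when the parent calls terminate()
    while True:                                   # stand-in for the never-returning dispatcher
        sleep(1)

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    sleep(5)
    p.terminate()               # triggers SIGTERM -> on_terminate in the child
    print(parent_conn.recv())   # receives itemToSend
    p.join()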

Update: let me explain how runDispatcher() in the pysnmp module works:

def runDispatcher():
    while jobsArePending():  # jobs are always pending because of jobStarted() function
        loop()

def jobStarted(jobId):
    if jobId in jobs:        #This way there's always 1 job remaining
        jobs[jobId] = jobs[jobId] + 1
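
Presumably the counterpart looks roughly like this (my own paraphrase in the same style, not actual pysnmp source): jobFinished() decrements the counter again, and once no jobs remain, jobsArePending() turns false and runDispatcher() returns. That is the hook answer 0 below relies on.

def jobFinished(jobId):
    if jobId in jobs:
        jobs[jobId] = jobs[jobId] - 1
        if jobs[jobId] == 0:     # nothing pending for this job id any more
            del jobs[jobId]      # eventually jobsArePending() returns False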

This is getting really frustrating. Instead of going through all of this, would it be possible to write my own SNMP trap listener? Can you point me to the right resources for that?
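
As a starting point, SNMP traps are just UDP datagrams sent to port 162, so a bare-bones listener can be sketched with nothing but the standard library. The sketch below only receives the raw datagrams and does not decode the SNMP/BER payload; for actual decoding you would still want a library such as pysnmp. The function name is illustrative.

import socket

def listen_for_raw_traps(host='0.0.0.0', port=162):   # port 162 usually needs root privileges
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(65535)              # one datagram == one trap message
        print('%d bytes from %s:%s (undecoded SNMP message)' % (len(data), addr[0], addr[1]))

if __name__ == '__main__':
    listen_for_raw_traps()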

2 Answers:

Answer 0 (score: 2)

The .runDispatcher() method actually invokes the main loop of an asynchronous I/O engine (asyncore/twisted), which terminates as soon as no pysnmp 'jobs' are pending.

You can make the pysnmp dispatcher cooperate with the rest of your application by registering your own timer callback function, which will be invoked periodically from the main loop. In that callback you can check whether a termination event has arrived and clear the pysnmp 'job', which is what lets the pysnmp main loop complete.

def timerCb(timeNow):
    if terminationRequestedFlag:  # this flag is raised by an event from parent process
        # use the same jobId as in jobStarted()
        snmpEngine.transportDispatcher.jobFinished(1)  

snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)

Those pysnmp 'jobs' are just flags (like the '1' in the code above) that tell the I/O core that asynchronous applications still need it to run and serve them. Once the last of the potentially many applications loses interest in the I/O core's operation, the main loop terminates.
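
Putting that together with multiprocessing, a sketch could look like the following. It assumes snmpEngine is set up exactly as in the question (that part is omitted here); registerTimerCbFun() and jobFinished() are used as shown above, while the Event/Pipe wiring and the names are illustrative.

import multiprocessing as mp

def child(stop_event, conn):
    itemToSend = {}
    # ... snmpEngine / transport setup exactly as in the question ...
    snmpEngine.transportDispatcher.jobStarted(1)

    def timerCb(timeNow):
        if stop_event.is_set():                            # parent asked us to stop
            snmpEngine.transportDispatcher.jobFinished(1)  # lets runDispatcher() return

    snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    finally:
        snmpEngine.transportDispatcher.closeDispatcher()
        conn.send(itemToSend)        # report back before exiting normally
        conn.close()

if __name__ == '__main__':
    stop_event = mp.Event()
    parent_conn, child_conn = mp.Pipe(duplex=False)
    p = mp.Process(target=child, args=(stop_event, child_conn))
    p.start()
    # ... do other work ...
    stop_event.set()                 # cooperative shutdown instead of terminate()
    print(parent_conn.recv())        # itemToSend arrives here
    p.join()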

Answer 1 (score: 0)

If the child process can cooperate, then you can use a multiprocessing.Event to notify the child that it should exit, and a multiprocessing.Pipe to send itemToSend back to the parent:

#!/usr/bin/env python
import logging
import multiprocessing as mp
from threading import Timer

def child(stopped_event, conn):
    while not stopped_event.wait(1):
        pass
    mp.get_logger().info("sending")
    conn.send({'tosend': 'from child'})
    conn.close()

def terminate(process, stopped_event, conn):
    stopped_event.set() # nudge child process
    Timer(5, do_terminate, [process]).start()
    try:
        print(conn.recv())  # get value from the child
        mp.get_logger().info("received")
    except EOFError:
        mp.get_logger().info("eof")

def do_terminate(process):
    if process.is_alive():
        mp.get_logger().info("terminating")
        process.terminate()

if __name__ == "__main__":
    mp.log_to_stderr().setLevel(logging.DEBUG)
    parent_conn, child_conn = mp.Pipe(duplex=False)
    event = mp.Event()
    p = mp.Process(target=child, args=[event, child_conn])
    p.start()
    child_conn.close() # child must be the only one with it opened
    Timer(3, terminate, [p, event, parent_conn]).start()

Output:

[DEBUG/MainProcess] created semlock with handle 139845842845696
[DEBUG/MainProcess] created semlock with handle 139845842841600
[DEBUG/MainProcess] created semlock with handle 139845842837504
[DEBUG/MainProcess] created semlock with handle 139845842833408
[DEBUG/MainProcess] created semlock with handle 139845842829312
[INFO/Process-1] child process calling self.run()
[INFO/Process-1] sending
{'tosend': 'from child'}
[INFO/Process-1] process shutting down
[DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-1] running the remaining "atexit" finalizers
[INFO/MainProcess] received
[INFO/Process-1] process exiting with exitcode 0
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers