I am currently launching processes with subprocess.Popen(cmd, shell=True).
I'm fairly new to Python, but it feels like there should be an API that lets me do something like:
subprocess.Popen(cmd, shell=True, postexec_fn=function_to_call_on_exit)
I am doing this so that function_to_call_on_exit can do something based on knowing that cmd has exited (for example, keeping a count of the number of external processes currently running).
I assume I could fairly trivially wrap subprocess in a class that combines threading with the Popen.wait() method, but since I haven't done threading in Python yet, and it seems like this might be common enough for an API to exist, I thought I would try to find one first.
Thanks in advance :)
Answer 0 (score: 58)
You're right - there is no nice API for this. You're also right on your second point - it is very easy to design a function that does this for you using threading.
import threading
import subprocess

def popenAndCall(onExit, popenArgs):
    """
    Runs the given args in a subprocess.Popen, and then calls the function
    onExit when the subprocess completes.
    onExit is a callable object, and popenArgs is a list/tuple of args that
    you would give to subprocess.Popen.
    """
    def runInThread(onExit, popenArgs):
        proc = subprocess.Popen(*popenArgs)
        proc.wait()
        onExit()
        return

    thread = threading.Thread(target=runInThread, args=(onExit, popenArgs))
    thread.start()
    # returns immediately after the thread starts
    return thread
Even threading is pretty easy in Python, but be aware that if onExit() is computationally expensive, you'll want to put it in a separate process instead, using multiprocessing (so that the GIL doesn't slow your program down). This is actually very simple - you can basically just replace every call to threading.Thread with multiprocessing.Process, since they follow (almost) the same API.
Answer 1 (score: 15)
There is a concurrent.futures module in Python 3.2 (available via pip install futures for older Python < 3.2):
pool = Pool(max_workers=1)
f = pool.submit(subprocess.call, "sleep 2; echo done", shell=True)
f.add_done_callback(callback)
The callback will be called in the same process that called f.add_done_callback(). A complete working example:
import logging
import subprocess
# to install run `pip install futures` on Python <3.2
from concurrent.futures import ThreadPoolExecutor as Pool

info = logging.getLogger(__name__).info

def callback(future):
    if future.exception() is not None:
        info("got exception: %s" % future.exception())
    else:
        info("process returned %d" % future.result())

def main():
    logging.basicConfig(
        level=logging.INFO,
        format=("%(relativeCreated)04d %(process)05d %(threadName)-10s "
                "%(levelname)-5s %(msg)s"))

    # wait for the process completion asynchronously
    info("begin waiting")
    pool = Pool(max_workers=1)
    f = pool.submit(subprocess.call, "sleep 2; echo done", shell=True)
    f.add_done_callback(callback)
    pool.shutdown(wait=False)  # no .submit() calls after that point
    info("continue waiting asynchronously")

if __name__ == "__main__":
    main()
$ python . && python3 .
0013 05382 MainThread INFO begin waiting
0021 05382 MainThread INFO continue waiting asynchronously
done
2025 05382 Thread-1 INFO process returned 0
0007 05402 MainThread INFO begin waiting
0014 05402 MainThread INFO continue waiting asynchronously
done
2018 05402 Thread-1 INFO process returned 0
Answer 2 (score: 12)
I modified Daniel G's answer to simply pass the subprocess.Popen args and kwargs as themselves instead of as a separate tuple/list, since I wanted to use keyword arguments with subprocess.Popen.
In my case I had a method postExec() that I wanted to run after subprocess.Popen('exe', cwd=WORKING_DIR).
With the code below, it simply becomes popenAndCall(postExec, 'exe', cwd=WORKING_DIR).
import threading
import subprocess

def popenAndCall(onExit, *popenArgs, **popenKWArgs):
    """
    Runs a subprocess.Popen, and then calls the function onExit when the
    subprocess completes.

    Use it exactly the way you'd normally use subprocess.Popen, except include a
    callable to execute as the first argument. onExit is a callable object, and
    *popenArgs and **popenKWArgs are simply passed up to subprocess.Popen.
    """
    def runInThread(onExit, popenArgs, popenKWArgs):
        proc = subprocess.Popen(*popenArgs, **popenKWArgs)
        proc.wait()
        onExit()
        return

    thread = threading.Thread(target=runInThread,
                              args=(onExit, popenArgs, popenKWArgs))
    thread.start()

    return thread  # returns immediately after the thread starts
Answer 3 (score: 6)
I had the same problem and solved it with multiprocessing.Pool. There are two hacky tricks involved: make the size of the pool 1, and pass the arguments inside an iterable of length 1.
The result is one function that executes a callback on completion.
import multiprocessing

def sub(arg):
    print arg      # prints [1,2,3,4,5]
    return "hello"

def cb(arg):
    print arg      # prints "hello"

pool = multiprocessing.Pool(1)
rval = pool.map_async(sub, ([[1,2,3,4,5]]), callback=cb)
# (do stuff)
pool.close()
In my case, I wanted the call to be non-blocking as well. It works beautifully.
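Applied to the original question, the same Pool trick can drive subprocess directly; here is a rough sketch (the command string and the on_exit callback are placeholders), using apply_async, which avoids the length-1 iterable dance of map_async:

import multiprocessing
import subprocess

def on_exit(returncode):
    # called in the parent process once the command finishes
    print("command exited with return code %s" % returncode)

if __name__ == "__main__":
    pool = multiprocessing.Pool(1)
    pool.apply_async(subprocess.call, ("sleep 2; echo done",),
                     {"shell": True}, callback=on_exit)
    # ... do other work here; the callback fires when the command completes ...
    pool.close()
    pool.join()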
Answer 4 (score: 2)
I was inspired by Daniel G.'s answer and implemented a very simple use case - in my work I often need to make repeated calls to the same (external) process with different arguments. I had hacked together a way to determine when each particular call was done, but now I have a much cleaner way to issue callbacks.
I like this implementation because it is very simple, yet it allows me to issue asynchronous calls to multiple processors (note that I use multiprocessing instead of threading) and receive notification upon completion.
I tested the sample program and it works great. Please feel free to edit and provide feedback.
import multiprocessing
import subprocess

class Process(object):
    """This class spawns a subprocess asynchronously and calls a
    `callback` upon completion; it is not meant to be instantiated
    directly (derived classes are called instead)"""
    def __call__(self, *args):
        # store the arguments for later retrieval
        self.args = args
        # define the target function to be called by
        # `multiprocessing.Process`
        def target():
            cmd = [self.command] + [str(arg) for arg in self.args]
            process = subprocess.Popen(cmd)
            # the `multiprocessing.Process` process will wait until
            # the call to the `subprocess.Popen` object is completed
            process.wait()
            # upon completion, call `callback`
            return self.callback()
        mp_process = multiprocessing.Process(target=target)
        # this call issues the call to `target`, but returns immediately
        mp_process.start()
        return mp_process

if __name__ == "__main__":

    def squeal(who):
        """this serves as the callback function; its argument is the
        instance of a subclass of Process making the call"""
        print "finished %s calling %s with arguments %s" % (
            who.__class__.__name__, who.command, who.args)

    class Sleeper(Process):
        """Sample implementation of an asynchronous process - define
        the command name (available in the system path) and a callback
        function (previously defined)"""
        command = "./sleeper"
        callback = squeal

    # create an instance of Sleeper - this is the Process object that
    # can be called repeatedly in an asynchronous manner
    sleeper_run = Sleeper()

    # spawn three sleeper runs with different arguments
    sleeper_run(5)
    sleeper_run(2)
    sleeper_run(1)

    # the user should see the following message immediately (even
    # though the Sleeper calls are not done yet)
    print "program continued"
Sample output:
program continued
finished Sleeper calling ./sleeper with arguments (1,)
finished Sleeper calling ./sleeper with arguments (2,)
finished Sleeper calling ./sleeper with arguments (5,)
Here is the source code of sleeper.c - my sample "time-consuming" external process:
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]){
    unsigned int t = atoi(argv[1]);
    sleep(t);
    return EXIT_SUCCESS;
}
Compile with:
gcc -o sleeper sleeper.c
Answer 5 (score: 0)
AFAIK there is no such API, at least not in the subprocess module. You need to roll something yourself, possibly using threads.
Answer 6 (score: 0)
Since 3.2 there is also a ProcessPoolExecutor in concurrent.futures (https://docs.python.org/3/library/concurrent.futures.html). Usage is the same as for the ThreadPoolExecutor shown above; the exit callback is attached via .add_done_callback() on the Future returned by executor.submit().
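A minimal sketch of that usage might look like the following (assuming Python 3.2+; the callback receives the finished Future, mirroring the ThreadPoolExecutor example above):

import subprocess
from concurrent.futures import ProcessPoolExecutor

def on_done(future):
    # called once the submitted subprocess.call has finished
    print("process returned %d" % future.result())

if __name__ == "__main__":  # guard required for ProcessPoolExecutor on Windows/macOS
    executor = ProcessPoolExecutor(max_workers=1)
    future = executor.submit(subprocess.call, "sleep 2; echo done", shell=True)
    future.add_done_callback(on_done)
    executor.shutdown(wait=False)  # no further .submit() calls after this point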