How to reuse Popen's intermediate results in Python?

Asked: 2012-11-22 03:13:42

Tags: python linux process subprocess pipe

The code looks like this:

from subprocess import Popen, PIPE

p1 = Popen("command1", stdout=PIPE)
p2 = Popen("command2", stdin=p1.stdout, stdout=PIPE)
result_a = p2.communicate()[0]

p1_again = Popen("command1", stdout=PIPE)
p3 = Popen("command3", stdin=p1_again.stdout, stdout=PIPE)
result_b = p3.communicate()[0]

with open("test", "w") as tf:  # the file must be opened for writing to receive stdout
    p1_again_again = Popen("command1", stdout=tf)
    p1_again_again.communicate()

The bad part is:

command1 is executed three times, because once I have called communicate() on a Popen object, its stdout can no longer be used. I am just wondering whether there is a way to reuse the intermediate result of the PIPE.

Does anyone have an idea about how to make this code better (better performance as well as fewer lines of code)? Thanks!
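
(A minimal sketch of the limitation described above, using echo as a stand-in for command1: communicate() drains the pipe and closes it, so the output cannot be read a second time.)

from subprocess import Popen, PIPE

p = Popen(["echo", "hi"], stdout=PIPE)
print(p.communicate()[0])  # b'hi\n' -- the pipe is drained and closed here
try:
    p.stdout.read()        # a second read is impossible
except ValueError as e:
    print(e)               # I/O operation on closed file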

2 answers:

Answer 0 (score: 3)

Here is a working solution. I have put in example commands for cmd1, cmd2, and cmd3 so that you can run it. It simply takes the output of the first command, uppercases it in one command, and lowercases it in another.

Code:

from subprocess import Popen, PIPE
from tempfile import TemporaryFile

cmd1 = ['echo', 'Hi']
cmd2 = ['tr', '[:lower:]', '[:upper:]']
cmd3 = ['tr', '[:upper:]', '[:lower:]']

with TemporaryFile() as f:
    p = Popen(cmd1, stdout=f)
    ret_code = p.wait()  # make sure cmd1 has finished writing
    f.flush()
    f.seek(0)            # rewind so cmd2 reads from the start of the file
    out2 = Popen(cmd2, stdin=f, stdout=PIPE).stdout.read()
    f.seek(0)            # rewind again for cmd3
    out3 = Popen(cmd3, stdin=f, stdout=PIPE).stdout.read()
    print(out2.decode(), out3.decode(), sep='', end='')

Output:

HI
hi

A few things to note about the solution. The tempfile module is always a good approach when you need a temporary file: once the with block exits, the temporary file is deleted automatically as cleanup, even if an IO exception is raised inside the block. cmd1 runs once and writes its output to the temporary file; the call to the wait() method makes sure the execution has fully completed, and then we seek(0) each time so that when the read() method is called on f, it starts back at the beginning of the file. For reference, the question Saving stdout from subprocess.Popen to file helped me get the first part of the solution.
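
A minimal sketch of the same pattern using the newer subprocess.run API, assuming Python 3.5 or later:

from subprocess import run, PIPE
from tempfile import TemporaryFile

with TemporaryFile() as f:
    run(['echo', 'Hi'], stdout=f, check=True)  # run cmd1 exactly once
    f.seek(0)  # rewind before each consumer
    out2 = run(['tr', '[:lower:]', '[:upper:]'], stdin=f, stdout=PIPE, check=True).stdout
    f.seek(0)
    out3 = run(['tr', '[:upper:]', '[:lower:]'], stdin=f, stdout=PIPE, check=True).stdout
    print(out2.decode(), out3.decode(), sep='', end='')

Here check=True raises CalledProcessError if any command fails, replacing the manual wait()/return-code check.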

Answer 1 (score: 0)

If you can read all of command1's output into memory, you can then run command2 and command3 one after the other:

#!/usr/bin/env python
from subprocess import Popen, PIPE, check_output as qx

cmd1_output = qx(['ls']) # get all output

# run commands in sequence
results = [Popen(cmd, stdin=PIPE, stdout=PIPE).communicate(cmd1_output)[0]
           for cmd in [['cat'], ['tr', 'a-z', 'A-Z']]]
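
Note that communicate(cmd1_output) writes the captured bytes to each child's stdin and then closes it, so every consumer receives the complete, identical input.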

If command1 produces output too large to fit in memory, you could write it to a temporary file first, as @Marwan Alsabbagh suggested:

#!/usr/bin/env python
import tempfile
from subprocess import check_call, check_output as qx

with tempfile.TemporaryFile() as file: # deleted automatically on closing
    # run command1, wait for completion
    check_call(['ls'], stdout=file)

    # run commands in sequence
    results = []
    for cmd in [['cat'], ['tr', 'a-z', 'A-Z']]:
        file.seek(0)
        results.append(qx(cmd, stdin=file))

To process the subprocesses' input/output in parallel, you could use threading:

#!/usr/bin/env python3
from contextlib import ExitStack  # pip install contextlib2 (stdlib since 3.3)
from subprocess import Popen, PIPE
from threading import Thread

def tee(fin, *files):
    """Copy everything read from fin to each file in files, then close all of them."""
    try:
        for chunk in iter(lambda: fin.read(1 << 10), b''):  # read in 1 KiB chunks
            for f in files:  # fan out
                f.write(chunk)
    finally:
        for f in (fin,) + files:
            try:
                f.close()
            except OSError:
                pass

with ExitStack() as stack:
    # run commands asynchronously
    source_proc = Popen(["command1", "arg1"], stdout=PIPE)
    stack.callback(source_proc.wait)
    stack.callback(source_proc.stdout.close)

    processes = []
    for command in [["tr", "a-z", "A-Z"], ["cat"]]:
        processes.append(Popen(command, stdin=PIPE, stdout=PIPE))
        stack.callback(processes[-1].wait)
        stack.callback(processes[-1].stdout.close)  # use .terminate()
        stack.callback(processes[-1].stdin.close)   # if it doesn't kill it

    fout = open("test.txt", "wb")
    stack.callback(fout.close)

    # fan out source_proc's output to the file and to every consumer's stdin
    Thread(target=tee, args=([source_proc.stdout, fout] +
                             [p.stdin for p in processes])).start()

    # collect results in parallel, one reader thread per consumer
    results = [[] for _ in range(len(processes))]
    threads = [Thread(target=r.extend, args=[iter(p.stdout.readline, b'')])
               for p, r in zip(processes, results)]
    for t in threads: t.start()
    for t in threads: t.join()  # wait for completion

I use ExitStack here for proper cleanup, just in case.
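
For reference, a minimal sketch of the cleanup order: ExitStack runs its callbacks in last-in, first-out order on exit, which is why each child's stdin and stdout close callbacks above fire before the matching wait().

from contextlib import ExitStack

with ExitStack() as stack:
    stack.callback(print, "runs second")  # registered first, called last
    stack.callback(print, "runs first")   # registered last, called first
# output:
# runs first
# runs second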