Can I use fdpexpect on a file that is currently being written to?

Asked: 2014-09-10 15:35:38

Tags: python python-2.7 pexpect

I'm trying to wait, from Python, until certain text is written to a live log file.

fdpexpect seems like the right tool, but it doesn't wait: as soon as it reaches the end of the file it terminates.

I'm wondering whether fdpexpect supports this at all, or whether I need to work around it?

My code basically looks like this:

Creating the spawn object:

# we're not using pexpect.spawn because we want
# all the output to be written to the logfile in real time, 
# which spawn doesn't seem to support.

p = subprocess.Popen(command,
                     shell=shell,
                     stdout=spawnedLog.getFileObj(),
                     stderr=subprocess.STDOUT)
# give fdspawn the same file object we gave Popen
return (p, pexpect.fdpexpect.fdspawn(spawnedLog.getFileObj()))

Waiting for something:

pexpectObj.expect('something')

This basically exits immediately, with an EOF error, before the 'something' event ever happens.
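
A minimal sketch of the behaviour, assuming a placeholder log file name ("mylog.txt") and pattern:

import pexpect
import pexpect.fdpexpect

# fdspawn reads the existing contents of the file, and expect() raises
# pexpect.EOF as soon as it runs out of data, even though another process
# may still be appending to the file.
logfile = open("mylog.txt", "r")
watcher = pexpect.fdpexpect.fdspawn(logfile)
try:
    watcher.expect("something")
except pexpect.EOF:
    print("hit EOF before 'something' appeared")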

2 Answers:

Answer 0 (score: 2)

fdpexpect isn't really meant to work with regular files. pexpect will always read from the file object until it hits EOF; for pipes and sockets that doesn't happen until the connection is actually closed, but for a regular file it happens as soon as the entire file has been read. pexpect has no way of knowing that the file is being actively written to by another process.
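
To illustrate the EOF difference described above (a small sketch, not part of the original answer; the file name is a placeholder):

import os

# A regular file reports EOF as soon as its current contents have been
# consumed, even if another process may append more data later.
with open("demo.txt", "w") as f:
    f.write("hello\n")

f = open("demo.txt", "r")
print(repr(f.read()))           # 'hello\n'
print(repr(f.read()))           # '' -> immediate EOF
f.close()

# A pipe only reports EOF once the write end is closed; until then a read
# simply blocks waiting for more data.
r, w = os.pipe()
os.write(w, b"hi\n")
print(repr(os.read(r, 1024)))   # 'hi\n' (another read here would block, not EOF)
os.close(w)
print(repr(os.read(r, 1024)))   # '' -> EOF only after the writer closes
os.close(r)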

You can work around this by creating a pipe with os.pipe and then implementing your own tee functionality that writes your process's stdout both to that pipe and to the log file. Here's a small toy example that seems to work:

from subprocess import Popen, PIPE, STDOUT
from threading  import Thread
import os
import pexpect.fdpexpect

# tee and teed_call are based on http://stackoverflow.com/a/4985080/2073595

def tee(infile, *files):
    """Print `infile` to `files` in a separate thread."""
    def fanout(infile, *files):
        for line in iter(infile.readline, ''):
            for f in files:
                f.write(line)
        infile.close()
    t = Thread(target=fanout, args=(infile,)+files)
    t.daemon = True
    t.start()
    return t

def teed_call(cmd_args, files, **kwargs):
    p = Popen(cmd_args,
              stdout=PIPE,
              stderr=STDOUT,
              **kwargs)
    threads = []
    threads.append(tee(p.stdout, *files))
    return (threads, p)

with open("log.txt", 'w') as logf:
    # Create pipes for unbuffered reading and writing
    rpipe, wpipe = os.pipe()
    rpipe = os.fdopen(rpipe, 'r', 0)
    wpipe = os.fdopen(wpipe, 'w', 0)

    # Have pexpect read from the readable end of the pipe
    pobj = pexpect.fdpexpect.fdspawn(rpipe)

    # Call some script, and tee output to our log file and
    # the writable end of the pipe.
    threads, p = teed_call(["./myscript.sh"], [wpipe, logf])

    # myscript.sh will print 'hey'
    pobj.expect("hey")

    # orderly shutdown/cleanup
    for t in threads: t.join()
    p.wait()
    rpipe.close()
    wpipe.close()

Answer 1 (score: 0)

An alternative to dano's approach is to bite the bullet and use 'tail -f'.

It's a bit clunky, and it depends on 'tail' being available.

p = subprocess.Popen(command,
                     shell=shell,
                     stdout=spawnedLog.getFileObj(),
                     stderr=subprocess.STDOUT)

# this seems really dumb, but in order to follow the log
# file and not have fdpexpect quit because we encountered EOF,
# we're going to spawn *another* process to tail the log file
tailCommand = "tail -f %s" % spawnedLog.getPath()

# this is read-only; we're just going to look at the output
# logfile that's created
return (p, pexpect.spawn(tailCommand))
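
One thing the snippet above glosses over is cleanup: 'tail -f' never exits on its own, so once expect() has matched you need to shut the tail process down yourself. A minimal sketch, assuming placeholder names for the command ("./myscript.sh") and log path:

import subprocess
import pexpect

logpath = "spawned.log"                    # placeholder log path
logfile = open(logpath, "w")
p = subprocess.Popen(["./myscript.sh"],    # placeholder command
                     stdout=logfile,
                     stderr=subprocess.STDOUT)
pobj = pexpect.spawn("tail -f %s" % logpath)

pobj.expect("something")

# tail -f never terminates by itself, so shut it down explicitly,
# then reap the real process and close the log file.
pobj.close(force=True)
p.wait()
logfile.close()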