ProcessPoolExecutor logging fails to log inside function on Windows but not on Unix/Mac

Time: 2018-04-11 19:07:46

Tags: python multiprocessing concurrent.futures

When I run the script below on a Windows machine, I don't see any log messages from the log_pid function, but I do see them when I run it on Unix/Mac. I've read before that multiprocessing works differently on Windows than on Mac, but it's not clear to me what changes I should make to get this script working on Windows. I'm running Python 3.6.

import logging
import sys
from concurrent.futures import ProcessPoolExecutor
import os


def log_pid(x):
    logger.info('Executing on process: %s' % os.getpid())


def do_stuff():
    logger.info('this is the do stuff function.')
    with ProcessPoolExecutor(max_workers=4) as executor:
        executor.map(log_pid, range(0, 10))


def main():
    logger.info('this is the main function.')
    do_stuff()


if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    logger = logging.getLogger(__name__)

    logger.info('Start of script ...')

    main()

    logger.info('End of script ...')

1 Answer:

Answer 0 (score: 6)

Unix processes are created via the fork strategy: the child process is cloned from the parent and continues executing from the point at which the parent forked.

On Windows it is quite different: a blank process is created and a new Python interpreter is launched. The interpreter then loads the module containing the log_pid function and executes it.
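You can check which strategy your platform uses with the standard library's multiprocessing.get_start_method() (note that the macOS default also changed to "spawn" in Python 3.8):

```python
import multiprocessing

# Prints the default start method for the current platform:
# "fork" on Unix, "spawn" on Windows (and macOS since Python 3.8).
print(multiprocessing.get_start_method())
```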

This means the newly spawned child process does not execute the __main__ section. Consequently, the logger object is never created and the log_pid function crashes accordingly. You don't see the error because you ignore the results of the computation. Try modifying the logic as follows.

def do_stuff():
    logger.info('this is the do stuff function.')
    with ProcessPoolExecutor(max_workers=4) as executor:
        iterator = executor.map(log_pid, range(0, 10))
        list(iterator)  # collect the results in a list

The problem will become evident.

Traceback (most recent call last):
  File "C:\Program Files (x86)\Python36-32\lib\concurrent\futures\process.py", line 175, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "C:\Program Files (x86)\Python36-32\lib\concurrent\futures\process.py", line 153, in _process_chunk
    return [fn(*args) for args in chunk]
  File "C:\Program Files (x86)\Python36-32\lib\concurrent\futures\process.py", line 153, in <listcomp>
    return [fn(*args) for args in chunk]
  File "C:\Users\cafama\Desktop\pool.py", line 8, in log_pid
    logger.info('Executing on process: %s' % os.getpid())
NameError: name 'logger' is not defined

When dealing with process pools (whether concurrent.futures or multiprocessing), always collect the results of the computation to avoid confusing silent errors like this one.
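This lazy error propagation is easy to reproduce in isolation. A minimal sketch (using ThreadPoolExecutor so it behaves the same on every platform; Executor.map propagates exceptions identically for both pool types):

```python
from concurrent.futures import ThreadPoolExecutor


def fail(x):
    raise ValueError('boom on %s' % x)


with ThreadPoolExecutor(max_workers=2) as executor:
    results = executor.map(fail, range(3))  # returns immediately, no error yet
    try:
        list(results)  # consuming the iterator re-raises the worker's exception
    except ValueError as exc:
        print('caught:', exc)
```

If the iterator returned by map is never consumed, the ValueError is silently discarded, which is exactly why the original script showed no output and no error.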

To fix the problem, just move the logger creation to the top level of the module, and everything will work on all platforms.

import logging
import sys
from concurrent.futures import ProcessPoolExecutor
import os

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logger = logging.getLogger(__name__)

def log_pid(x):
    logger.info('Executing on process: %s' % os.getpid())

...
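If upgrading past the asker's Python 3.6 is an option, another approach (a sketch assuming Python 3.7+, where ProcessPoolExecutor gained an initializer parameter) is to configure logging once in each worker process, which works under both fork and spawn:

```python
import logging
import os
import sys
from concurrent.futures import ProcessPoolExecutor


def init_worker():
    # Runs once in every worker process, so logging is configured
    # regardless of the platform's start method.
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)


def log_pid(x):
    logging.getLogger(__name__).info('Executing on process: %s', os.getpid())


if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=4, initializer=init_worker) as executor:
        list(executor.map(log_pid, range(10)))  # consume results to surface errors
```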