Python multiprocessing and a shared counter

Time: 2010-01-17 10:26:43

Tags: python multiprocessing

I'm having trouble with the multiprocessing module. I'm using a worker Pool with its map method to load data from a large number of files, and each file is analyzed with a custom function. Each time a file is processed I would like to have a counter updated so that I can keep track of how many files remain to be processed. Here is some sample code:

import os
from multiprocessing import Pool

def analyze_data(args):
    # do something with the file
    counter += 1
    print(counter)


if __name__ == '__main__':

    list_of_files = os.listdir(some_directory)

    global counter
    counter = 0

    p = Pool()
    p.map(analyze_data, list_of_files)

I can't figure out how to do this.

5 Answers:

Answer 0 (score: 54):

The problem is that the counter variable is not shared between your processes: each separate process is creating its own local instance and incrementing that.

See this section of the documentation for some techniques you can employ to share state between your processes. In your case you might want to share a Value instance between your workers.

Here's a working version of your example (with some dummy input data). Note that it uses global values, which I would really try to avoid in practice:

from multiprocessing import Pool, Value

counter = None

def init(args):
    ''' store the counter for later use '''
    global counter
    counter = args

def analyze_data(args):
    ''' increment the global counter, do something with the input '''
    global counter
    # += operation is not atomic, so we need to get a lock:
    with counter.get_lock():
        counter.value += 1
    print(counter.value)
    return args * 10

if __name__ == '__main__':
    #inputs = os.listdir(some_directory)

    #
    # initialize a cross-process counter and the input lists
    #
    counter = Value('i', 0)
    inputs = [1, 2, 3, 4]

    #
    # create the pool of workers, ensuring each one receives the counter 
    # as it starts. 
    #
    p = Pool(initializer=init, initargs=(counter,))
    i = p.map_async(analyze_data, inputs, chunksize=1)
    i.wait()
    print(i.get())
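A note on the design: the counter has to reach the workers through the Pool's initializer, because synchronized objects such as Value can only be shared through inheritance at process start; trying to send one to already-running workers as a map argument raises a RuntimeError.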

Answer 1 (score: 29):

A Counter class without the race-condition bug:

import multiprocessing

class Counter(object):
    def __init__(self):
        self.val = multiprocessing.Value('i', 0)

    def increment(self, n=1):
        with self.val.get_lock():
            self.val.value += n

    @property
    def value(self):
        return self.val.value
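A minimal usage sketch for this class (the init and worker helpers below are illustrative names, not part of the original answer): because the underlying Value must be shared through inheritance, the Counter is handed to each worker when the pool starts, via the initializer:

counter = None

def init(c):
    # stash the shared Counter in a module-level global for this worker
    global counter
    counter = c

def worker(x):
    counter.increment()
    return x

if __name__ == '__main__':
    shared = Counter()
    pool = multiprocessing.Pool(initializer=init, initargs=(shared,))
    pool.map(worker, range(100))
    pool.close()
    pool.join()
    print(shared.value)  # prints 100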

Answer 2 (score: 2):

A faster Counter class which avoids taking Value's built-in lock twice:

import multiprocessing

class Counter(object):
    def __init__(self, initval=0):
        self.val = multiprocessing.RawValue('i', initval)
        self.lock = multiprocessing.Lock()

    def increment(self):
        with self.lock:
            self.val.value += 1

    @property
    def value(self):
        return self.val.value

https://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing
https://docs.python.org/2/library/multiprocessing.html#multiprocessing.sharedctypes.Value
https://docs.python.org/2/library/multiprocessing.html#multiprocessing.sharedctypes.RawValue
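A quick sketch to exercise this version (my own illustration; it assumes the Counter class above is defined at module level, with multiprocessing imported): several processes increment the shared counter concurrently, and the final value confirms that no updates were lost:

def bump(counter, n):
    # increment the shared counter n times
    for _ in range(n):
        counter.increment()

if __name__ == '__main__':
    counter = Counter()
    procs = [multiprocessing.Process(target=bump, args=(counter, 10000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # prints 40000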

Answer 3 (score: 1):

An extremely simple example, adapted from jkp's answer:

from multiprocessing import Pool, Value
from time import sleep

counter = Value('i', 0)
def f(x):
    global counter
    with counter.get_lock():
        counter.value += 1
    print("counter.value:", counter.value)
    sleep(1)
    return x

with Pool(4) as p:
    r = p.map(f, range(1000*1000))
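Note that this relies on the worker processes inheriting the module-level counter when the pool forks; on start methods that spawn a fresh interpreter (Windows, and macOS on recent Python versions), you would pass the Value in through an initializer instead, as in answer 0.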

Answer 4 (score: 0):

I'm working on a progress bar in PyQt5, so I use a thread and a pool together:

import threading
import multiprocessing as mp
from queue import Queue

def multi(x):
    return x*x

def pooler(q):
    # run the pool in this thread and report progress through the queue
    with mp.Pool() as pool:
        count = 0
        for i in pool.imap_unordered(multi, range(100)):
            print(count, i)
            count += 1
            q.put(count)

def main():
    q = Queue()
    t = threading.Thread(target=pooler, args=(q,))
    t.start()
    print('start')
    process = 0
    while process < 100:
        process = q.get()
        print('p', process)
    t.join()

if __name__ == '__main__':
    main()

I put this in a QThread worker, and it works with acceptable latency.