Printing the progress of pool.map_async

Date: 2013-10-24 10:06:08

Tags: python multiprocessing

I have the following function:

from multiprocessing import Pool
import itertools

def do_comparison(tupl):
    x, y = tupl  # unpack arguments
    return compare_clusters(x, y)

def distance_matrix(clusters, condensed=False):
    pool = Pool()
    values = pool.map_async(do_comparison, itertools.combinations(clusters, 2)).get()
    # do stuff

Is it possible to print the progress of pool.map_async(do_comparison, itertools.combinations(clusters, 2)).get()? I tried it by adding a counter to do_comparison, like so:

count = 0
def do_comparison(tupl):
    global count
    count += 1
    if count % 1000 == 0:
        print(count)
    x, y = tupl  # unpack arguments
    return compare_clusters(x, y)

But apart from not seeming like a good solution, the numbers are not printed until the script has finished. Is there a good way to do this?

2 Answers:

Answer 0 (score: 2)

Richard's solution works well with a small number of jobs, but for some reason it seemed to freeze with a huge number of jobs, so I found it best to use:

import multiprocessing
import time

def track_job(job, update_interval=3):
    # Note: _number_left and _chunksize are private attributes of the
    # AsyncResult object and may change between Python versions.
    while job._number_left > 0:
        print("Tasks remaining = {0}".format(
            job._number_left * job._chunksize))
        time.sleep(update_interval)


def hi(x):  # This must be defined before `p` if we are to use it in the interpreter
    time.sleep(x // 2)
    return x

a = list(range(50))

p = multiprocessing.Pool()

res = p.map_async(hi, a)

track_job(res)

Answer 1 (score: 1)

I track progress as follows:

import multiprocessing
import time

class PoolProgress:
    def __init__(self, pool, update_interval=3):
        self.pool = pool
        self.update_interval = update_interval

    def track(self, job):
        # Note: _cache, _job, _number_left and _chunksize are private
        # implementation details of multiprocessing.Pool and may change
        # between Python versions.
        task = self.pool._cache[job._job]
        while task._number_left > 0:
            print("Tasks remaining = {0}".format(task._number_left * task._chunksize))
            time.sleep(self.update_interval)


def hi(x):  # This must be defined before `p` if we are to use it in the interpreter
    time.sleep(x // 2)
    return x

a = list(range(50))

p = multiprocessing.Pool()
pp = PoolProgress(p)

res = p.map_async(hi, a)

pp.track(res)