python multiprocessing: how to kill the listener when processing finishes

Date: 2018-07-18 14:36:45

Tags: python numpy for-loop multiprocessing

I am using python multiprocessing, where a number of workers do some work and put their output on a queue for a listener. The listener then appends their output to a numpy array, like this:

import numpy as np

def listener(q):
    global data

    while 1:
        m = q.get()
        if m == 'kill':
            print('FINISHED')
            # Save data to file
            break
        data = np.column_stack((data, m))

Here is the main code:

import multiprocessing as mp
import numpy as np

data = np.empty([0, 6])

manager = mp.Manager()
q = manager.Queue()
pool = mp.Pool(mp.cpu_count())
watcher = pool.apply_async(listener, (q,))

jobs = []
for i in range(100):
    job = pool.apply_async(process_data, (x1, x2, x3,))
    jobs.append(job)

for job in jobs:
    job.get()

q.put('kill')
pool.close()

process_data is a function that puts arrays onto the queue for the listener to pull from. The problem is that when I run the code, print('FINISHED') never executes, even though data does get filled, so I know the listener is working. I also get this warning in the output:

FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if m == 'kill':
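The warning itself points at the likely cause: when `m` is a NumPy array rather than the string `'kill'`, the expression `m == 'kill'` is handed to NumPy, which attempts an elementwise comparison against a string. One way to keep the sentinel test away from array payloads is to type-check first; `is_kill` below is a hypothetical helper sketched for illustration, not part of the original code:

```python
import numpy as np

def is_kill(m):
    # Type-check first, so the string comparison never touches an ndarray
    # and NumPy's elementwise-comparison machinery is never invoked.
    return isinstance(m, str) and m == 'kill'

# The sentinel still matches...
print(is_kill('kill'))            # True
# ...while an array payload short-circuits at the isinstance test.
print(is_kill(np.zeros((2, 6))))  # False
```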

To debug this, I commented out the creation of the worker jobs, as in the code below. Now FINISHED actually prints and the warning message disappears. Why is that?

data = np.empty([0, 6])

manager = mp.Manager()
q = manager.Queue()
pool = mp.Pool(mp.cpu_count())
watcher = pool.apply_async(listener, (q,))

jobs = []
#for i in range(100):
#    job = pool.apply_async(process_data, (x1, x2, x3,))
#    jobs.append(job)

#for job in jobs:
#    job.get()

q.put('kill')
pool.close()

0 Answers
