I want to implement the following workflow in Python. In the example, I have 5 processes running in parallel. One process is designated as the manager, and the others are all workers. The manager repeatedly runs a control routine, cycling over the workers in a round-robin fashion, until all workers have stopped.
The main idea is that each worker starts with its own unique work list. However, while processing their work, workers add more items to their own lists, and in some cases they end up duplicating work. The manager's job is to avoid that duplicated work.
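To make that last point concrete, the de-duplication step I picture on the manager side would look roughly like this (an illustrative sketch only; the function name is a placeholder and this is not part of my test code below):

def deduplicate(work_lists):
    ##work_lists: one list of work items per worker
    seen = set()
    filtered = []
    for work_list in work_lists:
        unique = [item for item in work_list if item not in seen] ##drop items another worker already owns
        seen.update(unique)
        filtered.append(unique)
    return filtered ##one cleaned-up list per worker, in the same order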
I am currently trying to do this with Python multiprocessing and pipes. Here is my code, just to test the communication:
import multiprocessing as mp
import time
import random

def checkManager(i, data_pipe, com_pipe, work_list):
    com_out, com_in = com_pipe
    if com_out.poll(): ##check if there is anything to read
        msg = com_out.recv()
        print i, " received message ", msg
        if msg == "SEND":
            data_out, data_in = data_pipe
            data_in.send(work_list)
            work_list = com_out.recv()
    return work_list

def myfunc(i, data_pipe, com_pipe, work_list):
    print "starting worker ", i, " with list: ", work_list
    while work_list != []:
        time.sleep(3) ##sleep just to simulate some work delay
        work_list = checkManager(i, data_pipe, com_pipe, work_list) ##check if the manager wants to communicate
    print "stopping worker ", i

print "Starting..."
data_pipe = mp.Pipe() ##pipe to receive work lists from all workers
pipes = [] ##communication pipe for each worker
workers = []

##spawn workers
for i in range(0, 4):
    pipe = mp.Pipe() ##pipe for communication with that process
    r = random.randint(10, 100) ##create a random list just to test
    p = mp.Process(target=myfunc, args=(i, data_pipe, pipe, range(r))) ##create the process
    pipes.append(pipe)
    workers.append(p)
    p.start()

index = 0
stopped_workers = []
data_out, data_in = data_pipe
while len(stopped_workers) != len(workers):
    time.sleep(2)
    for i in range(0, len(workers)):
        if i in stopped_workers: ##check if the worker has already stopped
            continue
        r = random.randint(0, 100) ##just to avoid sending the request every time
        if r > 80:
            print "Communication with ", i
            output, input = pipes[i]
            input.send("SEND") ##send the message
            work_list = data_out.recv() ##block until data is received
            print "data: ", work_list
            input.send([]) ##send an empty list just to test if it stops
            stopped_workers.append(i) ##add to the workers that have already stopped
print "Stopping main"
In this simple test everything runs fine, but I would like to make it as efficient as possible, and there are a few things in my code that I don't like.
First, I think it would be more efficient if I had a mechanism to send a signal to the worker processes instead of having them call a check function from time to time. I tried using signals but never managed to get them to work properly. Besides that, I create as many pipes as there are processes, and I'm not sure that's the best solution. I have also looked at some examples that use multiprocessing.Pool, but it doesn't look like a good fit for my problem.
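For reference, this is roughly the kind of signal-based mechanism I have in mind (a simplified, Unix-only sketch with made-up names, reduced to one manager and one worker; not working code I can rely on):

import os
import signal
import time
import multiprocessing as mp

def worker(conn):
    state = {"send_requested": False}

    def handler(signum, frame):
        state["send_requested"] = True ##only set a flag; the real send happens in the work loop

    signal.signal(signal.SIGUSR1, handler)
    conn.send("READY") ##tell the manager the handler is installed, to avoid a race
    work_list = range(10)
    while work_list != []:
        time.sleep(1) ##simulate one unit of work
        if state["send_requested"]:
            conn.send(work_list)
            work_list = conn.recv() ##the manager sends back the filtered list
            state["send_requested"] = False

if __name__ == "__main__":
    parent_end, child_end = mp.Pipe()
    p = mp.Process(target=worker, args=(child_end,))
    p.start()
    parent_end.recv() ##wait for "READY"
    os.kill(p.pid, signal.SIGUSR1) ##"ping" the worker instead of having it poll a function
    print "worker list: ", parent_end.recv()
    parent_end.send([]) ##an empty list stops the worker
    p.join()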
Finally, would it be better to implement all of this with a Python MPI library?
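In case it helps frame the question, this is the shape I imagine the same exchange would take with mpi4py (an untested sketch of my own, run with something like mpiexec -n 2 python test.py; the tags and the one-manager/one-worker setup are just for illustration):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

TAG_REQUEST = 1 ##manager -> worker messages
TAG_DATA = 2 ##worker -> manager work lists

if rank == 0:
    ##manager: ask the worker for its list, then send back a (here empty) filtered list
    comm.send("SEND", dest=1, tag=TAG_REQUEST)
    work_list = comm.recv(source=1, tag=TAG_DATA)
    print "manager received: ", work_list
    comm.send([], dest=1, tag=TAG_REQUEST) ##empty list stops the worker, as in my test above
else:
    work_list = range(5)
    while work_list != []:
        ##real work would happen here between checks
        ##non-blocking check for a manager request (comm.Probe would block instead of spinning)
        if comm.Iprobe(source=0, tag=TAG_REQUEST):
            msg = comm.recv(source=0, tag=TAG_REQUEST)
            if msg == "SEND":
                comm.send(work_list, dest=0, tag=TAG_DATA)
                work_list = comm.recv(source=0, tag=TAG_REQUEST)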
Thanks in advance.