I am writing an optimization algorithm that starts from several different initial conditions to increase the chance of finding the global optimum. I am trying to make the code run faster by using the multiprocessing library and running the optimizations in separate processes.
This is how my code currently works, more or less:
from multiprocessing import Process, Queue
from SupportCostModel.SupportStructure import SupportStructure, SupportType

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, qOut):
    optimizeObject.Optimize()
    qOut.put(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):
    for i in range(n):
        # Create a new process with a new nested object that should be optimized
        p = Process(target = optimizeAlgoritm, args = (SupportStructure(SupportType.Monopile), qOut))
        processes.append(p)
        p.daemon = True
        p.start()

# Part the main thread is running
if __name__ == '__main__':
    qOut = Queue()
    processes = []

    # Run the code on 6 processes
    getOptimumalObject(6)

    # Wait for all workers to finish
    for i in range(len(processes)):
        processes[i].join()

    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.
    while not qOut.empty():
        optimizeObject = qOut.get()
        if optimizeObject.GetTotalMass() < minimum:
            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()

    print(bestObject.GetTotalMass())
This code runs fine as long as I use no more than 4 processes. If I run more than 4, say 6 as in the example, two of the processes get stuck at the end of their work and the program never stops, because the main thread keeps hanging on processes[i].join(). I think those two processes run into a problem at the qOut.put() call in optimizeAlgoritm. When I remove qOut.put(), the code exits with an error that bestObject does not exist, as expected. Strangely, though, if I print something right after qOut.put(), for example the object's minimum, it does get printed, but the process then stays alive using 0% CPU. This in turn keeps the main program alive.
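For reference, the Python multiprocessing documentation warns about exactly this pattern: a process that has put items on a Queue will not terminate until all buffered items have been flushed to the underlying pipe, so joining a worker before draining the queue can deadlock. A minimal sketch of the drain-before-join order, reusing the names from the snippet above:

# Collect one result per worker *before* joining, so each worker's
# queue feeder thread can flush its buffer and the process can exit.
results = []
for _ in range(len(processes)):
    results.append(qOut.get())

for p in processes:
    p.join()

bestObject = min(results, key=lambda obj: obj.GetTotalMass())
print(bestObject.GetTotalMass())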
I am very new to multiprocessing, and I understand that OOP and multiprocessing don't always play well together. Am I using the wrong approach here? It is a bit frustrating, because it almost works, just not with more than 4 processes.
Thanks in advance!
Answer 0 (score: 0)
I ended up using a pipe to send my objects instead!
This is the code I use now:
from multiprocessing import Process, Pipe
from SupportCostModel.SupportStructure import SupportStructure, SupportType
import random

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, conn):
    optimizeObject.Optimize()
    # Send the optimized object back to the main process
    conn.send(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):
    connections = []
    for i in range(n):
        # Create a pipe for each of the processes that is started
        parent_conn, child_conn = Pipe()
        # Save the parent connections
        connections.append(parent_conn)

        # Create objects that need to be optimized using different initial conditions
        if i == 0:
            structure = SupportStructure(SupportType.Monopile)
        else:
            structure = SupportStructure(SupportType.Monopile)
            structure.properties.D_mp = random.randrange(4., 10.)
            structure.properties.Dtrat_tower = random.randrange(90., 120.)
            structure.properties.Dtrat_mud = random.randrange(60., 100.)
            structure.properties.Dtrat_mp = random.randrange(60., 100.)
            structure.UpdateAll()

        # Create a new process with a new nested object that should be optimized
        p = Process(target = optimizeAlgoritm, args = (structure, child_conn))
        processes.append(p)
        p.daemon = True
        p.start()

    # Receive the optimized objects before the processes are joined
    for i in range(n):
        optimizedObjects.append(connections[i].recv())

# Part the main thread is running
if __name__ == '__main__':
    processes = []
    optimizedObjects = []

    # Run the code on 6 processes
    getOptimumalObject(6)

    for i in range(len(processes)):
        processes[i].join()

    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.
    for i in range(len(optimizedObjects)):
        optimizeObject = optimizedObjects[i]
        if optimizeObject.GetTotalMass() < minimum:
            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()

    print(bestObject.GetTotalMass())
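Note that the code above receives from every parent connection before joining the workers; with a Pipe, conn.send() can also block on large objects until the other end reads them, so keeping the recv() calls ahead of join() matters. For comparison only (not part of the answer above), the same fan-out/fan-in could be written with multiprocessing.Pool, which handles returning the results. This is just a sketch: it assumes SupportStructure objects are picklable, leaves out the randomized initial conditions, and optimizeStructure is an illustrative helper name, not part of the original code:

from multiprocessing import Pool
from SupportCostModel.SupportStructure import SupportStructure, SupportType

# Runs in a worker process; the optimized object is pickled back to the parent.
def optimizeStructure(structure):
    structure.Optimize()
    return structure

if __name__ == '__main__':
    # One structure per worker; initial conditions could be randomized as above.
    structures = [SupportStructure(SupportType.Monopile) for _ in range(6)]
    with Pool(processes=6) as pool:
        optimizedObjects = pool.map(optimizeStructure, structures)
    bestObject = min(optimizedObjects, key=lambda obj: obj.GetTotalMass())
    print(bestObject.GetTotalMass())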