I am querying an API that may return incomplete results. For each complete result, I want to start a new process.
Every few seconds I want to query the API again and check whether there are any new results. If so, I should start a new process (while the previous ones are still running), and so on. From the first query to the API, I know how many results to expect (which equals the number of processes I want to run).
Here is some code I am trying:
from bs4 import BeautifulSoup
import urllib
import time
from multiprocessing import Process

def someFunction(task):
    timeout = time.time() + 120*60  # 120 minutes from now
    while True:
        time.sleep(2)
        # do something
        if time.time() > timeout:
            break

if __name__ == '__main__':
    processes_started = []
    tasks = [1]  # just initialize so the 'while' loop can start; it will then be reassigned
    while len(processes_started) < len(tasks):
        r = urllib.urlopen(URL).read()
        soup = BeautifulSoup(r, "lxml")
        # 'tasks' will have the correct length from the 1st call, but may not yet
        # contain all the data needed, e.g. 'task.description'
        tasks = soup.find_all("task")
        for task in tasks:
            if task.description not in processes_started:
                processes_started.append(task.description)
                p = Process(target=someFunction, args=(task,))
                p.start()
                p.join()
        time.sleep(2)
However, the code above just waits for each process to finish before starting a new one, if any. What am I doing wrong?