I have a function that starts a sample job and updates the database accordingly. My function looks like this:
import uuid

from celery import chain

def schedule_sample_job(hosts):
    parent_job_id = 'SampleBatchJob|%s|Phantom' % uuid.uuid1()
    child_jobs = []
    host_entries = []
    for host in hosts:
        job_id = 'SampleJob|%s|Phantom' % uuid.uuid1()
        res = chain(
            add.si(2, 2),
            add.si(3, 3),
            throw_exception.si('4'),
            mul.si(4, 4),
            mul.si(5, 5)
        )
        capacity_task = res.apply_async(
            serializer='json',
            link=[change_job_status.s(job_id, 'SUCCESS')],
            link_error=[change_job_status.s(job_id, 'FAILED'),
                        change_parent_job_status.s(parent_job_id)]
        )
        host_entry = {'host': host}
        host_entries.append(host_entry)
        celery_util = CeleryUtil()
        celery_util.store_chain_job(
            job_id,
            capacity_task,
            parent_job_id,
            'anonymous',
            [host_entry]
        )
        child_jobs.append(job_id)
    celery_util = CeleryUtil()
    celery_util.store_chain_job(parent_job_id, child_jobs, None, 'anonymous', host_entries)
    job_status = celery_util.build_job_status(parent_job_id)
    return job_status
As you can see, I start one job per host and pass multiple tasks to link_error. I expected those error callbacks to behave like a chain, where the second task runs only after the first one finishes executing. That is not what happens in my program. Any help would be highly appreciated.
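To make the expected behavior concrete, here is a minimal pure-Python sketch (no Celery, no broker) of the two semantics. As far as I understand, when link_error is given a list, each callback is fired independently against the failed task, with no dependency or ordering between them; what I want instead is the "chained" variant, where each callback runs only after the previous one completes. The function and job names here are made up for illustration.

```python
events = []

def mark_child_failed(job_id):
    # Stand-in for change_job_status(job_id, 'FAILED').
    events.append('child:%s' % job_id)
    return job_id

def mark_parent_failed(job_id):
    # Stand-in for change_parent_job_status(parent_job_id).
    events.append('parent:%s' % job_id)
    return job_id

def run_independent(callbacks, failed_job_id):
    # Mirrors link_error=[cb1, cb2] as I observe it: every callback
    # is invoked on its own with the failed task's id; there is no
    # dependency between them.
    return [cb(failed_job_id) for cb in callbacks]

def run_chained(callbacks, failed_job_id):
    # Mirrors the behavior I want: each callback runs strictly after
    # the previous one has finished, with its result threaded through,
    # like the tasks in a Celery chain.
    value = failed_job_id
    results = []
    for cb in callbacks:
        value = cb(value)
        results.append(value)
    return results

run_chained([mark_child_failed, mark_parent_failed], 'job-1')
print(events)  # ['child:job-1', 'parent:job-1']
```

In other words, I am looking for a way to get the second ordering (child status first, then parent status, strictly in sequence) from link_error, rather than two unordered, independent callbacks.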