APScheduler skips job executions because the maximum number of running instances is reached

Asked: 2015-10-29 03:50:59

Tags: python subprocess threadpool python-multithreading apscheduler

I am trying to run a periodic job with APScheduler using an IntervalTrigger. I deliberately set the maximum number of running instances to 1 because I do not want the jobs to overlap.

The problem is that after a while the scheduler starts reporting that the maximum number of running instances for the job has been reached, even though the previous run was reported to have completed successfully. I found this in the logs:

    2015-10-28 22:17:42,137 INFO     Running job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:18:42 VET)" (scheduled at 2015-10-28 22:17:42-04:30)
    2015-10-28 22:17:44,157 INFO     Job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:18:42 VET)" executed successfully
    2015-10-28 22:18:42,335 WARNING  Execution of job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:18:42 VET)" skipped: maximum number of running instances reached (1)
    2015-10-28 22:19:42,171 WARNING  Execution of job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:19:42 VET)" skipped: maximum number of running instances reached (1)
    2015-10-28 22:20:42,181 WARNING  Execution of job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:20:42 VET)" skipped: maximum number of running instances reached (1)
    2015-10-28 22:21:42,175 WARNING  Execution of job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:21:42 VET)" skipped: maximum number of running instances reached (1)
    2015-10-28 22:22:42,205 WARNING  Execution of job "ping (trigger: interval[0:01:00], next run at: 2015-10-28 22:22:42 VET)" skipped: maximum number of running instances reached (1)

As you can see in the log, the ping job is reported to have executed successfully, yet every execution from that point on is skipped shortly after it is triggered.
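For readers unfamiliar with this warning: with `max_instances=1`, the scheduler refuses to launch a new run while it believes a previous run is still in flight. That guard can be mimicked conceptually with a non-blocking lock; this is an illustrative sketch of the behaviour, not APScheduler's actual implementation:

```python
import threading

def skip_if_running(func):
    """Conceptual stand-in for max_instances=1: skip a new run while a
    previous invocation of func has not returned yet."""
    lock = threading.Lock()
    def wrapper(*args, **kwargs):
        # A non-blocking acquire fails while another run holds the lock.
        if not lock.acquire(blocking=False):
            return 'skipped'  # "maximum number of running instances reached"
        try:
            return func(*args, **kwargs)
        finally:
            lock.release()
    return wrapper
```

In the scenario above, the job is reported as finished, so the equivalent of this lock should have been released; the warnings suggest the scheduler nevertheless still considers the previous run active.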

This is the code I use to schedule the job:

    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.executors.pool import ThreadPoolExecutor
    from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

    executors = {'default': ThreadPoolExecutor(10)}
    jobstores = {'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')}
    self.scheduler = BackgroundScheduler(executors=executors, jobstores=jobstores)
    ...
    self.scheduler.add_job(func=func,
                           trigger=trigger,
                           kwargs=kwargs,
                           id=plan_id,
                           name=name,
                           misfire_grace_time=misfire_grace_time,
                           replace_existing=True)

The scheduled function itself starts a number of threads that run a ping command against several network nodes and save the results to a file:

    threads = []
    for link in links:
        thread = Thread(target=ping_test, args=(link, count, interval, timeout))
        threads.append(thread)
        thread.start()
    for thread in threads:
        thread.join()
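The fan-out above can be made self-contained for experimentation. Here `ping_test` is a trivial stub (the real one shells out to ping and writes results to a file), and the sample `links` and parameters are assumptions; passing a timeout to `join()` is one way to guarantee the job function returns before the next trigger fires even if a worker hangs:

```python
import threading

results = {}

# Hypothetical stand-in for the real ping_test worker described above.
def ping_test(link, count, interval, timeout):
    results[link] = 'ok'

links = ['node-a', 'node-b', 'node-c']   # assumed sample nodes
count, interval, timeout = 4, 1, 2       # assumed sample parameters

threads = [threading.Thread(target=ping_test,
                            args=(link, count, interval, timeout))
           for link in links]
for t in threads:
    t.start()
# join(timeout) bounds how long the job can block on its workers, so a
# hung worker cannot keep the job "running" past the next trigger.
for t in threads:
    t.join(timeout=timeout)
```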

Note that the ping timeout is set to a value well below the trigger interval, so the job cannot still be executing when the next run fires.

Any insight into this problem is greatly appreciated.

0 Answers