I use Celery to update RSS feeds in my news aggregation site. I use one @task per feed, and things seem to work nicely.
There's a detail that I'm not sure I handle well, though: all feeds are updated once every minute with a @periodic_task, but what if a feed is still updating from the previous periodic task when a new one is launched? (for example if the feed is really slow, or offline, and the task is stuck in a retry loop)
Currently I store task results and check their state like this:
import socket
from datetime import timedelta

from celery.decorators import task, periodic_task

from aggregator.models import Feed

_results = {}

@periodic_task(run_every=timedelta(minutes=1))
def fetch_articles():
    for feed in Feed.objects.all():
        if feed.pk in _results:
            if not _results[feed.pk].ready():
                # The task is not finished yet
                continue
        _results[feed.pk] = update_feed.delay(feed)

@task()
def update_feed(feed):
    try:
        feed.fetch_articles()
    except socket.error as exc:
        update_feed.retry(args=[feed], exc=exc)
Maybe there's a more sophisticated/robust way of achieving the same effect with some Celery mechanism that I've missed?
Answer 0 (score: 41)
Based on MattH's answer, you could use a decorator like this:
import functools

from django.core.cache import cache

def single_instance_task(timeout):
    def task_exc(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            lock_id = "celery-single-instance-" + func.__name__
            acquire_lock = lambda: cache.add(lock_id, "true", timeout)
            release_lock = lambda: cache.delete(lock_id)
            # cache.add only succeeds if the key doesn't exist yet,
            # so only one caller gets past this check per timeout window.
            if acquire_lock():
                try:
                    return func(*args, **kwargs)
                finally:
                    release_lock()
        return wrapper
    return task_exc
Then use it like this:
@periodic_task(run_every=timedelta(minutes=1))
@single_instance_task(60 * 10)
def fetch_articles():
    ...  # yada yada
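Note that this lock is only as safe as the cache backend's add operation is atomic, so a backend like memcached is assumed here. A minimal sketch of the corresponding Django settings (the backend path and address are illustrative, not from the answer):

# Assumed Django settings fragment: memcached gives cache.add the atomic
# semantics the single_instance_task lock above relies on.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}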
Answer 1 (score: 28)
Answer 2 (score: 12)
Using https://pypi.python.org/pypi/celery_once seems to do the job really well, including reporting errors and testing uniqueness against some parameters.
You can do things like:
from time import sleep

from celery_once import QueueOnce

from myapp.celery import app

@app.task(base=QueueOnce, once=dict(keys=('customer_id',)))
def start_billing(customer_id, year, month):
    sleep(30)
    return "Done!"
which just needs the following settings in your project:
ONCE_REDIS_URL = 'redis://localhost:6379/0'
ONCE_DEFAULT_TIMEOUT = 60 * 60 # remove lock after 1 hour in case it was stale
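By default, per the celery_once docs, dispatching a duplicate while the first run still holds its lock raises an AlreadyQueued exception; configuring the task with once={'graceful': True} makes it return None instead. An illustrative sketch (the argument values are arbitrary):

from celery_once import AlreadyQueued

start_billing.delay(42, 2017, 8)
try:
    # Same customer_id -> same lock key, so this call is refused.
    start_billing.delay(42, 2017, 8)
except AlreadyQueued:
    print('start_billing is already queued for this customer')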
Answer 3 (score: 8)
If you're looking for an example that doesn't use Django, then try this example (caveat: it uses Redis instead, which I was already using anyway).
The decorator code is as follows (full credit to the author of the article, go read it):
import redis

REDIS_CLIENT = redis.Redis()

def only_one(function=None, key="", timeout=None):
    """Enforce only one celery task at a time."""

    def _dec(run_func):
        """Decorator."""

        def _caller(*args, **kwargs):
            """Caller."""
            ret_value = None
            have_lock = False
            lock = REDIS_CLIENT.lock(key, timeout=timeout)
            try:
                have_lock = lock.acquire(blocking=False)
                if have_lock:
                    ret_value = run_func(*args, **kwargs)
            finally:
                if have_lock:
                    lock.release()
            return ret_value

        return _caller

    return _dec(function) if function is not None else _dec
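It might then be applied like this (an illustrative sketch: the app object, broker URL, lock key, and timeout are all assumptions, not from the article):

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # assumed broker URL

@app.task
@only_one(key='single_fetch', timeout=60 * 5)
def fetch_articles():
    ...  # only one worker at a time makes it past the Redis lock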
Answer 4 (score: 2)
I wonder why nobody has mentioned using celery.app.control.inspect().active() to get the list of currently running tasks. Is it not real time? Because otherwise it would be very easy to implement, for instance:
from functools import wraps

def unique_task(callback, *decorator_args, **decorator_kwargs):
    """
    Decorator to ensure only one instance of the task is running at once.
    """
    @wraps(callback)
    def _wrapper(celery_task, *args, **kwargs):
        active_queues = celery_task.app.control.inspect().active()
        if active_queues:
            for queue in active_queues:
                for running_task in active_queues[queue]:
                    # Discard the currently running task from the list.
                    if (celery_task.name == running_task['name']
                            and celery_task.request.id != running_task['id']):
                        return f'Task "{callback.__name__}()" cancelled! already running...'
        return callback(celery_task, *args, **kwargs)
    return _wrapper
Then just apply the decorator to the corresponding tasks:
@celery.task(bind=True)
@unique_task
def my_task(self):
    # Task executed once at a time.
    pass
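One caveat on the author's own question: inspect() is a broadcast that waits up to a timeout (1 second by default) for workers to reply, so a slow worker is silently missed, and there is still a small race window between the check and the task body starting. The wait can at least be lengthened; inside _wrapper above one might write, for instance:

# Give workers up to 5 seconds to report their active tasks.
active_queues = celery_task.app.control.inspect(timeout=5.0).active()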
Answer 5 (score: 0)
A solution for Celery working on a single host with concurrency greater than 1, without dependencies such as Redis. Other kinds of file-based locks (ones that merely create and check for a lock file) don't work when concurrency is greater than 1.
from datetime import datetime, timedelta
from fcntl import flock, LOCK_EX, LOCK_NB
from hashlib import md5
from os.path import join
from time import sleep

from celery.task import PeriodicTask

class Lock(object):
    def __init__(self, filename):
        self.f = open(filename, 'w')

    def __enter__(self):
        try:
            # Non-blocking exclusive lock: fails immediately if already held.
            flock(self.f.fileno(), LOCK_EX | LOCK_NB)
            return True
        except IOError:
            pass
        return False

    def __exit__(self, *args):
        self.f.close()

class SinglePeriodicTask(PeriodicTask):
    abstract = True
    run_every = timedelta(seconds=1)

    def __call__(self, *args, **kwargs):
        # One lock file per task name, so different tasks don't block each other.
        lock_filename = join('/tmp', md5(self.name.encode()).hexdigest())
        with Lock(lock_filename) as is_locked:
            if is_locked:
                super(SinglePeriodicTask, self).__call__(*args, **kwargs)
            else:
                print('already working')

class SearchTask(SinglePeriodicTask):
    restart_delay = timedelta(seconds=60)

    def run(self, *args, **kwargs):
        print(self.name, 'start', datetime.now())
        sleep(5)
        print(self.name, 'end', datetime.now())
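A nice property of flock is that the OS drops the lock as soon as the file is closed, including when a worker process dies, so there are no stale lock files to clean up. A quick illustrative check of the Lock class (Linux flock semantics assumed; a second handle on the same file is refused even within one process):

with Lock('/tmp/demo.lock') as first:
    assert first is True
    with Lock('/tmp/demo.lock') as second:
        assert second is False  # the outer handle still holds the lock
with Lock('/tmp/demo.lock') as again:
    assert again is True  # released when the outer file was closed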