Celery: how to route failed tasks to a dead letter queue

Date: 2016-06-29 22:31:58

Tags: python rabbitmq celery

I'm new to Celery and I'm trying to integrate this task queue into my project, but I still can't figure out how Celery handles failed tasks. I'd like to keep all of those in an AMQP dead letter queue.

According to the docs here, it seems that raising Reject in a task with acks_late enabled produces the same effect as nacking the message, and then there are a few words about dead letter queues.
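If I read that correctly, a minimal sketch of the pattern would look something like this (the task name, broker URL and reject reason are placeholders, not my real setup):

from celery import Celery
from celery.exceptions import Reject

app = Celery('sketch', broker='amqp://guest@localhost:5672//')

@app.task(acks_late=True)
def always_reject():
    # With acks_late enabled, rejecting with requeue=False should make
    # RabbitMQ dead-letter the message, provided the queue was declared
    # with the x-dead-letter-* arguments.
    raise Reject('some reason', requeue=False)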

So I added a custom default queue to my celery config:

celery_app.conf.update(CELERY_ACCEPT_CONTENT=['application/json'],
                       CELERY_TASK_SERIALIZER='json',
                       CELERY_QUEUES=[CELERY_QUEUE,
                                      CELERY_DLX_QUEUE],
                       CELERY_DEFAULT_QUEUE=CELERY_QUEUE_NAME,
                       CELERY_DEFAULT_EXCHANGE=CELERY_EXCHANGE
                       )

and my kombu objects look like this:

CELERY_DLX_EXCHANGE = Exchange(CELERY_DLX_EXCHANGE_NAME, type='direct')
CELERY_DLX_QUEUE = Queue(CELERY_DLX_QUEUE_NAME, exchange=CELERY_DLX_EXCHANGE,
                         routing_key='celery-dlq')

DEAD_LETTER_CELERY_OPTIONS = {'x-dead-letter-exchange': CELERY_DLX_EXCHANGE_NAME,
                              'x-dead-letter-routing-key': 'celery-dlq'}

CELERY_EXCHANGE = Exchange(CELERY_EXCHANGE_NAME,
                           arguments=DEAD_LETTER_CELERY_OPTIONS,
                           type='direct')

CELERY_QUEUE = Queue(CELERY_QUEUE_NAME,
                     exchange=CELERY_EXCHANGE,
                     routing_key='celery-q')

The task I'm running is:

from celery import Task
from celery.exceptions import Reject


class HookTask(Task):
    acks_late = True

    def run(self, ctx, data):
        logger.info('{0} starting {1.name}[{1.request.id}]'.format(
            self.__class__.__name__.upper(), self))
        self.hook_process(ctx, data)

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        logger.error('task_id %s failed, message: %s', task_id, exc.message)

    def hook_process(self, t_ctx, body):
        # Build context
        ctx = TaskContext(self.request, t_ctx)
        logger.info('Task_id: %s, handling request %s', ctx.task_id, ctx.req_id)
        raise Reject('no_reason', requeue=False)

I did a little test with it, but got no result when raising the Reject exception.

Now I'm wondering if it would be a good idea to force the failed tasks' routing to the dead letter queue by overriding Task.on_failure. I think it would work, but I also think this solution is not so clean, because from what I've read Celery should do this on its own.
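Roughly, a sketch of what I have in mind (the manual republish from on_failure is my own guess at a workaround, not something the docs describe):

from celery import Task

class RouteToDLQTask(Task):
    acks_late = True

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Manually publish the failed task's payload to the dead letter
        # exchange declared above.
        with self.app.producer_or_acquire() as producer:
            producer.publish(
                {'task_id': task_id, 'args': args, 'kwargs': kwargs},
                exchange=CELERY_DLX_EXCHANGE_NAME,
                routing_key='celery-dlq')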

Thanks for your help.

2 Answers:

Answer 0 (score: 2)

I don't think you should add arguments=DEAD_LETTER_CELERY_OPTIONS to CELERY_EXCHANGE. You should add it to CELERY_QUEUE with queue_arguments=DEAD_LETTER_CELERY_OPTIONS.

The following example is what I did, and it works fine:

from celery import Celery
from kombu import Exchange, Queue
from celery.exceptions import Reject

app = Celery(
    'tasks',
    broker='amqp://guest@localhost:5672//',
    backend='redis://localhost:6379/0')

dead_letter_queue_option = {
    'x-dead-letter-exchange': 'dlx',
    'x-dead-letter-routing-key': 'dead_letter'
}

default_exchange = Exchange('default', type='direct')
dlx_exchange = Exchange('dlx', type='direct')

default_queue = Queue(
    'default',
    default_exchange,
    routing_key='default',
    queue_arguments=dead_letter_queue_option)

dead_letter_queue = Queue(
    'dead_letter', dlx_exchange, routing_key='dead_letter')

app.conf.task_queues = (default_queue, dead_letter_queue)

app.conf.task_default_queue = 'default'
app.conf.task_default_exchange = 'default'
app.conf.task_default_routing_key = 'default'


@app.task
def add(x, y):
    return x + y


@app.task(acks_late=True)
def div(x, y):
    try:
        z = x / y
        return z
    except ZeroDivisionError as exc:
        raise Reject(exc, requeue=False)

After the queue is created, you should see on the "Features" column that it shows the DLX (dead-letter-exchange) and DLK (dead-letter-routing-key) labels.


Note: if you have already created the queues in RabbitMQ, you should delete the previous ones first. This is because Celery will not delete an existing queue and re-create a new one with the new arguments.
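If you would rather do that cleanup from Python than from the management UI, a small kombu sketch like this should work (the queue and exchange names are assumed to match the example above; deleting drops any messages still in the queue):

from kombu import Connection, Exchange, Queue

with Connection('amqp://guest@localhost:5672//') as conn:
    stale = Queue('default', Exchange('default', type='direct'),
                  routing_key='default')
    # Delete the old queue so it can be redeclared with the
    # x-dead-letter-* arguments.
    stale.bind(conn).delete()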

Answer 1 (score: 1)

I faced a similar situation and ran into the same problem. I also wanted a solution based on configuration rather than hard-coded values. The solution proposed by Hengfeng Li was very helpful for understanding the mechanism and the concepts. But there was a problem with the declaration of the dead letter queue. Specifically, if you injected the DLQ into task_queues, Celery consumed the queue, so it was always empty. A manual way to declare the DL(X/Q) was therefore needed.

I used Celery's Bootsteps, since they give good control over the stage at which the code runs. My initial experiment was to create them exactly after the app's creation, but this left a stalled connection after the forking of processes and raised an ugly exception. With a bootstep that runs right after the Pool step, you can guarantee that it runs at the beginning of each worker, after it has forked and the connection pool is ready.

Finally, I created a decorator that converts uncaught exceptions into task rejections by re-raising with Celery's Reject. Special care is taken for cases where a task has already decided how it should be handled, e.g. with a Retry.

Here is a full working example. Try running the task div.delay(1, 0) and see how it works.

from celery import Celery
from celery.exceptions import Reject, TaskPredicate
from functools import wraps
from kombu import Exchange, Queue

from celery import bootsteps


class Config(object):

    APP_NAME = 'test'

    task_default_queue = '%s_celery' % APP_NAME
    task_default_exchange = "%s_celery" % APP_NAME
    task_default_exchange_type = 'direct'
    task_default_routing_key = task_default_queue
    task_create_missing_queues = False
    task_acks_late = True

    # Configuration for DLQ support
    dead_letter_exchange = '%s_dlx' % APP_NAME
    dead_letter_exchange_type = 'direct'
    dead_letter_queue = '%s_dlq' % APP_NAME
    dead_letter_routing_key = dead_letter_queue


class DeclareDLXnDLQ(bootsteps.StartStopStep):
    """
    Celery Bootstep to declare the DL exchange and queues before the worker starts
        processing tasks
    """
    requires = {'celery.worker.components:Pool'}

    def start(self, worker):
        app = worker.app

        # Declare DLX and DLQ
        dlx = Exchange(
            app.conf.dead_letter_exchange,
            type=app.conf.dead_letter_exchange_type)

        dead_letter_queue = Queue(
            app.conf.dead_letter_queue,
            dlx,
            routing_key=app.conf.dead_letter_routing_key)

        with worker.app.pool.acquire() as conn:
            dead_letter_queue.bind(conn).declare()


app = Celery('tasks', broker='pyamqp://guest@localhost//')
app.config_from_object(Config)


# Declare default queues
# We bypass the default mechanism that creates queues in order to
# declare special queue arguments for DLX support
default_exchange = Exchange(
    app.conf.task_default_exchange,
    type=app.conf.task_default_exchange_type)
default_queue = Queue(
        app.conf.task_default_queue,
        default_exchange,
        routing_key=app.conf.task_default_routing_key,
        queue_arguments={
            'x-dead-letter-exchange': app.conf.dead_letter_exchange,
            'x-dead-letter-routing-key': app.conf.dead_letter_routing_key
        })

# Inject the default queue in celery application
app.conf.task_queues = (default_queue,)

# Inject extra bootstep that declares DLX and DLQ
app.steps['worker'].add(DeclareDLXnDLQ)


def onfailure_reject(requeue=False):
    """
    When a task has failed it will raise a Reject exception so
    that the message will be requeued or marked for insertation in Dead Letter Exchange
    """

    def _decorator(f):
        @wraps(f)
        def _wrapper(*args, **kwargs):

            try:
                return f(*args, **kwargs)
            except TaskPredicate:
                raise   # Do not handle TaskPredicate like Retry or Reject
            except Exception as e:
                print("Rejecting")
                raise Reject(str(e), requeue=requeue)
        return _wrapper

    return _decorator


@app.task()
@onfailure_reject()
def div(x, y):
    return x / y

Edit: I updated the code to use Celery's new configuration schema (lowercase), since I found some compatibility issues in Celery 4.1.0.
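To check that a rejected message actually lands in the DLQ, one option is to peek at the queue with kombu, e.g. with this sketch that reuses the names derived from Config above:

from kombu import Connection, Exchange, Queue

dlx = Exchange('test_dlx', type='direct')
dlq = Queue('test_dlq', dlx, routing_key='test_dlq')

with Connection('pyamqp://guest@localhost//') as conn:
    message = dlq.bind(conn).get()  # returns a Message or None
    if message:
        print(message.payload)  # the dead-lettered task payload
        message.ack()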