Celery does not respect the queue attribute

Time: 2013-05-16 17:32:36

Tags: celery

So, for some tasks on Celery 3.0.19, Celery is apparently not respecting the queue attribute and is instead shipping the task off to the default celery queue.

# This is a stupid test with the proprietary code ripped out.
import subprocess

from celery.task import task  # assuming the standard decorator; the real app/config setup was stripped out


@task
def run_chef_task(task_name, **env):
    if env is None:
        env = {}
    if task_name is not None:
        env['CHEF'] = task_name

    print env
    cmd = []
    if len(env):
        cmd = ['env']
        for key, value in env.items():
            if not isinstance(key, str) or not isinstance(value, str):
                raise TypeError(
                    "Environment Values must be strings ({0}, {1})"
                    .format(key, value))
            key = "ND" + key.upper()
            cmd.append('%s=%s' % (key, value))

    cmd.extend(['/root/chef/run_chef', 'noudata_default'])
    print cmd
    ret = " ".join(cmd)
    ret = subprocess.check_call(cmd)
    print 'CHECK'
    return ret, cmd

r = run_chef_task.apply_async(args=['mongo_backup'],
                              queue='my_special_queue_with_only_one_worker')
r.get()  # returns immediately

Go to Flower. Find the task. Find the worker the task ran on. See that the worker is different, and that the worker the task ran on is not the special worker. Confirm that Flower says 'special_worker' is consuming only from 'my_special_queue', and that the worker the task actually ran on is not consuming from 'my_special_queue'.
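
As a cross-check outside Flower, the same worker-to-queue mapping can be pulled from the workers themselves through Celery's remote-control inspect API. A minimal sketch (Celery 3.x, run from any machine that can reach the broker):

# Sketch: confirm which queues each worker actually consumes from.
from celery import current_app

inspector = current_app.control.inspect()
active_queues = inspector.active_queues() or {}
for worker_name, queues in active_queues.items():
    # Each entry is a dict describing a queue the worker is bound to.
    print worker_name, [q['name'] for q in queues]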

Now, here is the really interesting part:

Pull up rabbitmq-management on the broker (and confirm that the broker really is the broker): one message was sent through the broker on the correct queue to the correct worker (verified). Immediately afterwards, another message was sent on the celery queue.
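
For reference, the same per-queue traffic can be watched programmatically through the RabbitMQ management HTTP API instead of the web UI. A minimal sketch, assuming the management plugin is enabled on its usual port (15672 on RabbitMQ 3.x, 55672 on older releases) and that the default guest/guest credentials still work:

# Sketch: list queues and their message counts on the broker.
import base64
import json
import urllib2

BROKER_HOST = 'broker.example.com'  # placeholder hostname

request = urllib2.Request('http://%s:15672/api/queues' % BROKER_HOST)
request.add_header('Authorization',
                   'Basic ' + base64.b64encode('guest:guest'))
for queue in json.load(urllib2.urlopen(request)):
    print queue['name'], queue.get('messages'), queue.get('messages_ready')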

In the worker's log file, it says it accepted and completed the task:

[2013-05-16 02:24:15,455: INFO/MainProcess] Got task from broker: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c]
[2013-05-16 02:24:15,456: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x2479c08> (args:('noto.tasks.chef_tasks.run_chef_task', '0dba1107-2bb5-4c19-8df3-8a74d8e1234c', ['mongo_backup'], {}, {'utc': True, 'is_eager': False, 'chord': None, 'group': None, 'args': ['mongo_backup'], 'retries': 0, 'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}, 'expires': None, 'task': 'noto.tasks.chef_tasks.run_chef_task', 'callbacks': None, 'errbacks': None, 'hostname': 'manager1.i-6e958f0f', 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '0dba1107-2bb5-4c19-8df3-8a74d8e1234c'}) kwargs:{})
// This is output from the task
[2013-05-16 02:24:15,459: WARNING/PoolWorker-1] {'CHEF': 'mongo_backup'}

[2013-05-16 02:24:15,463: WARNING/PoolWorker-1] ['env', 'NDCHEF=mongo_backup', '/root/chef/run_chef', 'default']
[2013-05-16 02:24:15,477: DEBUG/MainProcess] Task accepted: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] pid:17210
...A bunch of boring debug logs repeating the registered tasks
[2013-05-16 02:31:45,061: INFO/MainProcess] Task noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] succeeded in 88.438395977s: (0, ['env', 'NDCHEF=mongo_backup',...

So it accepts the task, runs it, and also fires off another copy of it entirely on the celery queue to run at the same time, instead of just returning properly. The only thing I can think of is that this worker is the only one with the correct source; all the other workers have old source where the subprocess call is commented out, so they return more or less immediately.

Does anyone have any idea what is causing this? This is not the only task we see it happen with, since it seems to pick three random machines off the celery queue to run it on. Could something strange we are doing in our celeryconfig be causing this?
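
One cheap way to look for anything strange in the celeryconfig is to dump the routing-related settings straight from the running app. A minimal sketch using only standard Celery 3.x setting names:

# Sketch: print the settings that influence where tasks get routed.
from celery import current_app

conf = current_app.conf
for key in ('CELERY_DEFAULT_QUEUE', 'CELERY_DEFAULT_EXCHANGE',
            'CELERY_DEFAULT_ROUTING_KEY', 'CELERY_QUEUES',
            'CELERY_ROUTES', 'CELERY_CREATE_MISSING_QUEUES'):
    print key, '=', conf.get(key)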

1 Answer:

Answer 0 (score: 1):

Your TaskPool log suggests there is no explicit routing; note the empty routing_key and the default 'celery' exchange:

'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}

My guess is that the problem is the out-of-the-box automatic defaults. Consider testing explicit manual routing in your Celery configuration.

http://docs.celeryproject.org/en/latest/userguide/routing.html#manual-routing

For example:

CELERY_ROUTES = {
    "work-queue": {
        "queue": "work_queue",
        "binding_key": "work_queue"
    },
    "new-feeds": {
        "queue": "new_feeds",
        "binding_key": "new_feeds"
    },
}

CELERY_QUEUES = {
    "work_queue": {
        "exchange": "work_queue",
        "exchange_type": "direct",
        "binding_key": "work_queue",
    },
    "new_feeds": {
        "exchange": "new_feeds",
        "exchange_type": "direct",
        "binding_key": "new_feeds"
    },
}
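
As a follow-up sketch (using the example names from the config above, not the asker's real queue), the producer side would then send with matching routing options, and the worker consuming that queue would be started with -Q work_queue so it only listens there:

# Sketch: send the task with routing options that match the explicit config.
r = run_chef_task.apply_async(
    args=['mongo_backup'],
    queue='work_queue',
    exchange='work_queue',
    routing_key='work_queue',
)
print r.get()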