RabbitMQ queues filling up with Celery tasks

Asked: 2015-02-21 20:03:12

Tags: python multithreading rabbitmq celery mnesia

I am using Celery to call multiple hardware units by their IP addresses. Each unit returns a list of values. The application code:

# create a list of tasks
modbus_calls = []
for site in sites:
    call = call_plc.apply_async((site.name, site.address), expires=120)  # expires after 2 minutes?
    modbus_calls.append(call)

# check that all tasks are complete (values returned) before moving on past the while loop
ready_list = [False]
while not all(ready_list):
    ready_list = []
    for task in modbus_calls:
        ready_list.append(task.ready())

# once here, all tasks have returned their values. use the task.get() method to obtain the list of values
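
For reference, the busy-wait over task.ready() can also be expressed with a Celery group, which blocks until every task has returned. This is only a sketch under the same assumptions as the code above (the same call_plc task and sites list), not necessarily how the original application is structured:

# sketch: gather all PLC results with a group instead of polling task.ready()
from celery import group

# one signature per site; expires is passed to apply_async as before
job = group(call_plc.s(site.name, site.address) for site in sites)
group_result = job.apply_async(expires=120)

# get() blocks until every task in the group has finished and returns
# the per-site value lists in the same order as `sites`
all_vals = group_result.get(timeout=120)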

In the tasks.py file, the call_plc task is defined as:

@app.task
def call_plc(sitename, ip_address):
    vals = pc.PLC_Comm().connect_to(sitename, ip_address)
    return vals

What happens: I can only run this application a certain number of times before RabbitMQ starts to crash (out of memory). Looking in /var/lib/rabbitmq/mnesia/rabbit@mymachine/queues, I see a bunch of queues with UUID-style names. These UUID names do not match the task IDs (learned by printing task.id in my application). Each time I run the application, n queues are added to this folder, where n = the number of sites to call.

The first time I run the application after resetting RabbitMQ, it adds n+1 queues.

How can I make these tasks/queues go away? Once I have the results, I no longer need the tasks.

task.forget() fails with NotImplementedError('backend does not implement forget.')

The task expires setting does not seem to have any effect. My celeryconfig file is as follows:

BROKER_URL = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'
CELERY_RESULT_BACKEND = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True
CELERY_AMQP_TASK_RESULT_EXPIRES = 120

1 Answer:

Answer 0 (score: 2):

It sounds like you don't want to use RabbitMQ as the result backend, only as the message broker. See this previous question: Queues with random GUID being generated in RabbitMQ server
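
If the results are still needed, the change that answer points at is to move the result backend off AMQP so each call no longer leaves behind its own RabbitMQ result queue. A minimal celeryconfig sketch, assuming a local Redis instance is available (the Redis URL and the choice of Redis are assumptions, not something stated in the answer):

BROKER_URL = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'
# assumption: keep RabbitMQ as the broker, store results in Redis instead of
# per-task RabbitMQ queues
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True
# with a non-amqp backend, stored results can also be expired or forgotten
CELERY_TASK_RESULT_EXPIRES = 120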