Infinite loop to keep consuming from a queue

Date: 2014-09-03 16:43:26

Tags: python rabbitmq

I am consuming data from a queue for processing. My goal is to keep the data flowing and to keep errors from crashing the application, so I log exceptions and try to let the program carry on. To do this I nested the consume call inside an infinite loop, but it does not seem to be working. I often come back to the program and find it sitting at " [x] Done", waiting, even though I can see plenty of data still in the queue.

Here is a snippet of my code:

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    doWork(body)
    print " [x] Done"
    ch.basic_ack(delivery_tag = method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='dataProcessingQueue')
while True:
    try:
        channel.start_consuming()
    except:
        time.sleep(10)

What am I doing wrong? If my queue has 3000 entries, this works through 10-15% of them and then just hangs for some reason. Am I using the while loop incorrectly?

2 Answers:

Answer 0 (score: 1)

You should do your error handling inside the callback. I'm not sure it is even legal to call start_consuming() again after an error (its internal state may be left in a bad state). You should also log the errors you get, so you can see what is happening and gradually refine the exception handler to catch only recoverable errors. I can't test this, so please excuse any small mistakes.

import logging
import traceback

# NOTE: Just a simple logging config here, you can get fancier
logging.basicConfig(level=logging.DEBUG)

def callback(ch, method, properties, body):
    logger = logging.getLogger('callback')
    try:
        logger.info(" [x] Received %r" % (body,))
        doWork(body)
        logger.info(" [x] Done")
    except Exception, e:
        # get granular over time as you learn what
        # errors you get because some things like
        # SyntaxError should not be dropped
        logger.error("Exception %s: %s" %(type(e),e))
        logger.debug(traceback.format_exc())
    finally:
        # set to always ack... even on failure
        ch.basic_ack(delivery_tag = method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='dataProcessingQueue')
channel.start_consuming()
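
For completeness: the snippet above assumes a channel object created elsewhere. A minimal sketch of that setup (my assumption, not part of the original post; it uses a broker on localhost and the pre-1.0 pika basic_consume signature used above) would be:

import pika

# Assumed setup: local RabbitMQ broker; declare the queue the consumer reads from.
# durable=True is an assumption about how the queue was originally declared.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='dataProcessingQueue', durable=True)

With that in place, the basic_qos / basic_consume / start_consuming lines above run against a live channel.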

Answer 1 (score: 0)

I see you are using RabbitMQ here. If so, this is what you need to do:

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    doWork(body)
    print " [x] Done"
    ch.basic_ack(delivery_tag = method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='dataProcessingQueue')
channel.start_consuming()

That is, without wrapping the start_consuming() call in a while True loop.

Reference: RabbitMQ Tutorial
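
If the original goal of a consumer that keeps running for a long time is still needed, one way to combine the two answers is to handle per-message errors inside the callback (as in Answer 0) and only rebuild the connection when the broker connection itself drops. This is just a sketch under those assumptions (localhost broker, pre-1.0 pika API), not something from either answer:

import time
import pika

while True:
    try:
        # Re-create the connection and channel each time around, instead of
        # calling start_consuming() again on a channel that may already be dead.
        connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
        channel = connection.channel()
        channel.basic_qos(prefetch_count=1)
        channel.basic_consume(callback, queue='dataProcessingQueue')
        channel.start_consuming()
    except pika.exceptions.AMQPConnectionError:
        # Only connection-level failures reach here; message-level errors are
        # already caught and logged inside the callback.
        time.sleep(10)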