Celery 3.1.19 creates too many threads per worker, overloading the server (100% CPU) until no more threads can be created

Posted: 2015-12-09 18:44:46

Tags: rabbitmq celery

Use case: I use Celery in a Django project (version 1.6) to schedule tasks that mostly write to the database. I have a single custom queue that the celery beat scheduler places tasks on, and one celery worker listening on this queue with a concurrency of 8.

Problem: Each of the 8 individual worker processes starts creating threads that are never reclaimed (my guess). This leads to far too many threads (I have seen the count reach 20k). Within 4-5 hours, the thread count touches 10k!

The error I see: can't start new thread.

A Python traceback of who is starting the new threads shows me that a call to Django's save() creates a new thread. "adgroup" here is a Django model object:

[2015-12-03 18:40:17,133: WARNING/Worker-3] adgroup.save(update_fields=['bids_today', 'impressions_today', 'spent_today', 'last_metric_update_time'])
[2015-12-03 18:40:17,887: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/django/db/models/base.py", line 545, in save
[2015-12-03 18:40:17,887: WARNING/Worker-3] force_update=force_update, update_fields=update_fields)
[2015-12-03 18:40:18,715: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/django/db/models/base.py", line 582, in save_base
[2015-12-03 18:40:18,716: WARNING/Worker-3] update_fields=update_fields, raw=raw, using=using)
[2015-12-03 18:40:18,716: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 185, in send
[2015-12-03 18:40:18,716: WARNING/Worker-3] response = receiver(signal=self, sender=sender, **named)
[2015-12-03 18:40:19,300: INFO/MainProcess] Task ExtendTV.celery_tasks.stats_collector.collectAdGroupMetricsTask[2ae52b3d-77b9-46d3-93ac-d7fad9b96382] succeeded in 26.486441362s: None
[2015-12-03 18:40:19,395: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/haystack/signals.py", line 48, in handle_save
[2015-12-03 18:40:19,593: WARNING/Worker-3] index.update_object(instance, using=using)
[2015-12-03 18:40:19,593: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/haystack/indexes.py", line 274, in update_object
[2015-12-03 18:40:19,593: WARNING/Worker-3] backend.update(self, [instance])
[2015-12-03 18:40:19,593: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/haystack/backends/whoosh_backend.py", line 208, in update
[2015-12-03 18:40:20,515: WARNING/Worker-3] writer.commit()
[2015-12-03 18:40:20,516: WARNING/Worker-3] File "/home/ec2-user/venv/local/lib/python2.7/dist-packages/whoosh/writing.py", line 1043, in commit
[2015-12-03 18:40:21,318: WARNING/Worker-3] self.start()
[2015-12-03 18:40:21,642: WARNING/Worker-3] File "/usr/lib64/python2.7/threading.py", line 748, in start
[2015-12-03 18:40:22,340: WARNING/Worker-3] _start_new_thread(self.__bootstrap, ())
[2015-12-03 18:40:22,340: WARNING/Worker-3] error: can't start new thread

Other information: As seen from the graph, memory usage is completely within the normal range. This "thread problem" did not exist in the previous celery version, 3.0.x; however, memory usage became very high there.

The celery command I use to create a worker:

celery -A ProjectName worker -l DEBUG -Q ExampleQueueName

The celery settings I use:

CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_EXCHANGE_TYPE = 'direct'
CELERY_DEFAULT_ROUTING_KEY = 'default'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_TASK_RESULT_EXPIRES=60*60*24
CELERYD_PREFETCH_MULTIPLIER = 128

Other relevant setup: using rabbitmq 3.5.4 as the message broker.

Update:

def collectAdGroupMetricsTask(*args, **kwargs):
    try:
        adgroup = AdGroup.objects.get(id=kwargs.get("adgroupID"))
        collectAdGroupMetrics(adgroup)
    except Exception as e:
        logger.error("Could not retrieve AdGroup for collectAdGroupMetrics. " + str(e))
    return

def collectAdGroupMetrics(adgroup, currDate=None):
    Value1=function1_making_another_db_call()
    Value2=function2_making_another_db_call()
    adgroup.fieldname1 = Value1
    adgroup.fieldname2 = Value2    
    adgroup.save(update_fields=['fieldname1', 'fieldname2'])

An example of a worker process with lots of threads.
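To reproduce this observation, here is a minimal diagnostic sketch (not from the original post) that counts the live threads inside a Python process using only the standard library:

```python
import threading

def thread_report():
    """Return the number of live threads and their names in this process."""
    threads = threading.enumerate()
    return len(threads), [t.name for t in threads]

count, names = thread_report()
# The main interpreter thread ("MainThread") is always in the list.
print(count, names)
```

From a shell, the same count for a running worker can be read externally, e.g. on Linux with `ps -o nlwp -p <pid>`.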

2 Answers:

Answer 0 (score: 0)

whoosh (the Python package) was trying to acquire a write lock and waiting, which caused so many threads to be created. Hence I removed whoosh from the list of installed apps in Django. I also used the maxtasksperchild setting in celery to keep memory from growing continuously.
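For reference, the max-tasks-per-child limit mentioned above can be set either on the worker command line or in the Django settings (names per the Celery 3.x docs; the value 100 is an arbitrary example):

```shell
# Recycle each worker child process after it has executed 100 tasks:
celery -A ProjectName worker -l DEBUG -Q ExampleQueueName --maxtasksperchild=100

# Or equivalently, in the Django settings module:
# CELERYD_MAX_TASKS_PER_CHILD = 100
```

Recycling child processes releases any memory or thread handles a leaky library has accumulated, at the cost of periodically re-forking the worker.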

Answer 1 (score: 0)

  • First, install the gevent package in the Python virtual environment.
  • Next, make a small change to the command that runs celery.
  • Specifically, append the argument --pool gevent. By default, celery uses the "prefork" pool, which seems to have some bugs.
  • After switching pools, also lower the number of celery child processes (the concurrency).
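The steps above would look roughly like this (the command shape is based on the question's own worker invocation; the concurrency value is an arbitrary example):

```shell
# Install gevent into the project's virtualenv:
pip install gevent

# Start the worker with the gevent pool instead of the default prefork pool:
celery -A ProjectName worker -l DEBUG -Q ExampleQueueName --pool gevent --concurrency 100
```

Note that with the gevent pool, --concurrency sets the number of greenlets rather than OS processes, so tasks should be I/O-bound (like the database writes in this question) to benefit.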