Celery + Redis - Django blocks when triggering a task with delay()

Asked: 2019-03-12 19:28:10

Tags: django redis celery

I set up Celery with Redis in my Django project. Scheduled tasks run without any problem. The issue appears when I trigger an asynchronous task with delay(): execution stops, as if it were stuck in the loop inside kombu.utils.retry_over_time.

I checked and Redis is up and running. I honestly don't know how to debug this.
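For reference, this is the kind of dispatch that hangs. The task below is a hypothetical example, not my actual code; calling delay() should publish the message to the Redis broker and return an AsyncResult immediately, but in my case it never returns:

from celery import shared_task

@shared_task
def add(x, y):
    return x + y

# e.g. from a Django view or the shell:
result = add.delay(2, 3)  # blocks here instead of returning an AsyncResult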

Here are the relevant package versions:

Django==2.1.2
celery==4.2.1
django-celery-beat==1.4.0
django-celery-results==1.0.4
redis==3.2.0
kombu==4.4.0

Settings

CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 1  # Redis DB number; if not provided, the default is 0
CELERY_REDIS_PASSWORD = ''

CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

CELERY_BROKER_URL = 'redis://{host}:{port}/{db}'.format(host=CELERY_REDIS_HOST, port=CELERY_REDIS_PORT, db=CELERY_REDIS_DB)
CELERY_RESULT_BACKEND = 'django-db'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json' # Result serialization format
CELERY_TASK_SERIALIZER = 'json' # String identifying the serializer to be used

CELERY_BROKER_TRANSPORT_OPTIONS = {
    'visibility_timeout': 3600, # 1 hour, default Redis visibility timeout
}
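For what it's worth, the 'django-db' result backend and the DatabaseScheduler come from django-celery-results and django-celery-beat respectively, so both apps also have to be in INSTALLED_APPS. In my project that part looks roughly like this (sketch; the rest of my INSTALLED_APPS is omitted):

INSTALLED_APPS = [
    # ... Django and project apps ...
    'django_celery_results',  # provides the 'django-db' result backend
    'django_celery_beat',     # provides the DatabaseScheduler referenced above
]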

How Celery and Celery Beat are launched

Shell script that adds Celery and Celery Beat to supervisor:

#!/usr/bin/env bash

# Create required directories
sudo mkdir -p /var/log/celery/
sudo mkdir -p /var/run/celery/

# Create group called 'celery'
sudo groupadd -f celery
# add the user 'celery' if it doesn't exist and add it to the group of the same name
id -u celery &>/dev/null || sudo useradd -g celery celery
# add permissions to the celery user for r+w to the folders just created
sudo chown -R celery:celery /var/log/celery/
sudo chown -R celery:celery /var/run/celery/

# Get django environment variables
celeryenv=`cat ./env_vars | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
celeryenv=${celeryenv%?}

# Create CELERY configuration script
celeryconf="[program:celeryd]
directory=/home/ubuntu/splityou/splityou
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery worker -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"

user=celery
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"


# Create CELERY BEAT configuration script
celerybeatconf="[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery beat -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"

directory=/home/ubuntu/splityou/splityou
user=celery
numprocs=1
stdout_logfile=/var/log/celerybeat.log
stderr_logfile=/var/log/celerybeat.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999

environment=$celeryenv"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /etc/supervisor/conf.d/celery.conf
echo "$celerybeatconf" | tee /etc/supervisor/conf.d/celerybeat.conf

# Enable supervisor to listen for HTTP/XML-RPC requests.
# supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
# Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
if ! grep -Fxq "[inet_http_server]" /etc/supervisor/supervisord.conf
  then
    echo "[inet_http_server]" | tee -a /etc/supervisor/supervisord.conf
    echo "port = 127.0.0.1:9001" | tee -a /etc/supervisor/supervisord.conf
fi

# Reread the supervisord config
sudo supervisorctl reread

# Update supervisord in cache without restarting all services
sudo supervisorctl update

# Sleep for 15 seconds to give enough time to previous supervisor instance to shutdown
# Source: https://stackoverflow.com/questions/50135628/celery-django-on-elastic-beanstalk-causing-error-class-xmlrpclib-fault/50154073#50154073
sleep 15

# Start/Restart celeryd through supervisord
sudo supervisorctl restart celeryd
sudo supervisorctl restart celerybeat

1 Answer:

Answer 0: (score: 1)

As pointed out in Celery's First steps with Django tutorial, the app object has to be imported in the proj/__init__.py module. This ensures the app is always loaded when Django starts, so that @shared_task uses it.

I had completely forgotten about that, so I fixed the problem by putting the following in __init__.py:

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app

__all__ = ('celery_app',)
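For completeness, the supervisor commands above point at config.celery.celery_app:app, i.e. a Celery app defined in a separate module. My setup follows the same tutorial, so that module looks roughly like the sketch below (the settings-module path shown is an assumption; adjust it to your project layout):

import os
from celery import Celery

# Tell Celery where the Django settings live (this path is an assumption)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('config')

# Read every CELERY_-prefixed setting from Django settings,
# matching the CELERY_ namespace used in the settings above
app.config_from_object('django.conf:settings', namespace='CELERY')

# Discover tasks.py modules in all installed Django apps
app.autodiscover_tasks()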