I'm running Celery 3.1.25.
My supervisord .conf file looks like this:
[group:proj]
programs=celerycam,celerybeat,celeryd-urgent,celeryd-default,celeryd-temp
priority=999
[program:gunicorn]
process_name=%(program_name)s
autorestart=true
command=/home/proj/.virutalenvs/proj_env/bin/gunicorn_django -c /home/proj/www/proj/gunicorn_conf.py
directory = /home/proj/www/proj
user = proj
stdout_logfile = /var/log/supervisor/proj/%(program_name)s.log
stderr_logfile = /var/log/supervisor/proj/error-%(program_name)s.log
stdout_logfile_maxbytes=25MB
stdout_logfile_backups=5
stderr_logfile_maxbytes=25MB
stderr_logfile_backups=5
[program:celerycam]
process_name=%(program_name)s
autorestart=true
command=/home/proj/.virtualenvs/proj/bin/python /home/proj/www/proj/manage.py celerycam
stdout_logfile = /var/log/supervisor/proj/%(program_name)s.log
stderr_logfile = /var/log/supervisor/proj/error-%(program_name)s.log
user = root
password = proj
stdout_logfile_maxbytes=25MB
stdout_logfile_backups=5
stderr_logfile_maxbytes=25MB
stderr_logfile_backups=5
[program:celeryd-temp]
process_name=%(program_name)s
autorestart=true
exitcodes=0,2
directory=/home/proj/www/proj
command=/home/proj/.virtualenvs/proj_env/bin/python /home/proj/www/proj/manage.py celery worker -E --loglevel=DEBUG -Q urgent -n urgent --concurrency=8 --maxtasksperchild=1
stdout_logfile = /var/log/supervisor/proj/%(program_name)s.log
stderr_logfile = /var/log/supervisor/proj/%(program_name)s.log
stdout_logfile_maxbytes=25MB
stdout_logfile_backups=8
stderr_logfile_maxbytes=25MB
stderr_logfile_backups=8
[program:celeryd-default]
process_name=%(program_name)s
autorestart=true
directory=/home/proj/www/proj
command=/home/proj/.virtualenvs/proj_env/bin/python /home/proj/www/proj/manage.py celery worker -E --loglevel=DEBUG -Q default -n default --concurrency=8 --maxtasksperchild=1
stdout_logfile = /var/log/supervisor/proj/%(program_name)s.log
stderr_logfile = /var/log/supervisor/proj/%(program_name)s.log
stdout_logfile_maxbytes=25MB
stdout_logfile_backups=5
stderr_logfile_maxbytes=25MB
stderr_logfile_backups=5
I start supervisord and everything seems fine, but when I check the status with
`sudo supervisorctl status`
I get the following:
proj:celerybeat RUNNING pid 13030, uptime 0:13:27
proj:celerycam RUNNING pid 13015, uptime 0:13:28
proj:celeryd-default RUNNING pid 18845, uptime 0:00:02
proj:celeryd-temp STARTING
If I check the status again, I get this instead:
proj:celerybeat RUNNING pid 13030, uptime 0:15:13
proj:celerycam RUNNING pid 13015, uptime 0:15:14
proj:celeryd-default STARTING
proj:celeryd-temp RUNNING pid 19512, uptime 0:00:01
Here is what `sudo supervisorctl tail proj:celeryd-default`
prints:
r removal in
version 4.0. Please use "group" instead (see the Canvas section in the userguide)
""")
Running a worker with superuser privileges when the
worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
/home/csanalytics/.virtualenvs/proj_env/local/lib/python2.7/site-packages/celery/task/sets.py:23: CDeprecationWarning:
celery.task.sets and TaskSet is deprecated and scheduled for removal in
version 4.0. Please use "group" instead (see the Canvas section in the userguide)
""")
I can run the commands from the supervisor file in my terminal as superuser without any problems, but for some reason they keep crashing under supervisor. Any ideas?
Answer 0 (score: 0)
The worker is being run with superuser privileges, which newer versions of Celery refuse to allow.
More specifically, a Celery 3.1+ worker will not start as root while it accepts messages serialized with pickle (that is exactly what the "Running a worker with superuser privileges" message in your log says).
You need to disable pickle serialization in your Celery configuration:
app.conf.update(
CELERY_ACCEPT_CONTENT = ['json'],
CELERY_TASK_SERIALIZER = 'json',
CELERY_RESULT_SERIALIZER = 'json',
)
and run it again.
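Alternatively, if you really do need the worker to run as root, the log message itself points at the escape hatch: set the C_FORCE_ROOT environment variable. A minimal sketch of how that could look in the supervisor program section (the variable name is from the log output above; the rest of the section is abbreviated):

```ini
[program:celerycam]
; Better: run as an unprivileged user instead of root
user = proj
; Or, only if root is unavoidable, override Celery's safety check:
environment = C_FORCE_ROOT="true"
```

Running workers as an unprivileged user (matching the `user = proj` you already use for gunicorn) is the safer fix, since a root worker consuming pickle messages can be made to execute arbitrary code.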