Celery daemon not working on CentOS 7

Date: 2015-05-08 21:32:21

Tags: python celery daemon centos7 systemd

I'm trying to run the celery daemon on CentOS 7, which uses systemd / systemctl. It doesn't work.

  • I tried a non-daemon case and it worked.
  • When I run ~mytask it freezes on the client machine, and nothing gets logged on the server running the celery daemon.
  • I noticed that no celery process is actually running.

Any suggestions on how to troubleshoot this?

Here is my daemon default configuration:

CELERYD_NODES="localhost.localdomain"
CELERY_BIN="/tmp/myapp/venv/bin/celery"
CELERY_APP="pipeline"
CELERYD_OPTS="--broker=amqp://192.168.168.111/"
CELERYD_LOG_LEVEL="INFO"
CELERYD_CHDIR="/tmp/myapp"
CELERYD_USER="root"
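
For reference, a foreground (non-daemon) invocation pieced together from the settings above — roughly the case that did work — would look something like this (paths, app name and broker address are simply the values from this config):

cd /tmp/myapp
/tmp/myapp/venv/bin/celery worker -A pipeline --broker=amqp://192.168.168.111/ --loglevel=INFO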

Note: I'm starting the daemon with:

sudo /etc/init.d/celeryd start

I got my celery daemon script from: https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd

I also tried: https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd but this one gives me an error when trying to start the daemon:

systemd[1]: Starting LSB: celery task worker daemon...
celeryd[19924]: basename: missing operand
celeryd[19924]: Try 'basename --help' for more information.
celeryd[19924]: Starting : /etc/rc.d/init.d/celeryd: line 193: multi: command not found
celeryd[19924]: [FAILED]
systemd[1]: celeryd.service: control process exited, code=exited status=1
systemd[1]: Failed to start LSB: celery task worker daemon.
systemd[1]: Unit celeryd.service entered failed state.
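
To get more detail on a failure like this, the usual systemd checks apply (the unit name celeryd is taken from the log above):

systemctl status celeryd.service
journalctl -u celeryd.service -n 50 --no-pager
ps aux | grep [c]elery    # confirm whether any worker process exists at all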

2 Answers:

Answer 0 (score: 4):

As @ChillarAnand answered before, please don't use celeryd.

But actually, daemonizing celery with celery multi under systemd is not as simple as his answer suggests.

Here are my working and (I think) non-obvious examples.

These have been tested on CentOS 7.1.1503 with celery 3.1.23 (Cipater) running in a virtualenv, using the tasks.py example app from the Celery tutorial.

Running a single worker

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! telling systemd about the PID file below is REQUIRED in this case!
# (you will still get a warning "PID file /var/run/celery/single.pid not readable (yet?) after start." from systemd but service will in fact be starting, stopping and restarting properly. I haven't found a way to get rid of this warning.)
PIDFile=/var/run/celery/single.pid

# !!! using --pidfile option here and below is REQUIRED in this case!
# !!! also: don't use "%n" in pidfile or logfile paths - you will get these files named after the systemd service instead of after the worker (?)
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start single-worker -A tasks --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target
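
Assuming the unit above is saved as e.g. /etc/systemd/system/celery.service (the file name is only an example), it can be installed and started like this. Note that /var/log/celery has to exist beforehand, since RuntimeDirectory only takes care of /var/run/celery:

sudo mkdir -p /var/log/celery && sudo chown vagrant:vagrant /var/log/celery
sudo systemctl daemon-reload
sudo systemctl enable celery.service
sudo systemctl start celery.service
systemctl status celery.service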

Running multiple workers

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! in this case DON'T set PIDFile or use --pidfile or --logfile below or it won't work!
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start 3 -A tasks "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait 3

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart 3

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target

(Note that I'm running the workers with -c / --concurrency > 1, but it also works set to 1 or left at the default. It should also work without a virtualenv, though I strongly recommend using one.)
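
To check that the workers actually came up with either unit, something like the following can be run from the project directory (app name tasks as in the examples above):

cd /home/vagrant/celery_example
venv/bin/celery -A tasks status
venv/bin/celery -A tasks inspect active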

I don't really understand why systemd can't guess the forked process's PID in the first case, or why putting the pidfiles in a specific location breaks the second case, so I filed a ticket here: https://github.com/celery/celery/issues/3459. If I get an answer or come up with an explanation myself, I'll post it here.

Answer 1 (score: 2):

{"count":0}已被删除。如果您能够以非守护进程模式运行,请说

celeryd

您可以使用celery multi

简单地对其进行守护
celery worker -l info -A my_app -n my_worker

话虽如此,如果您仍想使用celery multi my_worker -A my_app -l info try these steps
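
As a minimal sketch of the full cycle with celery multi (worker and app names reused from the command above):

# start a named worker in the background
celery multi start my_worker -A my_app -l info
# print the exact worker command that multi runs for this node
celery multi show my_worker
# stop it again, waiting for running tasks to finish
celery multi stopwait my_worker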