Flask + Celery as a daemon

Date: 2019-02-22 00:00:14

Tags: python docker flask celery

I'm trying to learn Python Flask and want to use Celery with it. The distributed tasks work fine, but now I want to configure the worker to run as a daemon, as described in the Celery documentation. However, the container fails with the error celery_worker_1 exited with code 0.

Project structure:

celery
|-- flask-app
|   `-- app.py
|-- worker
|   |-- celeryd
|   |-- celeryd.conf
|   |-- Dockerfile
|   |-- start.sh
|   `-- tasks.py
`-- docker-compose.yml
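The contents of docker-compose.yml are not shown in the question. A minimal sketch of what it presumably contains, inferred from the service name redis in the broker URL and from the directory layout (all service names, build contexts, and ports here are assumptions):

```yaml
version: '3'
services:
  redis:
    image: redis:alpine
  flask-app:
    build: ./flask-app
    ports:
      - "5000:5000"
    depends_on:
      - redis
  worker:
    build: ./worker
    depends_on:
      - redis
```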

flask-app/app.py:

from flask import Flask
from flask_restful import Api, Resource

from celery import Celery

celery = Celery(
                'tasks',
                broker='redis://redis:6379',
                backend='redis://redis:6379'
)

app = Flask(__name__)
api = Api(app)

class add_zahl(Resource):
    def get(self):
        zahl = 54
        task = celery.send_task('mytasks.add', args=[zahl])

        return {'message': f"Prozess {task.id} gestartet, input {zahl}"}, 200

api.add_resource(add_zahl, "/add")

if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True)

worker/tasks.py:

from celery import Celery
import requests
import time
import os
from dotenv import load_dotenv

basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))

celery = Celery(
                'tasks',
                broker='redis://redis:6379',
                backend='redis://redis:6379'
)

@celery.task(name='mytasks.add')
def send_simple_message(zahl):
    time.sleep(5)
    result = zahl * zahl
    return result

if __name__ == '__main__':
    celery.start()

Dockerfile:

FROM python:3.6-slim

RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt

COPY . /worker/

COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd

COPY celeryd.conf /etc/default/celeryd
RUN chown root:root /etc/default/celeryd

RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery

RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery

RUN chmod u+x /worker/start.sh
ENTRYPOINT /worker/start.sh

celeryd.conf:

CELERYD_NODES="worker1"
CELERY_BIN="/worker/tasks"
CELERY_APP="worker.tasks:celery"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1

start.sh:

#!/bin/sh
exec celery multi start worker1 -A worker --app=worker.tasks:celery 

The celeryd init script is taken from the Celery repo: https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd

docker inspect output:

docker inspect 50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765

[
    {
        "Id": "50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765",
        "Created": "2019-02-21T23:20:15.017156266Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "/worker/start.sh"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-02-21T23:20:40.375566345Z",
            "FinishedAt": "2019-02-21T23:20:41.162618701Z"
        },

Sorry for the "spam", but I just can't get this solved.

EDIT EDIT EDIT

I added the suggested CMD line, but now the worker won't start at all. I'm still struggling to find a solution. Any hints? Thanks, everyone.

FROM python:3.6-slim

RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt

COPY . /worker/

COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd

COPY celeryd.conf /etc/default/celeryd
RUN chown -R root:root /etc/default/celeryd

RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery

RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery

CMD ["celery", "worker", "--app=worker.tasks:celery"]

2 Answers:

Answer 0 (score: 2):

Whenever a Docker container's entrypoint exits (or, if you have no entrypoint, its main command), the container exits. A corollary of this is that the main process in a container can't be a command like celery multi, which spawns some background work and returns immediately; you need to use a command like celery worker that runs in the foreground.

I would probably replace the last two lines of the Dockerfile with:

CMD ["celery", "worker", "--app=worker.tasks:celery"]

Keeping the entrypoint script and changing it to run an equivalent foreground celery worker command would also do the job.
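For reference, such a foreground variant of start.sh might look like the sketch below (the --loglevel flag is optional and added here for illustration):

```sh
#!/bin/sh
# Run a single worker in the foreground with exec, so celery replaces the
# shell as the container's main process and the container stays alive
exec celery worker --app=worker.tasks:celery --loglevel=INFO
```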

Answer 1 (score: 1):

You can also use supervisord to manage your Celery worker. As a bonus, supervisord will also monitor your worker and restart it if it dies. Below is an example extracted from a working image, adapted to your situation...

File supervisord.conf:

[supervisord]
nodaemon=true

[program:celery]
command=celery worker -A proj --loglevel=INFO
directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
stopasgroup=true
priority=1000

File start.sh:

#!/bin/bash
set -e
exec /usr/bin/supervisord -c /etc/supervisor/supervisord.conf

File Dockerfile:

# Your other Dockerfile content here

ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start.sh"]