The application consists of: Django, Redis, Celery, Docker, Postgres.
Everything worked smoothly before I moved the project into Docker, but as soon as it runs in containers, errors start to occur. At first everything works fine, but after a while I get the following error:
celery-beat_1 | ERROR: Pidfile (celerybeat.pid) already exists.
I have been struggling with this for a while, but now I am really giving up. I have no idea what is wrong.
Dockerfile:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY /scripts/startup/entrypoint.sh entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
COPY . /opt/services/djangoapp/src
RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;
RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py
RUN cd app && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]
docker-compose.yml:
version: '3'

services:

  djangoapp:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
      - .:/code
    restart: always
    networks:
      - nginx_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
      - redis_network
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - migration
      - redis

  # base redis server
  redis:
    image: "redis:alpine"
    restart: always
    ports:
      - "6379:6379"
    networks:
      - redis_network
    volumes:
      - redis_data:/data

  # celery worker
  celery:
    build: .
    command: >
      bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  # migrations needed for proper db functioning
  migration:
    build: .
    command: >
      bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
    networks:
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing

  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    restart: always
    depends_on:
      - djangoapp
    networks:
      - nginx_network

  database1: # comment when testing
    image: postgres:10 # comment when testing
    env_file: # comment when testing
      - config/db/database1_env # comment when testing
    networks: # comment when testing
      - database1_network # comment when testing
    volumes: # comment when testing
      - database1_volume:/var/lib/postgresql/data # comment when testing

  # test_database1: # uncomment when testing
  #   image: postgres:10 # uncomment when testing
  #   env_file: # uncomment when testing
  #     - config/db/test_database1_env # uncomment when testing
  #   networks: # uncomment when testing
  #     - test_database1_network # uncomment when testing
  #   volumes: # uncomment when testing
  #     - test_database1_volume:/var/lib/postgresql/data # uncomment when testing

networks:
  nginx_network:
    driver: bridge
  database1_network: # comment when testing
    driver: bridge # comment when testing
  # test_database1_network: # uncomment when testing
  #   driver: bridge # uncomment when testing
  redis_network:
    driver: bridge

volumes:
  database1_volume: # comment when testing
  # test_database1_volume: # uncomment when testing
  static_volume: # <-- declare the static volume
  media_volume: # <-- declare the media volume
  static_local_volume:
  media_local_volume:
  redis_data:
Please ignore "test_database1_volume", as it exists only for testing purposes.
Answer 0 (score: 3)
I believe you have a pidfile in your project directory ./ which then gets mounted into the container when you run it (so the RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \; in the Dockerfile has no effect).
You can use celery --pidfile=/opt/celeryd.pid to specify a path that is not mounted, so the file is not mirrored on the host.
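Applied to the compose file above, that could look like the following sketch; /tmp/celerybeat.pid is an assumed example path, chosen only because no volume in the file mounts /tmp:

```yaml
celery-beat:
  build: .
  command: >
    bash -c "cd app && celery -A example beat --pidfile=/tmp/celerybeat.pid"
```

Since /tmp lives only in the container's writable layer, the pidfile disappears with the container instead of lingering in the bind-mounted source directory.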
Answer 1 (score: 3)
Though not elegant at all, I found that adding:
celerybeat.pid
to my .dockerignore file fixed the issue in question.
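For reference, a minimal .dockerignore carrying that entry might look like this (the entry is from the answer; the comment is added):

```
# keep a stale pidfile on the host out of the image's build context
celerybeat.pid
```

Note that .dockerignore only affects what COPY puts into the image; if the project directory is also bind-mounted at runtime, the pidfile can still reach the container through the mount.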
Answer 2 (score: 1)
Another solution (taken from the forked stackblitz) is to use --pidfile= (with no path) so that no pidfile is created at all, in line with Siyu's answer above.
Answer 3 (score: 0)
Alternatively, create a Django management command celery_kill.py:
import shlex
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **options):
        kill_worker_cmd = 'pkill -9 celery'
        subprocess.call(shlex.split(kill_worker_cmd))
docker-compose.yml:
celery:
  build: ./src
  restart: always
  command: celery -A project worker -l info
  volumes:
    - ./src:/var/lib/celery/data/
  depends_on:
    - db
    - redis
    - app

celery-beat:
  build: ./src
  restart: always
  command: celery -A project beat -l info --pidfile=/tmp/celeryd.pid
  volumes:
    - ./src:/var/lib/beat/data/
  depends_on:
    - db
    - redis
    - app
and a Makefile:
run:
	docker-compose up -d --force-recreate
	docker-compose exec app python manage.py celery_kill
	docker-compose restart
	docker-compose exec app python manage.py migrate
Answer 4 (score: 0)
The cause of this error is that the Docker container was stopped without a graceful Celery shutdown. The solution is simple: stop Celery before starting it.
Solution 1. Write the Celery start command (e.g. in docker-entrypoint.sh, ...) as follows:
# stop any running worker and remove the stale pidfile before starting
celery multi stopwait w1 -A myproject \
  && rm -f /var/run/celery/w1.pid \
  && celery multi start w1 -A myproject -l info --pidfile=/var/run/celery/w1.pid
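The stale-pidfile cleanup in the middle step can also be sketched as a standalone entrypoint snippet; the pidfile path here is an assumed example and should match whatever --pidfile you pass to Celery:

```shell
#!/bin/sh
# Remove a stale Celery pidfile left behind by an unclean container stop.
# PIDFILE is an assumed example path, not taken from the answer above.
PIDFILE=/tmp/celerybeat.pid

if [ -f "$PIDFILE" ]; then
    # stale file from a previous run: delete it so beat can start cleanly
    rm -f "$PIDFILE"
    echo "removed stale pidfile $PIDFILE"
else
    echo "no stale pidfile"
fi
```

Running such a check at container start makes the service robust against `docker kill` or host crashes, since the pidfile is cleared before Celery checks it.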
Solution 2 (not recommended):
Always run "docker-compose down" before "docker-compose up".