I run the application with docker-compose on a Linux server. Two days ago I added gunicorn + nginx to the setup. Unfortunately, every REST API endpoint that starts a Celery task has stopped working (returning 502 Bad Gateway).
When I submit the POST form that starts the Celery task (calculate shortest paths), a 502 Bad Gateway is returned.
Issue:
Summary
URL: http://192.168.0.150:8001/tspweb/calculate_shortest_paths/
Status: 502 Bad Gateway
Source: Network
Address: 192.168.0.150:8001
Here are the logs from the django and nginx containers:
tspoptimization | [2018-10-31 07:26:30 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:15)
nginx_1 | 2018/10/31 07:26:30 [error] 8#8: *9 upstream prematurely closed connection while reading response header from upstream, client: 192.168.0.103, server: localhost, request: "POST /tspweb/calculate_shortest_paths/ HTTP/1.1", upstream: "http://192.168.128.2:8001/tspweb/calculate_shortest_paths/", host: "192.168.0.150:8001", referrer: "http://192.168.0.150:8001/tspweb/warehouse_list.html"
nginx_1 | 192.168.0.103 - - [31/Oct/2018:07:26:30 +0000] "POST /tspweb/calculate_shortest_paths/ HTTP/1.1" 502 157 "http://192.168.0.150:8001/tspweb/warehouse_list.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" "-"
Everything worked fine before I added gunicorn + nginx (and it still works on my local setup without them), which suggests this is not a timeout problem.
I suspect that nginx + gunicorn does not "forward" the POST request from the form to Celery. I started logging Celery to a file; here is the content of the Celery log file:
root@4fb6e101a85b:/opt/services/djangoapp/src# cat logmato.log
[2018-10-31 07:12:04,400: INFO/MainProcess] Connected to redis://redis:6379//
[2018-10-31 07:12:04,409: INFO/MainProcess] mingle: searching for neighbors
[2018-10-31 07:12:05,430: INFO/MainProcess] mingle: all alone
[2018-10-31 07:12:05,446: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-10-31 07:12:05,446: INFO/MainProcess] celery@4fb6e101a85b ready.
[2018-10-31 07:14:07,802: INFO/MainProcess] Connected to redis://redis:6379//
[2018-10-31 07:14:07,813: INFO/MainProcess] mingle: searching for neighbors
[2018-10-31 07:14:08,835: INFO/MainProcess] mingle: all alone
[2018-10-31 07:14:08,853: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-10-31 07:14:08,853: INFO/MainProcess] celery@4fb6e101a85b ready.
As the log shows, the Celery worker never received a single task, which means the problem is not in Celery or Redis but somewhere in the chain between nginx, gunicorn, django and celery.
Here is my docker-compose file:
version: '3'

services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: # <-- connect to the bridge
      - database_network

  redis:
    image: "redis:alpine"
    expose:
      - "5672"

  django:
    build: .
    restart: always
    container_name: tspoptimization
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - db
      - redis
    networks:
      - nginx_network
      - database_network

  celery:
    build: .
    command: celery -A tspoptimization worker -l info
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
      - django
    links:
      - redis

  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - django
    networks:
      - nginx_network

networks:
  nginx_network:
    driver: bridge
  database_network: # <-- add the bridge
    driver: bridge

volumes:
  postgres_data:
  static_volume:
  media_volume:
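Whether the service hostnames in this file actually resolve from a given container can be checked with a few lines of stdlib Python (a hypothetical diagnostic, not part of the original setup; it would be run inside a container, e.g. via docker-compose exec celery python):

```python
import socket

def resolvable(host: str) -> bool:
    """Return True if `host` resolves via DNS on this machine/container."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Inside a container, Docker's embedded DNS only resolves the names of
# services that share at least one network with that container, so
# checking each service name quickly shows which links are missing:
for service in ('db', 'redis', 'django'):
    print(service, resolvable(service))
```

If one of the names fails to resolve from the container that needs it, the request chain breaks at that hop regardless of what nginx or gunicorn do.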
Here is the nginx conf:
upstream hello_server {
    server django:8001;
}

server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://hello_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /tspweb/static/ {
        alias /opt/services/djangoapp/src/tspweb/static/;
    }

    location /tspweb/media/ {
        alias /opt/services/djangoapp/src/tspweb/media/;
    }
}
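As an aside: if slow synchronous requests were ever the cause of 502s here, the proxy timeouts would also have to be raised to match gunicorn's worker timeout. A sketch of what that would look like (the values are illustrative and not part of the original config):

```nginx
location / {
    proxy_pass http://hello_server;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
    # illustrative: allow slow upstream responses before nginx gives up
    proxy_connect_timeout 75s;
    proxy_read_timeout 120s;
}
```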
My Django settings:
DEBUG = True
ALLOWED_HOSTS = ['*']
CELERY_BROKER_URL = 'redis://redis:6379'
STATIC_URL = '/tspweb/static/'
STATIC_ROOT = os.path.join(os.path.dirname(os.path.dirname(BASE_DIR)), '/tspweb/static')
MEDIA_URL = '/tspweb/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'tspweb/media')
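Note that the host part of CELERY_BROKER_URL is the compose service name, not a real DNS name; it only resolves for containers that share a Docker network with the redis service. A small stdlib illustration of what Celery will try to connect to:

```python
from urllib.parse import urlparse

CELERY_BROKER_URL = 'redis://redis:6379'  # as in the settings above

parts = urlparse(CELERY_BROKER_URL)
# The hostname 'redis' is resolved by Docker's embedded DNS, and only
# for containers attached to a network shared with the redis service.
print(parts.hostname, parts.port)  # redis 6379
```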
And finally the Dockerfile:
FROM python:3.6
RUN mkdir -p /opt/services/djangoapp/src
WORKDIR /opt/services/djangoapp/src
ADD . /opt/services/djangoapp/src
EXPOSE 8001
RUN pip install -r requirements.txt
RUN python manage.py collectstatic --no-input
CMD ["gunicorn", "--bind", ":8001", "tspoptimization.wsgi"]
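For reference, the CMD above runs gunicorn with its default 30-second worker timeout, which is what produces the WORKER TIMEOUT line in the log whenever a request handler blocks for longer than that. If slow handlers were the only problem, the limit could be raised like this (an illustrative value, not a recommended fix for the underlying issue):

```dockerfile
CMD ["gunicorn", "--bind", ":8001", "--timeout", "120", "tspoptimization.wsgi"]
```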
Any help with fixing this would be appreciated.
Answer 0 (score: 1)
I solved the problem myself; here is the answer:
Redis and Celery must be attached to the same Docker bridge networks (nginx_network and database_network) that the other services use; in my original file they were left off those networks, so the containers could not reach each other.
Here is the working docker-compose file, with which tasks are now dispatched correctly:
version: '3'

services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: # <-- connect to the bridge
      - database_network

  redis:
    image: "redis:latest"
    expose:
      - "5672"
    networks:
      - database_network
      - nginx_network

  django:
    build: .
    restart: always
    container_name: tspoptimization
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - db
      - redis
    networks:
      - nginx_network
      - database_network

  celery:
    build: .
    command: celery -A tspoptimization worker -l info
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
      - django
    links:
      - redis
    networks:
      - nginx_network
      - database_network

  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - django
    networks:
      - nginx_network

networks:
  nginx_network:
    driver: bridge
  database_network: # <-- add the bridge
    driver: bridge

volumes:
  postgres_data:
  static_volume:
  media_volume:
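The effect of the change can be sketched as a simple model: two compose services can resolve each other's names only if they share at least one network. A toy illustration mirroring the networks: sections above (not real compose tooling):

```python
# Networks each service is attached to in the fixed compose file.
services = {
    'db':     {'database_network'},
    'redis':  {'database_network', 'nginx_network'},
    'django': {'nginx_network', 'database_network'},
    'celery': {'nginx_network', 'database_network'},
    'nginx':  {'nginx_network'},
}

def can_reach(a: str, b: str) -> bool:
    """Compose services see each other's DNS names iff they share a network."""
    return bool(services[a] & services[b])

print(can_reach('django', 'redis'))  # True -- the broker is now reachable
print(can_reach('celery', 'redis'))  # True
```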
Honestly, I don't know whether this is the proper way to do it; I'm not a devops professional, but at least it works for now.