Calling the following task:
task__determine_order_details_processing_or_created_status.apply_async(
    args=[order_record.Order_ID],
    eta=datetime.now(GMT_timezone) + timedelta(minutes=1)
)
eventually causes the worker to time out. It looks as if this call never releases the worker so it can go on serving requests:
web_1 | [2019-11-21 05:43:43 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1559)
web_1 | [2019-11-21 05:43:43 +0000] [1559] [INFO] Worker exiting (pid: 1559)
web_1 | [2019-11-21 05:43:43 +0000] [1636] [INFO] Booting worker with pid: 1636
Meanwhile, the very same call issued from the Django shell creates a perfectly normal Celery task:
celery_1 | [2019-11-21 05:47:06,500: INFO/MainProcess] Received task: task__determine_order_details_processing_or_created_status[f94708be-a0ab-4853-8785-a11c8c7ca9f1] ETA:[2019-11-21 05:48:06.304924+00:00]
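For completeness, here is roughly how the task is defined and dispatched. This is only a sketch: the @shared_task decorator, the ZoneInfo-based GMT_timezone and the schedule_status_check helper below are placeholders I am adding for illustration, and only the apply_async call itself is taken verbatim from my code.

# Hypothetical sketch; @shared_task, ZoneInfo("GMT") and the helper name
# are assumptions -- the real task body lives elsewhere in the project.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

from celery import shared_task

GMT_timezone = ZoneInfo("GMT")  # assumed definition of GMT_timezone

@shared_task
def task__determine_order_details_processing_or_created_status(order_id):
    # Placeholder body; the real task checks/updates the order's status.
    pass

def schedule_status_check(order_record):
    # Publish the task to the broker with an ETA one minute in the future.
    task__determine_order_details_processing_or_created_status.apply_async(
        args=[order_record.Order_ID],
        eta=datetime.now(GMT_timezone) + timedelta(minutes=1),
    )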
docker-compose.yml:
web:
  build: ./server
  command: gunicorn server.wsgi:application --reload --limit-request-line 16376 --bind 0.0.0.0:8001
  volumes:
    - ./server:/usr/src
  expose:
    - 8001
  env_file: .env.dev
  links:
    - memcached
  depends_on:
    - db_development_2
    - redis
db_development_2:
  restart: always
  image: postgres:latest
  volumes:
    - postgres_development3:/var/lib/postgresql/volume/
  env_file: .env.dev
  logging:
    driver: none
redis:
  image: "redis:alpine"
  restart: always
  logging:
    driver: none
celery:
  build: ./server
  command: celery -A server.celery worker -l info
  env_file: .env.dev
  volumes:
    - ./server:/usr/src
  depends_on:
    - db_development_2
    - redis
  restart: always
celery-beat:
  build: ./server
  command: celery -A server.celery beat -l info
  env_file: .env.dev
  volumes:
    - ./server:/usr/src
  depends_on:
    - db_development_2
    - redis
  restart: always
  logging:
    driver: none
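The Celery app referenced by celery -A server.celery worker is not shown above; a rough sketch of what server/celery.py typically looks like with a compose file like this is below. The settings module name, the CELERY_BROKER_URL variable and the redis://redis:6379/0 broker URL are assumptions for illustration, not taken from the project.

# Hypothetical server/celery.py matching "celery -A server.celery worker".
# The settings module, the env variable and the default broker URL are
# assumptions, not copied from the question.
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "server.settings")

app = Celery(
    "server",
    broker=os.environ.get("CELERY_BROKER_URL", "redis://redis:6379/0"),
)
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

Note that inside Docker the broker host has to be the compose service name (redis here), not localhost; a broker URL that the web container cannot reach is one common reason apply_async blocks until gunicorn kills the worker.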
Answer 0 (score: 0)
Could you please share a few more details?
The error is coming from gunicorn, right?
Are you running this in a Docker environment? Is Celery in a separate container?
What does your WSGI <-> YOUR_APP command look like?
Example:
gunicorn app.wsgi:tour_application -w 6 -b :8000 --timeout 120
Could you try a higher timeout, e.g. something above 120?
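Applied to the web service in the compose file above, that would mean adding gunicorn's --timeout flag (the default is 30 seconds); the value 180 below is just an illustration:

gunicorn server.wsgi:application --reload --limit-request-line 16376 --bind 0.0.0.0:8001 --timeout 180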