Django celery beat on Elastic Beanstalk does not see periodic tasks

Asked: 2018-07-10 12:45:07

Tags: python django celery elastic-beanstalk celerybeat

I have configured a celery worker and celery beat on EB. There are no errors in the logs during deployment and the celery worker runs fine, but the periodic tasks are not visible. On my local machine everything works as expected.

Here is my celery config file:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Create required directories
      sudo mkdir -p /var/log/celery/
      sudo mkdir -p /var/run/celery/

      # Create group called 'celery'
      sudo groupadd -f celery
      # add the user 'celery' if it doesn't exist and add it to the group with same name
      id -u celery &>/dev/null || sudo useradd -g celery celery
      # add permissions to the celery user for r+w to the folders just created
      sudo chown -R celery:celery /var/log/celery/
      sudo chmod -R 777 /var/log/celery/
      sudo chown -R celery:celery /var/run/celery/
      sudo chmod -R 777 /var/run/celery/

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create CELERY configuration script
      celeryconf="[program:celeryd]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A config.celery:app --loglevel=INFO --logfile=/var/log/celery/celery_worker.log --pidfile=/var/run/celery/celery_worker_pid.pid

      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_worker.log
      stderr_logfile=/var/log/std_celery_worker_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"


      # Create CELERY BEAT configuration script
      celerybeatconf="[program:celerybeat]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler --logfile=/var/log/celery/celery_beat.log --pidfile=/var/run/celery/celery_beat_pid.pid

      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_beat.log
      stderr_logfile=/var/log/std_celery_beat_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
        then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Enable supervisor to listen for HTTP/XML-RPC requests.
      # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
      # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
      if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
        then
          echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
          echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
      supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat

commands:
  01_kill_other_beats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
    ignoreErrors: true
  02_restart_beat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true
  03_upgrade_pip_global:
    command: "if test -e /usr/bin/pip; then sudo /usr/bin/pip install --upgrade pip; fi"
  04_upgrade_pip_global:
    command: "if test -e /usr/local/bin/pip; then sudo /usr/local/bin/pip install --upgrade pip; fi"
  05_upgrade_pip_for_venv:
    command: "if test -e /opt/python/run/venv/bin/pip; then sudo /opt/python/run/venv/bin/pip install --upgrade pip; fi"

Can anyone tell me where the error is?

I set up periodic tasks like this:

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    pass

Update: supervisor logs

2018-07-10 12:56:18,683 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:18,691 INFO spawned: 'celerybeat' with pid 1626
2018-07-10 12:56:19,181 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:20,187 INFO spawned: 'celerybeat' with pid 1631
2018-07-10 12:56:30,200 INFO success: celerybeat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 12:56:30,466 INFO stopped: celeryd (terminated by SIGTERM)
2018-07-10 12:56:31,472 INFO spawned: 'celeryd' with pid 1638
2018-07-10 12:56:41,486 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 13:28:32,572 CRIT Supervisor running as root (no user in config file)
2018-07-10 13:28:32,573 WARN No file matches via include "/opt/python/etc/uwsgi.conf"
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celery.conf" during parsing
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celerybeat.conf" during parsing
2018-07-10 13:28:32,591 INFO RPC interface 'supervisor' initialized
2018-07-10 13:28:32,591 CRIT Server 'inet_http_server' running without any HTTP authentication checking

1 Answer:

Answer 0 (score: 0):

The root of the problem was the import done while setting up the periodic tasks:

celery.py

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    from my_app.tasks.task1 import some_task
    sender.add_periodic_task(
        60.0, some_task.s(), name='call every 60 seconds'
    )

The solution was to use a task defined in the celery app itself:

celery.py

@app.task
def temp_task():
    from my_app.tasks.task1 import some_task
    some_task()

So the periodic task setup then looks like this:

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        60.0, temp_task.s(), name='call every 60 seconds'
    )

It was hard to track down the source of the problem, since there were no error logs and the regular logs were empty because celery never actually started the tasks.
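
For reference, a minimal celery.py sketch that puts the pieces above together. The module paths config and my_app.tasks.task1 are taken from the question; the settings module name is an assumption and should be adjusted to your project.

    # celery.py -- minimal sketch, assuming the project layout from the question
    import os
    from celery import Celery

    # Assumed Django settings module; adjust to your project.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

    app = Celery('config')
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()


    @app.task
    def temp_task():
        # Import at call time, inside a task that lives in the celery app,
        # so setup_periodic_tasks itself needs no project imports.
        from my_app.tasks.task1 import some_task
        some_task()


    @app.on_after_configure.connect
    def setup_periodic_tasks(sender, **kwargs):
        # Register the wrapper task; nothing from the project is imported here.
        sender.add_periodic_task(
            60.0, temp_task.s(), name='call every 60 seconds'
        )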