Integrating Celery with Flask using the application factory pattern: maximum recursion depth error

Asked: 2019-02-24 18:42:01

Tags: python flask celery

I'm working from the cookiecutter Flask template, which uses the application factory pattern. I had Celery working for tasks that don't use the application context, but one of my tasks does need it: it makes a database query and updates a database object. Right now I don't have a circular import error (though I've had those with other attempts) but instead a maximum recursion depth error.

I consulted this blog post on how to use Celery with the application factory pattern, and I'm trying to closely follow this Stack Overflow answer, since it apparently also has a structure derived from cookiecutter Flask.

The relevant parts of my project structure:

cookiecutter_mbam
│   celeryconfig.py   
│
└───cookiecutter_mbam
   |   __init__.py
   │   app.py
   │   run_celery.py
   │
   └───utility
   |       celery_utils.py
   |
   └───derivation 
   |       tasks.py  
   | 
   └───storage
   |       tasks.py    
   |
   └───xnat
          tasks.py

__init__.py

"""Main application package."""

from celery import Celery

celery = Celery('cookiecutter_mbam', config_source='cookiecutter_mbam.celeryconfig')

The relevant part of app.py:

from flask import Flask

from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery

def create_app(config_object='cookiecutter_mbam.settings'):
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask(__name__.split('.')[0])
    app.config.from_object(config_object)
    init_celery(app, celery=celery)
    register_extensions(app)
    # ...
    return app

run_celery.py

from cookiecutter_mbam.app import create_app
from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery

app = create_app(config_object='cookiecutter_mbam.settings')
init_celery(app, celery)

celeryconfig.py

broker_url = 'redis://localhost:6379'
result_backend = 'redis://localhost:6379'

task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
enable_utc = True

imports = {'cookiecutter_mbam.xnat.tasks', 'cookiecutter_mbam.storage.tasks', 'cookiecutter_mbam.derivation.tasks'}

The relevant part of celery_utils.py:

def init_celery(app, celery):
    """Add flask app context to celery.Task"""

    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery

When I try to start the worker with celery -A cookiecutter_mbam.run_celery:celery worker, I get RecursionError: maximum recursion depth exceeded while calling a Python object. (I've also tried several other ways of invoking the worker, all with the same error.) Here's an excerpt from the stack trace:

Traceback (most recent call last):
  File "/Users/katie/anaconda/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/__main__.py", line 16, in main
    _main()
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 322, in main
    cmd.execute_from_commandline(argv)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in handle_argv
    return self.execute(command, argv)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 420, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
    *self.parse_options(prog_name, argv, command))
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 398, in parse_options
    self.parser = self.create_parser(prog_name, command)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 414, in create_parser
    self.add_arguments(parser)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 277, in add_arguments
    default=conf.worker_state_db,
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 429, in __getitem__
    return getitem(k)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 278, in __getitem__
    return mapping[_key]
  File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
    if key in self.data:
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]
  File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
    if key in self.data:
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]

I understand the basic meaning of this error: something keeps calling itself, maybe create_app. But I don't understand why, and I don't know how to debug it.

When I try to load my site, I also get this:

  File "~/cookiecutter_mbam/cookiecutter_mbam/xnat/tasks.py", line 14, in <module>
    @celery.task
AttributeError: module 'cookiecutter_mbam.celery' has no attribute 'task'

I didn't have this problem when I used the make_celery method described here, but that method creates circular import problems when you need your tasks to access the application context. Pointers on how to do this correctly with the cookiecutter Flask template would be much appreciated.
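
For reference, the make_celery pattern from the Flask documentation looks roughly like the sketch below (reproduced from memory, not copied from that page). Because the Celery instance is created inside and bound to one concrete app, task modules that want to decorate functions with @celery.task have to import it from the module that builds the app, which is what produces the circular imports.

from celery import Celery
from flask import Flask

def make_celery(app):
    # Build a Celery instance bound to this specific Flask app.
    celery = Celery(app.import_name,
                    backend=app.config['CELERY_RESULT_BACKEND'],
                    broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)

    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            # Run every task inside the Flask application context.
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery

app = Flask(__name__)
app.config.update(
    CELERY_BROKER_URL='redis://localhost:6379',
    CELERY_RESULT_BACKEND='redis://localhost:6379',
)
celery = make_celery(app)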

2 answers:

Answer 0 (score: 0):

I'm suspicious of the part of your code that makes the Flask app available to celery. By going straight to run(), it skips some essential code. (See https://github.com/celery/celery/blob/master/celery/app/task.py#L387)

Try calling the inherited __call__ instead. Here's an excerpt from one of my (working) applications.

# Arrange for tasks to have access to the Flask app
TaskBase = celery.Task
class ContextTask(TaskBase):
    def __call__(self, *args, **kwargs):
        with app.app_context():
            return TaskBase.__call__(self, *args, **kwargs)  ## << here
celery.Task = ContextTask

I also don't see where you create the Celery instance and configure it. I assume you have

celery = Celery(__name__)

and then you need

celery.config_from_object(...)

somewhere inside init_celery().
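
Putting both suggestions together, an init_celery along these lines might work. This is only a sketch using the names from the question, not code tested against that project:

def init_celery(app, celery):
    """Configure the Celery instance and give celery.Task the Flask app context."""
    celery.config_from_object('cookiecutter_mbam.celeryconfig')

    TaskBase = celery.Task

    class ContextTask(TaskBase):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                # Delegate to the inherited __call__ rather than self.run()
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery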

Answer 1 (score: 0):

This has been solved. My celeryconfig.py was in the wrong place. I needed to move it into the package directory, not the parent repo directory. It is incredibly unintuitive that a misplaced config file causes infinite recursion rather than an "I can't find this file" type of error, but at least I finally saw it and corrected it.
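
In terms of the tree shown in the question, the fix amounts to moving celeryconfig.py one level down into the package, roughly like this:

cookiecutter_mbam                  (parent repo directory -- celeryconfig.py was here)
│
└───cookiecutter_mbam              (package directory -- celeryconfig.py belongs here)
   |   __init__.py
   |   app.py
   |   celeryconfig.py
   |   run_celery.py
   |   ...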