I can't figure out how to define celery tasks in a modular way (i.e. not all tasks in one file) and register them correctly for asynchronous use. I have tried every option I could come up with: registering manually with app.tasks.register(Task1()), autodiscovery, and passing the app to the celery worker command. Whatever I do, I always end up with a KeyError thrown by the task registry, but only when executing with apply_async. The synchronous version always works.
If anyone can give me a hint about what I should do to fix this, please share.
Here is a minimal example:
minimal2/
├── __init__.py
├── celery_app.py
├── start.sh
├── task1/
│   ├── __init__.py
│   └── task.py
└── task2/
    ├── __init__.py
    └── task.py
test.py
minimal2.task1.task
# -*- coding: utf-8 -*-
from celery import Task

from minimal2.celery_app import app


class Task1(Task):
    name = ""

    def run(self, number):
        return number / 2.0

app.tasks.register(Task1())
minimal2.task2.task
# -*- coding: utf-8 -*-
from celery import Task

from minimal2.celery_app import app


class Task2(Task):
    name = "minimal2.task2.task.Task2"

    def run(self, number):
        return number * number

app.tasks.register(Task2())
minimal2.celery_app
# -*- coding: utf-8 -*-
from celery import Celery
app = Celery('minimal', backend='amqp', broker='amqp://')
app.autodiscover_tasks(['task1', 'task2'], 'task')
minimal2/start.sh
#!/bin/bash
set -e

start_celery_service() {
    name=$1
    pid_file_path="$(pwd)/${name}.pid"
    if [ -e "${pid_file_path}" ] ; then
        kill $(cat ${pid_file_path}) && :
        sleep 3.0
        rm -f "${pid_file_path}"  # just in case the file was stale
    fi
    celery -A minimal2.celery_app.app worker -l DEBUG --pidfile=${pid_file_path} --logfile="$(pwd)/${name}.log" &
    sleep 3.0
}
prev_dir=$(pwd)
cd "$(dirname "$0")"
cd ../
rabbitmq-server &
start_celery_service "worker1"
cd $prev_dir
test.py
from minimal2.task1.task import Task1
print Task1().apply(args=[], kwargs={'number':2}).get()
> 1.0
print Task1().apply_async(args=[], kwargs={'number':2}).get() # (first time: never comes back -> hitting ctrl-c)
print Task1().apply_async(args=[], kwargs={'number':2}).get() # second time
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/celery/result.py", line 194, in get
on_message=on_message,
File "/usr/local/lib/python2.7/site-packages/celery/backends/base.py", line 470, in wait_for_pending
return result.maybe_throw(propagate=propagate, callback=callback)
File "/usr/local/lib/python2.7/site-packages/celery/result.py", line 299, in maybe_throw
self.throw(value, self._to_remote_traceback(tb))
File "/usr/local/lib/python2.7/site-packages/celery/result.py", line 292, in throw
self.on_ready.throw(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/vine/promises.py", line 217, in throw
reraise(type(exc), exc, tb)
File "<string>", line 1, in reraise
celery.backends.base.NotRegistered: ''
#.. same spiel with Task2:
#..
> celery.backends.base.NotRegistered: 'minimal2.task2.task.Task2'
#.. same if I do name = __name__ in Task2:
#..
> celery.backends.base.NotRegistered: 'minimal2.task2.task'
# autodiscover had no effect
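For context on the failure mode: Celery's NotRegistered exception subclasses KeyError, because the worker-side task registry is essentially a mapping from name strings to task instances, and apply_async ships only the name over the broker, while apply() calls the task object directly. The following is a toy illustration of that mechanic in plain Python (not Celery's actual code) showing why an empty or mismatched name fails only on the async path:

```python
# Toy model of a name-keyed task registry (illustration only,
# NOT Celery's real implementation).

class NotRegistered(KeyError):
    """Celery's real NotRegistered also subclasses KeyError."""

registry = {}

def register(name, func):
    registry[name] = func

def apply_sync(task, *args):
    # .apply() runs the task object you already hold -- no lookup needed.
    return task(*args)

def apply_async(name, *args):
    # .apply_async() transmits only the *name*; the worker must find it
    # in its registry, and a mismatch raises NotRegistered.
    try:
        task = registry[name]
    except KeyError:
        raise NotRegistered(name)
    return task(*args)

register("minimal2.task2.task.Task2", lambda n: n * n)

print(apply_sync(lambda n: n / 2.0, 2))             # -> 1.0 (no lookup)
print(apply_async("minimal2.task2.task.Task2", 3))  # -> 9 (names match)
try:
    apply_async("", 2)  # nothing was registered under ""
except NotRegistered as exc:
    print("NotRegistered:", repr(exc.args[0]))      # -> NotRegistered: ''
```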
I get the same behavior both on Ubuntu in a Docker container and on macOS, both with the latest celery version from PyPI:
celery report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.13
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
Answer (score: 2)
If I understand the question correctly, you can use the include argument when creating your celery app. It will register all the tasks from the modules listed in the include argument. For example:
celery = Celery(app.import_name,
                broker=app.config['CELERY_BROKER_URL'],
                backend=app.config['CELERY_BROKER_URL'],
                include=['minimal2.task1.task', 'minimal2.task2.task'])
Edit by the question poster: additionally, to get the naming right for imports, the task class's name attribute needs to be set as follows:
class Task1(Task):
    name = __name__
Basically, the value of name at registration time needs to match exactly the name under which the client imports the task.
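Putting the two points together, one way the question's modules could be wired up is sketched below. This is an untested sketch assuming the package layout from the question; the key points are that include makes the worker import the task modules, and that each task's name matches the fully qualified path the client uses:

```python
# minimal2/celery_app.py -- sketch, assuming the question's layout
from celery import Celery

# include= forces the worker to import these modules at startup,
# so the app.tasks.register(...) calls in them actually run worker-side.
app = Celery('minimal2', backend='amqp', broker='amqp://',
             include=['minimal2.task1.task', 'minimal2.task2.task'])


# minimal2/task1/task.py -- sketch
from celery import Task

from minimal2.celery_app import app


class Task1(Task):
    # Must exactly match the name under which the client imports the task,
    # here the fully qualified "module path + class name".
    name = "minimal2.task1.task.Task1"

    def run(self, number):
        return number / 2.0

app.tasks.register(Task1())
```

With this wiring, the client's Task1().apply_async(...) sends the string "minimal2.task1.task.Task1", and the worker finds the same key in its registry because include made it import and register the module.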