Python Redis Queue (rq) - how to avoid reloading the ML model for every job?

Asked: 2018-08-30 14:01:50

Tags: python redis python-rq

I want to queue my ML predictions using rq. Example code (pseudo-ish):

predict.py

import tensorflow as tf

def predict_stuff(foo):
    model = tf.load_model()  # placeholder for the real model-loading call
    result = model.predict(foo)
    return result

app.py

from rq import Queue
from redis import Redis
from predict import predict_stuff

q = Queue(connection=Redis())
for foo in baz:
    job = q.enqueue(predict_stuff, foo)

worker.py

import sys
from rq import Connection, Worker

# Preload libraries
import tensorflow as tf

with Connection():
    qs = sys.argv[1:] or ['default']

    w = Worker(qs)
    w.work()

I've read the rq docs, which explain that workers can preload libraries to avoid importing them every time a job runs (hence the tensorflow import in the worker code above). However, I also want to move the model loading out of predict_stuff, so the model isn't loaded every time the worker runs a job. How can I do that?
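For reference, the model can be hoisted out of the job function the same way the library imports are: cache it in a module-level variable and warm it in the worker before any jobs run. A minimal sketch, reusing the placeholder tf.load_model() from above (on Unix, rq forks a work horse per job, so forked job processes inherit the already-loaded model):

predict.py

import tensorflow as tf

_model = None

def get_model():
    # Load the model once per process and cache it.
    global _model
    if _model is None:
        _model = tf.load_model()  # placeholder, as above
    return _model

def predict_stuff(foo):
    return get_model().predict(foo)

worker.py

import sys
from rq import Connection, Worker

from predict import get_model

with Connection():
    qs = sys.argv[1:] or ['default']
    get_model()  # warm the cache before the work loop starts forking
    w = Worker(qs)
    w.work()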

2 Answers:

Answer 0 (score: 1):

In the end, I couldn't figure out how to do this with python-rq. I moved to Celery and did it roughly like this:

app.py

from tasks import predict_stuff

for foo in baz:
    task = predict_stuff.delay(foo)

tasks.py

import tensorflow as tf
from celery import Celery
from celery.signals import worker_process_init

cel_app = Celery('tasks')  # broker/backend configured elsewhere
model = None

@worker_process_init.connect
def on_worker_init(**_):
    # Runs once in each worker process, so the model is
    # loaded once per process rather than once per task.
    global model
    model = tf.load_model()  # placeholder, as in the question

@cel_app.task(name='predict_stuff')
def predict_stuff(foo):
    result = model.predict(foo)
    return result
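With this layout the worker is started via Celery's CLI; worker_process_init then fires once in each worker process it spawns, so the model loads once per process:

celery -A tasks worker --loglevel=info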

Answer 1 (score: 1):

I'm not sure if this helps, but following the example here:

https://github.com/rq/rq/issues/720

you can share the model instead of sharing a connection pool.

Pseudocode:

import tensorflow as tf

from rq import Worker as _Worker
from rq.local import LocalStack

_model_stack = LocalStack()

def get_model():
    """Return the model pushed by the worker."""
    m = _model_stack.top
    if m is None:
        raise RuntimeError('Run outside of worker context')
    return m

class Worker(_Worker):
    """Worker that loads the model once before entering the work loop."""

    def work(self, burst=False, logging_level='WARN'):
        _model_stack.push(tf.load_model())  # placeholder loader, as above
        return super().work(burst=burst, logging_level=logging_level)

def predict_stuff_job(foo):
    model = get_model()
    result = model.predict(foo)
    return result

I use something similar to this for a "global" file reader I wrote. Load the instance onto the LocalStack and have the workers read from the stack.
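To wire this up, the launch script would instantiate this Worker subclass instead of rq's default; a sketch, assuming the code above lives in a module named custom_worker (hypothetical name):

import sys
from rq import Connection

from custom_worker import Worker

with Connection():
    qs = sys.argv[1:] or ['default']
    w = Worker(qs)
    w.work()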