Using multiprocessing concurrency in Celery tasks

Asked: 2016-03-24 06:42:18

Tags: python multiprocessing celery

I'm trying to interact with a device that can only accept a single TCP connection (a memory limitation), so simply opening a connection for every worker thread is not an option, the way it would be in a normal client-server situation such as a database connection.

I tried using a Multiprocessing Manager dict that is globally accessible between threads, in the format:

clients{(address, port): (connection_obj, multiprocessing.Manager.RLock)}
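For context, here is a minimal sketch of what the .celery module imported below might contain (the manager and clients names come from the question's own import; the app setup and broker URL are assumptions):

# proj/celery.py -- hypothetical setup for the shared Manager dict
import multiprocessing
from celery import Celery

app = Celery('proj', broker='amqp://localhost')  # broker URL is an assumption

# A Manager server process owns the dict; each worker gets a proxy to it.
manager = multiprocessing.Manager()
clients = manager.dict()  # {(address, port): (connection_obj, lock)}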

And a task like this:

from celery import shared_task
from .celery import manager, clients

@shared_task
def send_command(controller, commandname, args):
    """Send a command to the controller."""
    # Create client connection if one does not exist.
    conn = None
    addr, port = controller
    if controller not in clients:
        conn = Client(addr, port)
        conn.connect()
        lock = manager.RLock()
        clients[controller] = (conn, lock,)
        print("New controller connection to %s:%s" % (addr, port,))
    else:
        conn, lock = clients[controller]

    try:
        f = getattr(conn, commandname) # See if connection.commandname() exists.
    except Exception:
        raise Exception("command: %s not known." % (commandname))

    with lock:
        res = f(*args)
        return res

However, the tasks fail with a serialization error such as:

_pickle.PicklingError: Can't pickle <class '_thread.lock'>: attribute lookup lock on _thread failed

Even though I'm not calling the task with a non-serializable value, and the task isn't trying to return a non-serializable value, Celery seems obsessed with trying to serialize this global object?

What am I missing? How would you go about using a client device connection in Celery tasks that is thread-safe and accessible between threads? Example code?

3 Answers:

Answer 0 (score: 0):

 ...
    self._send_bytes(ForkingPickler.dumps(obj))
  File "/usr/lib64/python3.4/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '_thread.lock'>: attribute lookup lock on _thread failed

After poking around the internet, I realized I had probably missed something important in the traceback. Looking at it more closely, I realized that it was not Celery trying to pickle the connection object, but multiprocessing.reduction. Reduction is used to serialize the object on one side and reconstruct it on the other.
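To illustrate that mechanism outside of Celery (this reproduction is my own, not from the question): assigning a value into a Manager dict ships it to the manager process through ForkingPickler, so any value holding a raw _thread.lock, as the connection object does internally, fails to be stored. The exact exception text varies by Python version:

import multiprocessing
import threading

class FakeClient:
    """Stand-in for the device client: it holds a raw lock internally."""
    def __init__(self):
        self._lock = threading.Lock()

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    clients = manager.dict()
    # This assignment pickles the value to send it to the manager process;
    # the embedded _thread.lock cannot be pickled, so it raises a pickling
    # error similar to the one in the traceback above.
    clients[('10.0.0.1', 4000)] = (FakeClient(),)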

I have some alternative approaches that would work around this, but none of them does what I originally wanted: to simply borrow the client library's connection object and use it, which just isn't possible with multiprocessing and prefork.

Answer 1 (score: 0):

How about using Redis to implement a distributed lock manager? The Redis python client has built-in locking functionality. Also see this doc on redis.io. Even if you're using RabbitMQ or another broker, Redis is very lightweight.

For example, as a decorator:

import redis
from functools import wraps

# Assumption: `redisconn` is the application's shared Redis connection.
redisconn = redis.Redis()

def device_lock(block=True):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return_value = None
            have_lock = False
            lock = redisconn.lock('locks.device', timeout=2, sleep=0.01)
            try:
                have_lock = lock.acquire(blocking=block)
                if have_lock:
                    return_value = func(*args, **kwargs)
            finally:
                if have_lock:
                    lock.release()
            return return_value
        return wrapper
    return decorator

@shared_task
@device_lock()
def send_command(controller, commandname, args):
    """Send a command to the controller."""
    ...
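One usage note of my own (not part of the answer): the wrapper returns None whenever it does not obtain the lock, so passing block=False turns the decorator into a try-or-skip guard instead of a wait; the task name below is hypothetical:

@shared_task
@device_lock(block=False)
def try_send_command(controller, commandname, args):
    """Send a command only if the device lock is currently free; otherwise
    the wrapper returns None and the command is skipped."""
    ...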

You could also use this approach from the Celery task guide:

from celery import task
from celery.utils.log import get_task_logger
from django.core.cache import cache
from hashlib import md5
from djangofeeds.models import Feed

logger = get_task_logger(__name__)

LOCK_EXPIRE = 60 * 5 # Lock expires in 5 minutes

@task(bind=True)
def import_feed(self, feed_url):
    # The cache key consists of the task name and the MD5 digest
    # of the feed URL.
    feed_url_hexdigest = md5(feed_url.encode('utf-8')).hexdigest()
    lock_id = '{0}-lock-{1}'.format(self.name, feed_url_hexdigest)

    # cache.add fails if the key already exists
    acquire_lock = lambda: cache.add(lock_id, 'true', LOCK_EXPIRE)
    # memcache delete is very slow, but we have to use it to take
    # advantage of using add() for atomic locking
    release_lock = lambda: cache.delete(lock_id)

    logger.debug('Importing feed: %s', feed_url)
    if acquire_lock():
        try:
            feed = Feed.objects.import_feed(feed_url)
        finally:
            release_lock()
        return feed.url

    logger.debug(
        'Feed %s is already being imported by another worker', feed_url)

Answer 2 (score: 0):

Have you tried using a gevent or eventlet Celery worker instead of processes and threads? In that case you would be able to use a global var or threading.local() to share the connection object.
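A rough sketch of how that could look (my own assumption, not spelled out in the answer): start the worker with a green-thread pool, for example celery -A proj worker -P gevent -c 100, and keep one module-level connection guarded by an ordinary lock, since all greenlets run in a single process:

import threading
from celery import shared_task

_conn = None
_conn_lock = threading.Lock()  # cooperative once gevent/eventlet monkey-patching is applied

def get_connection(addr, port):
    """Lazily create the single device connection and reuse it afterwards."""
    global _conn
    with _conn_lock:
        if _conn is None:
            _conn = Client(addr, port)  # Client: the device client from the question
            _conn.connect()
        return _conn

@shared_task
def send_command(controller, commandname, args):
    """Send a command to the controller over the shared connection."""
    conn = get_connection(*controller)
    with _conn_lock:
        return getattr(conn, commandname)(*args)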