Updating Postgres records with Python SQLAlchemy

Date: 2014-11-14 18:49:06

Tags: python postgresql sqlalchemy

I'm trying to update rows in the database asynchronously using the multiprocessing module. My code has a simple function, create_member, that inserts some data into a table and then starts a process that may change that data. The problem is that the session passed to async_create_member is closing the database connection, and on the next request I get this error from psycopg:

(InterfaceError) connection already closed

Here is the code:

def create_member(self, data):
    member = self.entity(**data)
    self.session.add(member)
    for name in data:
        setattr(member, name, data[name])
    self.session.commit()
    self.session.close()
    if self.index.is_indexable:
        Process(target=self.async_create_member,
            args=(data, self.session)).start()
    return member

def async_create_member(self, data, session):
    ok, data = self.index.create(data)
    if ok:

        datacopy = data.copy()
        data.clear()
        data['document'] = datacopy['document']
        data['dt_idx'] = datacopy['dt_idx']
        stmt = update(self.entity.__table__).where(
            self.entity.__table__.c.id_doc == datacopy['id_doc'])\
            .values(**data)

        session.begin()
        session.execute(stmt)
        session.commit()
        session.close()

I can work around this by creating a new connection inside async_create_member, but that leaves too many idle transactions on Postgres:

engine = create_new_engine()
conn = engine.connect()
conn.execute(stmt)
conn.close()
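
For reference, the idle connections left behind by such a throwaway engine are often avoided by disabling pooling and disposing of the engine once the statement has run. A minimal sketch, assuming a plain create_engine call with a placeholder DATABASE_URL rather than the question's create_new_engine helper:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool closes each connection as soon as it is returned, so nothing
# sits idle after the update; DATABASE_URL is a placeholder.
engine = create_engine(DATABASE_URL, poolclass=NullPool)
conn = engine.connect()
try:
    conn.execute(stmt)
finally:
    conn.close()
    engine.dispose()  # drop anything the engine might still be holding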

What should I do now? Is there a way to fix the first approach? Or should I keep creating new connections with create_new_engine? Should I use threads or processes?

1 answer:

Answer 0 (score: -1)

You can't reuse a session across threads or processes. Sessions aren't thread safe, and the connection underlying a Session is not inherited cleanly across processes. The error message you're getting is accurate, if uninformative: the database connection is indeed closed if you try to use it after inheriting it across a process boundary.

In most cases, yes, you should create a session for each process in a multiprocessing setup.
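
For illustration, here is a minimal sketch of that per-process approach applied to the update from the question; the DATABASE_URL and the sessionmaker wiring are assumptions, not part of the original code:

from sqlalchemy import create_engine, update
from sqlalchemy.orm import sessionmaker

def async_create_member(self, data):
    # The child process builds its own engine and session instead of
    # inheriting the parent's connection.
    engine = create_engine(DATABASE_URL)  # DATABASE_URL is a placeholder
    Session = sessionmaker(bind=engine)
    session = Session()
    try:
        stmt = update(self.entity.__table__).where(
            self.entity.__table__.c.id_doc == data['id_doc']
        ).values(**data)
        session.execute(stmt)
        session.commit()
    finally:
        session.close()
        engine.dispose()  # don't leave pooled connections idle on Postgres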

If your problem looks like the following:

  • You are doing a lot of CPU-intensive processing for each object
  • The database writes are comparatively lightweight
  • You want to use many processes (I do this on 8+ core machines)

then it may be worth creating a single writer process that owns the session and handing objects off to it. Here is roughly how it usually works for me (note: this is not meant to be runnable code):

import multiprocessing
from your_database_layer import create_new_session, WhateverType

work = multiprocessing.JoinableQueue()

def writer(commit_every = 50):
    global work
    session = create_new_session()
    counter = 0

    while True:
        item = work.get()
        if item is None:
            break

        session.add(item)
        counter += 1
        if counter % commit_every == 0:
            session.commit()

        work.task_done()

    # Last DB writes
    session.commit()

    # Mark the final None in the queue as complete
    work.task_done()
    return


def very_expensive_object_creation(data):
    global work
    very_expensive_object = WhateverType(**data)
    # Perform lots of computation
    work.put(very_expensive_object)
    return


def main():
    writer_process = multiprocessing.Process(target=writer)
    writer_process.start()

    # Create your pool that will feed the queue here, i.e.
    workers = multiprocessing.Pool()
    # Dispatch lots of work to very_expensive_object_creation in parallel here
    workers.map(very_expensive_object_creation, some_iterable_source_here)
    # --or-- in whatever other way floats your boat, such as
    workers.apply_async(very_expensive_object_creation, args=(some_data_1,))
    workers.apply_async(very_expensive_object_creation, args=(some_data_2,))
    # etc.

    # Signal that we won't dispatch any more work
    workers.close()

    # Wait for the creation work to be done
    workers.join()

    # Trigger the exit condition for the writer
    work.put(None)

    # Wait for the queue to be emptied
    work.join()

    return
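
One usage note on the sketch above: on platforms that spawn a fresh interpreter instead of forking (e.g. Windows), the entry point needs the usual guard so the pool workers and the writer process can import this module safely:

if __name__ == '__main__':
    main()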