SQLAlchemy + Celery with scoped_session errors

Date: 2017-05-12 18:58:34

Tags: python sqlalchemy celery

I am trying to run a celery_beat job that kicks off a series of parallel jobs, but I am getting this error:

ResourceClosedError: This result object does not return rows. It has been closed automatically.

Here are the relevant files. Note that I am using scoped_session:

# db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

from settings import SETTINGS  # assumed: wherever the SETTINGS dict is defined

engine = create_engine(SETTINGS['DATABASE_URL'], pool_recycle=3600, pool_size=10)
db_session = scoped_session(sessionmaker(
    autocommit=False, autoflush=False, bind=engine))
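
For context, scoped_session wraps the session factory in a thread-local registry: calling db_session() repeatedly in one thread returns the same Session until remove() is called. A minimal standalone sketch (using an in-memory SQLite engine instead of the question's Postgres URL):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')  # in-memory stand-in for Postgres
db_session = scoped_session(sessionmaker(bind=engine))

s1 = db_session()   # creates the Session for this thread
s2 = db_session()   # same thread, so this is the very same Session object
assert s1 is s2
db_session.remove() # closes it and clears this thread's registry

Note that thread-local scoping says nothing about forked processes, which is what goes wrong below.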

And here is tasks.py:

# tasks.py
from sqlalchemy import exists

from db import db_session
from models import RSSSummary   # assumed: the module where the model lives
from celery_app import app      # assumed: the Celery app instance

@app.task
def db_task(pid):
    db_session()
    r = db_session.query(exists().where(RSSSummary.id == pid)).scalar()
    print pid, r
    db_session.remove()

@app.task
def sched_test():
    ids = [0, 1]
    db_task.delay(ids[0])
    db_task.delay(ids[1])

Then, when I try to launch sched_test like this:

>>> tasks.sched_test.delay()

I get:

DatabaseError: (psycopg2.DatabaseError) error with status PGRES_TUPLES_OK and no message from the libpq

I believe I am using scoped_sessions correctly.

Any suggestions?

1 Answer:

Answer 0: (score: 0)

I was getting the same error, along with similar ones:

DatabaseError: server sent data ("D" message) without prior row description ("T" message)
lost synchronization with server: got message type "�", length -1244613424

DatabaseError: lost synchronization with server: got message type "0", length 842674226

It turned out this was happening because my Celery worker processes were sharing SQLAlchemy connections. The SQLAlchemy docs address this:


"It's critically important, when using a connection pool, and by extension when using an Engine created via create_engine(), that the pooled connections are not shared to a forked process. TCP connections are represented as file descriptors, which usually work across process boundaries, meaning this will cause concurrent access to the file descriptor on behalf of two or more entirely independent Python interpreter states."
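
Those same pooling docs also describe an alternative to disposing connections after the fork: turn pooling off entirely with NullPool, so every checkout opens a fresh connection and none can leak across a fork boundary. A sketch against the db.py above (SETTINGS as in the question; this trades the fix below for per-query connection overhead):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool opens a new DB connection per checkout and closes it on release,
# so a forked worker can never inherit a live pooled connection.
engine = create_engine(SETTINGS['DATABASE_URL'], poolclass=NullPool)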

I solved this by using a Celery signal to invalidate all existing connections in the pool when each worker process starts:

from celery.signals import worker_process_init

@worker_process_init.connect
def prep_db_pool(**kwargs):
    """
    When Celery forks the parent process, the db engine & connection pool are
    included in that. But db connections should not be shared across processes,
    so we tell the engine to dispose of all existing connections, which causes
    new ones to be opened in the child processes as needed.
    More info: https://docs.sqlalchemy.org/en/latest/core/pooling.html#using-connection-pools-with-multiprocessing
    """
    # The "with" here is for a Flask app using Flask-SQLAlchemy. If you don't
    # have a Flask app, just remove the "with" and call .dispose() directly
    # on your SQLAlchemy db engine.
    with some_flask_app.app_context():
        db.engine.dispose()