Python multiprocessing with an MSSQL cursor

Time: 2016-02-11 20:00:18

Tags: python sql-server python-2.7 pyodbc python-multiprocessing

Is there a connection pool, or a way to share a connection across multiple processes?

I am trying to use a single connection across multiple processes. Here is the code (running on Python 2.7 with pyodbc).

# Import third-party packages
import pathos.multiprocessing as mp
import pyodbc

class MyManagerClass(object):
    def __init__(self):
        self.conn = None
        self.result = []
    def connect_to_db(self):
        conn = pyodbc.connect("DSN=cpmeast;UID=dntcore;PWD=dntcorevs2")
        cursor = conn.cursor()
        self.conn = conn
        return cursor

    def read_data(self, *args):
        cursor = args[0][0]
        data = args[0][1]
        print 'Running query'
        cursor.execute("WAITFOR DELAY '00:00:02';select GETDATE(), '"+data+"';")
        self.result.append(cursor.fetchall())

def read_data(*args):
    print 'Running query', args
#     cursor.execute("WAITFOR DELAY '00:00:02';select GETDATE(), '"+data+"';")


def main():
    dbm = MyManagerClass()
    conn = pyodbc.connect("DSN=cpmeast;UID=dntcore;PWD=dntcorevs2")
    cursor = conn.cursor()

    pool = mp.ProcessingPool(4)
    for i in pool.imap(dbm.read_data, ((cursor, 'foo'), (cursor, 'bar'))):
        print i
    pool.close()
    pool.join()

    cursor.close();
    dbm.conn.close()

    print 'Result', dbm.result
    print 'Closed'

if __name__ == '__main__':
    main()

I get the following error:

Process PoolWorker-1:
Traceback (most recent call last):
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 227, in _bootstrap
    self.run()
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 85, in run
    self._target(*self._args, **self._kwargs)
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/pool.py", line 54, in worker
    for job, i, func, args, kwds in iter(inqueue.get, None):
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/queue.py", line 327, in get
    return recv()
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 209, in loads
    return load(file)
  File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 199, in load
    obj = pik.load()
  File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 1083, in load_newobj
    obj = cls.__new__(cls, *args)
TypeError: object.__new__(pyodbc.Cursor) is not safe, use pyodbc.Cursor.__new__()
Process PoolWorker-2:
Traceback (most recent call last):
  (identical to the PoolWorker-1 traceback above)
TypeError: object.__new__(pyodbc.Cursor) is not safe, use pyodbc.Cursor.__new__()

1 Answer:

Answer 0 (score: 0)

The problem lies in the pickling stage. Pickle simply does not know how to serialize a database connection object. Consider:

import pickle
import pymssql
a = {'hello': 'world'}
server = 'server'
username = 'username'
password = 'password'
database = 'database'
conn = pymssql.connect(host=server, user=username, password=password, database=database)
with open('filename.pickle', 'wb') as handle:
    pickle.dump(conn, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open('filename.pickle', 'rb') as handle:
    b = pickle.load(handle)
print(a == b)

This produces the following error message:

Traceback (most recent call last):
  File "pickle_ex.py", line 10, in <module>
    pickle.dump(conn, handle, protocol=pickle.HIGHEST_PROTOCOL)
  File "stringsource", line 2, in _mssql.MSSQLConnection.__reduce_cython__
TypeError: no default __reduce__ due to non-trivial __cinit__

However, if you replace conn with a in the pickle.dump call, the code runs and prints True. You might be able to define a custom __reduce__ method on your class, but I wouldn't try it: it would effectively make temporary tables behave like global temporary tables that are only accessible across these processes, which shouldn't be allowed to happen in the first place.
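For what it's worth, here is a minimal sketch (not part of the original answer) of the usual workaround: rather than pickling a shared cursor, let each worker open its own pyodbc connection, so nothing unpicklable ever crosses the process boundary. The DSN string and query below are the placeholders from the question, and the standard multiprocessing.Pool suffices because only plain strings and tuples are exchanged between processes.

import pyodbc
from multiprocessing import Pool

# Placeholder connection string, copied from the question.
DSN = "DSN=cpmeast;UID=dntcore;PWD=dntcorevs2"

def read_data(data):
    # The connection and cursor are created inside the worker process,
    # so they never need to be pickled.
    conn = pyodbc.connect(DSN)
    try:
        cursor = conn.cursor()
        cursor.execute("WAITFOR DELAY '00:00:02'; SELECT GETDATE(), ?;", data)
        # Convert pyodbc Row objects to plain tuples so the result
        # pickles cleanly on the way back to the parent process.
        return [tuple(row) for row in cursor.fetchall()]
    finally:
        conn.close()

if __name__ == '__main__':
    pool = Pool(4)
    try:
        # Only plain strings are sent to the workers.
        for rows in pool.imap(read_data, ['foo', 'bar']):
            print(rows)
    finally:
        pool.close()
        pool.join()

If you really need connection pooling, it has to live inside each process; ODBC connection handles cannot be shared across process boundaries.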

Link: my pickle code came from here: How can I use pickle to save a dict?