cx_Oracle executemany with CLOB

Time: 2011-07-08 13:17:35

Tags: python oracle10g cx-oracle

I am trying to parse several CSVs and insert their data into tables using cx_Oracle. Inserting with execute works fine, but when I try the same process with executemany I get an error. The code using execute is:

with open(key,'r') as file:
    for line in file:
        data = line.split(",")
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"            
        cursor.execute(query, data)

but when I replace it with:
with open(key,'r') as file:
    rows = []
    for line in file:
        data = line.split(",")
        rows.append(data)
    if len(rows) > 0:
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"
        cursor.prepare(query)
        cursor.executemany(None, rows)
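As a sanity check (a minimal sketch not in the original post; the helper name `build_insert` and the table name `t1` are made up for illustration), the concatenation loop above produces a statement with numbered positional binds:

```python
def build_insert(table, ncols):
    # Join numbered bind variables :0, :1, ... just as the
    # string-concatenation loop in the question does.
    binds = ",".join(":" + str(col) for col in range(ncols))
    return "INSERT INTO " + table + " VALUES (" + binds + ")"

print(build_insert("t1", 3))  # INSERT INTO t1 VALUES (:0,:1,:2)
```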

When trying to insert into a table that has a CLOB column with data exceeding 4000 bytes, I get "ValueError: string data too large". executemany works fine when the table has no CLOB column. Is there a way to make cx_Oracle treat the appropriate column as a CLOB when performing executemany?

1 answer:

Answer 0 (score: 4)

Try setting the input size for the large column to cx_Oracle.CLOB. It may not work if you have binary data, but it should work for any text in a CSV. The 2K threshold is probably lower than it needs to be.

Note that executemany seems to be a lot slower when CLOB columns are involved, but it is still much better than repeated executes:

def _executemany(cursor, sql, data):
    '''
    run the parameterized sql with the given dataset using cursor.executemany.
    if any column contains string values longer than 2k, use CLOBs to avoid
    "string data too large" errors.

    @param sql parameterized sql, with parameters named according to the field names in data
    @param data array of dicts, one per row to execute.  each dict must have fields corresponding
                to the parameter names in sql
    '''
    input_sizes = {}
    for row in data:
        for k, v in row.items():
            if isinstance(v, basestring) and len(v) > 2000:
                input_sizes[k] = cx_Oracle.CLOB
    cursor.setinputsizes(**input_sizes)
    cursor.executemany(sql, data)
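The CLOB-detection scan in _executemany needs no database connection, so it can be exercised on its own. A minimal sketch (the helper name clob_input_sizes and the sentinel CLOB object are illustrative stand-ins, and str replaces the Python 2 basestring used above):

```python
CLOB = object()  # stands in for cx_Oracle.CLOB in this connection-free sketch

def clob_input_sizes(data, threshold=2000):
    """Return {field: CLOB} for every field whose string value exceeds
    the threshold in any row -- the same scan _executemany performs
    before calling cursor.setinputsizes(**input_sizes)."""
    input_sizes = {}
    for row in data:
        for k, v in row.items():
            if isinstance(v, str) and len(v) > threshold:
                input_sizes[k] = CLOB
    return input_sizes

rows = [{"id": "1", "body": "x" * 5000},
        {"id": "2", "body": "short"}]
print(clob_input_sizes(rows))  # only "body" exceeds the threshold
```

Because setinputsizes is called with keyword arguments here, this variant assumes named bind parameters matching the dict keys; the positional :0-style binds from the question would instead use a positional setinputsizes call.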