Basic pyodbc bulk insert

Date: 2016-05-03 15:55:13

Tags: python sql-server pyodbc

In a Python script, I need to run a query against one data source and insert each row from that query into a table on a different data source. I would normally do this with a single insert/select statement using a T-SQL linked server connection, but I don't have a linked server connection to this particular data source.

I can't find a simple pyodbc example of this. Here is how I would do it, but I'm guessing that executing an insert statement inside a loop is pretty slow.

result = ds1Cursor.execute(selectSql)

for row in result:
    insertSql = "insert into TableName (Col1, Col2, Col3) values (?, ?, ?)"
    ds2Cursor.execute(insertSql, row[0], row[1], row[2])
    ds2Cursor.commit()

Is there a better bulk way to insert records with pyodbc? Or is this a relatively efficient way to do it anyway? I'm using SQL Server 2012 and the latest pyodbc and Python versions.

4 Answers:

Answer 0 (Score: 10):

The best way to handle this is to use the pyodbc function executemany.

# Fetch all rows from the source query, then insert them in a single executemany call.
ds1Cursor.execute(selectSql)
result = ds1Cursor.fetchall()

ds2Cursor.executemany('INSERT INTO [TableName] (Col1, Col2, Col3) VALUES (?, ?, ?)', result)
ds2Cursor.commit()
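
If the source result set is large, pulling everything into memory with fetchall() can be expensive. A minimal variation (not part of the original answer) streams the rows in chunks with fetchmany(), assuming the same ds1Cursor and ds2Cursor objects from the question:

insertSql = "INSERT INTO [TableName] (Col1, Col2, Col3) VALUES (?, ?, ?)"
ds1Cursor.execute(selectSql)
while True:
    batch = ds1Cursor.fetchmany(1000)  # transfer 1000 source rows at a time
    if not batch:
        break
    ds2Cursor.executemany(insertSql, batch)
ds2Cursor.commit()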

Answer 1 (Score: 8):

Here is a function that can bulk insert into a SQL Server database.

import pyodbc
import contextlib

def bulk_insert(table_name, file_path):
    # Note: BULK INSERT reads the file from the SQL Server machine's point of view,
    # and FORMAT = 'CSV' requires SQL Server 2017 or later.
    string = "BULK INSERT {} FROM '{}' WITH (FORMAT = 'CSV');"
    with contextlib.closing(pyodbc.connect("MYCONN")) as conn:  # closed automatically on exit
        with contextlib.closing(conn.cursor()) as cursor:
            cursor.execute(string.format(table_name, file_path))
        conn.commit()

This definitely works.
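
As a hypothetical usage example (the table name and file path below are placeholders, and "MYCONN" above stands in for a real connection string):

bulk_insert("dbo.MyTable", r"C:\bulk\mydata.csv")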

Update: from the comments, and from coding with it regularly, I've noticed that pyodbc is better supported than pypyodbc.

Answer 2 (Score: 1):

You should use executemany together with cursor.fast_executemany = True to improve performance.

pyodbc's default behaviour is to run many separate inserts, which is inefficient. By applying fast_executemany you can dramatically improve performance.

Here is an example:

connection = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server}',host='host', database='db', user='usr', password='foo')
cursor = connection.cursor()

# I'm the important line
cursor.fast_executemany = True

sql = "insert into TableName (Col1, Col2, Col3) values (?, ?, ?)"
tuples=[('foo','bar', 'ham'), ('hoo','far', 'bam')]
cursor.executemany(sql, tuples)
cursor.commit()
cursor.close()
connection.close()

Docs. Note that this feature has been available since pyodbc 4.0.19 (Oct 23, 2017).
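
If you are unsure which pyodbc version an environment has, a small guard (a sketch, not part of the original answer) can check pyodbc.version before turning the flag on:

import pyodbc

def enable_fast_executemany(cursor):
    # fast_executemany was added in pyodbc 4.0.19; skip it on older builds.
    version = tuple(int(part) for part in pyodbc.version.split(".")[:3])
    if version >= (4, 0, 19):
        cursor.fast_executemany = True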

Answer 3 (Score: 0):

After the discontinuation of the pymssql library (which seems to be under development again), we started using the cTDS library developed by the smart people at Zillow and, to our surprise, it supports FreeTDS bulk inserts.

As the name suggests, cTDS is written in C on top of the FreeTDS library, which makes it very fast. IMHO this is the best way to bulk insert into SQL Server, since the ODBC driver does not support bulk insert, and the suggested executemany or fast_executemany are not really bulk insert operations. The BCP tool and T-SQL BULK INSERT have their limitations, since they require the file to be accessible by SQL Server, which can be a deal breaker in many scenarios.

Below is a naive implementation of bulk inserting a CSV file. Please forgive any errors; I wrote this from memory without testing it.

I don't know why, but for my server, which uses Latin1_General_CI_AS, I needed to wrap the data going into NVarChar columns with ctds.SqlVarChar. I opened an issue about this but developers said the naming is correct, so I changed my code to preserve my mental health.

import csv
import time
import ctds

def _to_varchar(txt: str):
    """
    Wraps strings into ctds.SqlNVarChar.
    """
    if txt == "null":
        return None
    return ctds.SqlNVarChar(txt)

def _to_nvarchar(txt: str):
    """
    Wraps strings into ctds.SqlVarChar (UTF-16LE encoded, for NVarChar columns).
    """
    if txt == "null":
        return None
    return ctds.SqlVarChar(txt.encode("utf-16le"))
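
# The original answer also references _to_datetime and _to_int but does not show
# them; the converters below are assumed placeholders in the same spirit: pass
# "null" through as None, otherwise coerce the CSV text to the target type.
import datetime

def _to_int(txt: str):
    if txt == "null":
        return None
    return int(txt)

def _to_datetime(txt: str):
    # Assumes ISO-8601 timestamps in the CSV; adjust the parsing to your data.
    if txt == "null":
        return None
    return datetime.datetime.fromisoformat(txt)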

def read(file):
    """
    Open CSV File. 
    Each line is a column:value dict.
    https://docs.python.org/3/library/csv.html?highlight=csv#csv.DictReader
    """
    with open(file, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            yield row

def transform(row):
    """
    Do transformations to data before loading.

    Data specified for bulk insertion into text columns (e.g. VARCHAR,
    NVARCHAR, TEXT) is not encoded on the client in any way by FreeTDS.
    Because of this behavior it is possible to insert textual data with
    an invalid encoding and cause the column data to become corrupted.

    To prevent this, it is recommended the caller explicitly wrap the
    the object with either ctds.SqlVarChar (for CHAR, VARCHAR or TEXT
    columns) or ctds.SqlNVarChar (for NCHAR, NVARCHAR or NTEXT columns).
    For non-Unicode columns, the value should be first encoded to
    column’s encoding (e.g. latin-1). By default ctds.SqlVarChar will
    encode str objects to utf-8, which is likely incorrect for most SQL
    Server configurations.

    https://zillow.github.io/ctds/bulk_insert.html#text-columns
    """
    row["col1"] = _to_datetime(row["col1"])
    row["col2"] = _to_int(row["col2"])
    row["col3"] = _to_nvarchar(row["col3"])
    row["col4"] = _to_varchar(row["col4"])

    return row

def load(rows):
    stime = time.time()

    with ctds.connect(**DBCONFIG) as conn:
        with conn.cursor() as curs:
            curs.execute("TRUNCATE TABLE MYSCHEMA.MYTABLE")

        loaded_lines = conn.bulk_insert("MYSCHEMA.MYTABLE", map(transform, rows))

    etime = time.time()
    print(loaded_lines, " rows loaded in ", etime - stime)

if __name__ == "__main__":
    load(read('data.csv'))
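
DBCONFIG above is just a dict of connection settings that gets unpacked into ctds.connect(). A hypothetical example (server, credentials and database are all placeholders):

# Hypothetical connection settings; every value here is a placeholder.
DBCONFIG = {
    "server": "sqlhost",
    "port": 1433,
    "user": "usr",
    "password": "foo",
    "database": "db",
}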