Improving pandas to_sql() performance with SQL Server

Date: 2020-07-30 10:15:51

Tags: python sql-server pandas sqlalchemy pyodbc

I'm turning to you because I can't solve a problem with the pandas.DataFrame.to_sql() method.

I've established the connection between my script and my database, and I can send queries, but in practice it is far too slow for me.

I would like to find a way to improve my script's performance. Maybe someone has a solution?

Here is my code:

  engine = sqlalchemy.create_engine(con['sql']['connexion_string'])
  conn = engine.connect()
  metadata = sqlalchemy.MetaData()
  try:
    if con['sql']['strategy'] == 'NEW':
      # wipe the target table, then bulk-insert the dataframe
      query = sqlalchemy.Table(con['sql']['table'], metadata).delete()
      conn.execute(query)
      Sql_to_deploy.to_sql(con['sql']['table'], engine, if_exists='append', index=False, chunksize=1000, method='multi')
    elif con['sql']['strategy'] == 'APPEND':
      # keep existing rows and append the dataframe
      Sql_to_deploy.to_sql(con['sql']['table'], engine, if_exists='append', index=False, chunksize=1000, method='multi')
    else:
      pass
  except Exception as e:
    print(type(e))

When I remove the chunksize and method parameters it works, but it is far too slow (about 3 minutes for 30,000 rows). When I pass those parameters, I get a sqlalchemy.exc.ProgrammingError...
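Edit: from what I've read, that ProgrammingError may come from SQL Server's limit of 2100 bound parameters per statement: with method='multi', every cell in a chunk becomes a bound parameter, so chunksize × number-of-columns has to stay below 2100, and chunksize=1000 exceeds it for any table with three or more columns. A minimal sketch of capping the chunk size (reusing the con and Sql_to_deploy names from my code above):

# SQL Server allows at most 2100 bound parameters per statement; with
# method='multi', each chunk binds (rows_per_chunk * n_columns) parameters.
MAX_PARAMS = 2100

# largest chunksize that keeps each multi-row INSERT under the limit
safe_chunk = max(1, (MAX_PARAMS - 1) // len(Sql_to_deploy.columns))

Sql_to_deploy.to_sql(con['sql']['table'], engine, if_exists='append',
                     index=False, chunksize=safe_chunk, method='multi')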

Thanks for your help!

2 answers:

Answer 0: (score: 0)

I synthesized a 36k-row dataframe. It always inserts in under 1.5 s. There is also a deliberately clunky select with an expensive where clause and a group by; it gets slower as the table grows, but always stays under 0.5 s.

  1. There are no indexes, so inserts are fast
  2. There are also no indexes to help the select
  3. This runs against MariaDB in a docker container on a laptop, so not everything is optimized
  4. All default settings performed well

More information

  1. What indexes are on your table? As a rule of thumb, fewer is better: more indexes mean slower inserts (see the index-inspection sketch after the output below)
  2. What timings do you see from this synthetic case?
import numpy as np
import pandas as pd
import random, time
import sqlalchemy

# build a 36k-row dataframe: 3 years x 12 months x 1000 stocks
a = np.array(np.meshgrid([2018, 2019, 2020], [1,2,3,4,5,6,7,8,9,10,11,12],
                         [f"Stock {i+1}" for i in range(1000)],
                         )).reshape(3, -1)
a = [a[0], a[1], a[2], [round(random.uniform(-1, 2.5), 1) for e in a[0]]]
df1 = pd.DataFrame({"Year": a[0], "Month": a[1], "Stock": a[2], "Sharpe": a[3]})


temptable = "tempx"

engine = sqlalchemy.create_engine('mysql+pymysql://sniffer:sniffer@127.0.0.1/sniffer')
conn = engine.connect()
try:
    # uncomment to start from an empty table on each run
    # conn.execute(f"drop table {temptable}")
    pass
except sqlalchemy.exc.OperationalError:
    pass  # ignore drop error if table does not exist

# time the bulk insert
start = time.time()
df1.to_sql(name=temptable, con=engine, index=False, if_exists='append')
curr = conn.execute(f"select count(*) as c from {temptable}")
res = [{curr.keys()[i]: v for i, v in enumerate(t)} for t in curr.fetchall()]
print(f"Time: {time.time()-start:.2f}s database count:{res[0]['c']}, dataframe count:{len(df1)}")
curr.close()

# time a select with an expensive where clause plus a group by
start = time.time()
curr = conn.execute(f"""select Year, count(*) as c
                        from {temptable}
                        where Month=1
                        and Sharpe between 1 and 2
                        and stock like '%%2%%'
                        group by Year""")
res = [{curr.keys()[i]: v for i, v in enumerate(t)} for t in curr.fetchall()]
print(f"Time: {time.time()-start:.2f}s database result:{res} {curr.keys()}")
curr.close()
conn.close()

Output

Time: 1.23s database count:360000, dataframe count:36000
Time: 0.27s database result:[{'Year': '2018', 'c': 839}, {'Year': '2019', 'c': 853}, {'Year': '2020', 'c': 882}] ['Year', 'c']
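
As an aside: if you are not sure what indexes the target table carries, SQLAlchemy's inspector can list them without dialect-specific SQL. A minimal sketch against the same engine as above (tempx is the table name from this example):

import sqlalchemy

engine = sqlalchemy.create_engine('mysql+pymysql://sniffer:sniffer@127.0.0.1/sniffer')
inspector = sqlalchemy.inspect(engine)

# each entry describes one index: its name, covered columns and uniqueness
for ix in inspector.get_indexes("tempx"):
    print(ix["name"], ix["column_names"], "unique" if ix["unique"] else "non-unique")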

Answer 1: (score: 0)

For mssql+pyodbc, you will get the best performance from to_sql if you:

  1. use Microsoft's ODBC Driver for SQL Server, and
  2. enable fast_executemany=True in your create_engine call.

For example, this code runs in just over 3 seconds on my network:

from time import time
import pandas as pd
import sqlalchemy as sa

# local engine, used only to read the 30,000 test rows
ngn_local = sa.create_engine("mssql+pyodbc://mssqlLocal64")
# remote engine we upload to, with fast_executemany enabled
ngn_remote = sa.create_engine(
    (
        "mssql+pyodbc://sa:_whatever_@192.168.0.199/mydb"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    ),
    fast_executemany=True,
)

df = pd.read_sql_query(
    "SELECT * FROM MillionRows WHERE ID <= 30000", ngn_local
)

# time the upload across the network
t0 = time()
df.to_sql("pd_test", ngn_remote, index=False, if_exists="replace")
print(f"{time() - t0} seconds")

With fast_executemany=False (which is the default), the same process takes 143 seconds (2.4 minutes).
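
Applied to the code in the question, a sketch would look like this (keeping the con dictionary and the Sql_to_deploy dataframe from the question, and dropping method='multi', since fast_executemany already batches the INSERTs at the driver level):

import sqlalchemy

# assumes con['sql']['connexion_string'] is an mssql+pyodbc URL that
# specifies Microsoft's ODBC Driver for SQL Server
engine = sqlalchemy.create_engine(
    con['sql']['connexion_string'],
    fast_executemany=True,
)

# no method='multi' and no manual chunksize: let pyodbc batch the rows
Sql_to_deploy.to_sql(
    con['sql']['table'], engine,
    if_exists='append', index=False,
)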