Why is this simple test case 25 times slower when inserting 100,000 rows with SQLAlchemy than when using the sqlite3 driver directly? I have seen similar slowdowns in real-world applications. Am I doing something wrong?
#!/usr/bin/env python
# Why is SQLAlchemy with SQLite so slow?
# Output from this program:
# SqlAlchemy: Total time for 100000 records 10.74 secs
# sqlite3: Total time for 100000 records 0.40 secs

import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    engine = create_engine(dbname, echo=False)
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
    DBSession.commit()
    print "SqlAlchemy: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"

if __name__ == '__main__':
    test_sqlalchemy(100000)
    test_sqlite3(100000)
I have tried numerous variations (see http://pastebin.com/zCmzDraU).
Answer 0 (score: 177)
The SQLAlchemy ORM uses the unit of work pattern when synchronizing changes to the database. This pattern goes far beyond a simple "insert" of data. It includes that attributes assigned on objects are received via an attribute instrumentation system that tracks changes as they are made; that all inserted rows are tracked in an identity map, which means that for each row SQLAlchemy must retrieve its "last inserted id" if it was not given in advance; and that the rows to be inserted are scanned and sorted for dependencies as needed. Objects are also subject to a fair degree of bookkeeping to keep all of this running, which for a very large number of rows at once can mean an inordinate amount of time spent on large data structures, so it's best to chunk them.
Basically, the unit of work is a large degree of automation: it persists a complex object graph into a relational database with no explicit persistence code, and that automation has a price.
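To make that bookkeeping concrete, here is a minimal sketch of what the session tracks for a single pending object. It assumes the Customer model and DBSession from the script below, after the engine has been configured:

session = DBSession()        # the actual Session behind the scoped_session
c = Customer(name='NAME 0')
session.add(c)
assert c in session.new      # change tracking: the object is pending
session.flush()              # emits the INSERT for this one row
assert c.id is not None      # the "last inserted id" was fetched for the row
assert c in session          # the row is now tracked in the identity map

Every one of those steps runs for each of the 100,000 objects in the benchmark, which is where the time goes.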
So ORMs are basically not intended for high-performance bulk inserts. This is the whole reason SQLAlchemy offers two separate libraries: if you look at http://docs.sqlalchemy.org/en/latest/index.html, you'll see the index page is split into two distinct halves, one for the ORM and one for the Core. You cannot use SQLAlchemy effectively without understanding both.
For the use case of fast bulk inserts, SQLAlchemy provides the core, which is the SQL generation and execution system that the ORM builds on top of. Using this system effectively, we can produce an INSERT that is competitive with the raw SQLite version. The script below illustrates this, along with an ORM version that pre-assigns primary key identifiers so the ORM can use executemany() to insert the rows. Both ORM versions also chunk the flushes into batches of 1000 records, which has a significant performance impact.
The runtimes observed here are:
SqlAlchemy ORM: Total time for 100000 records 16.4133379459 secs
SqlAlchemy ORM pk given: Total time for 100000 records 9.77570986748 secs
SqlAlchemy Core: Total time for 100000 records 0.568737983704 secs
sqlite3: Total time for 100000 records 0.595796823502 sec
The script:
import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    global engine
    engine = create_engine(dbname, echo=False)
    DBSession.remove()
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy_orm(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_orm_pk_given(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer(id=i+1, name="NAME " + str(i))
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM pk given: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_core(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    engine.execute(
        Customer.__table__.insert(),
        [{"name": 'NAME ' + str(i)} for i in range(n)]
    )
    print "SqlAlchemy Core: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"

if __name__ == '__main__':
    test_sqlalchemy_orm(100000)
    test_sqlalchemy_orm_pk_given(100000)
    test_sqlalchemy_core(100000)
    test_sqlite3(100000)
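As a side note, SQLAlchemy releases from 1.0 onward (newer than this answer) also added bulk operations on the Session itself, which sit between the ORM and Core approaches in both convenience and speed. A sketch, assuming the same model and session as above:

DBSession.bulk_insert_mappings(
    Customer,
    [{"name": "NAME " + str(i)} for i in range(100000)]
)
DBSession.commit()

This skips most of the unit-of-work bookkeeping while still going through the mapper, so it is typically much faster than session.add() in a loop but not quite as fast as a Core executemany.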
See also: http://docs.sqlalchemy.org/en/latest/faq/performance.html
Answer 1 (score: 19)
Excellent answer from @zzzeek. For those wondering about the same statistics for queries, I have modified @zzzeek's code slightly to query those same records right after inserting them and then convert them to a list of dicts.
Here are the results:
SqlAlchemy ORM: Total time for 100000 records 11.9210000038 secs
SqlAlchemy ORM query: Total time for 100000 records 2.94099998474 secs
SqlAlchemy ORM pk given: Total time for 100000 records 7.51800012589 secs
SqlAlchemy ORM pk given query: Total time for 100000 records 3.07699990273 secs
SqlAlchemy Core: Total time for 100000 records 0.431999921799 secs
SqlAlchemy Core query: Total time for 100000 records 0.389000177383 secs
sqlite3: Total time for 100000 records 0.459000110626 sec
sqlite3 query: Total time for 100000 records 0.103999853134 secs
Interestingly, querying with bare sqlite3 is still about 3 times faster than with SQLAlchemy Core. I guess that's the price you pay for getting a ResultProxy back instead of a bare sqlite3 row.
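That extra cost does buy some convenience: each Core result row behaves like a sqlite3 tuple but also supports access by column name. A quick sketch against the Core query from the code below (RowProxy is the result-row class in SQLAlchemy versions of that era):

result = conn.execute(select([Customer.__table__]))
row = result.fetchone()
print(row[0])       # positional access, like a raw sqlite3 row
print(row['name'])  # name-based access, part of the RowProxy overhead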
SQLAlchemy Core is about 8 times faster than the ORM, so querying through the ORM is much slower no matter what.
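If you don't need fully materialized Customer objects, one middle ground is to query individual columns through the ORM session; this returns plain tuples and skips most of the per-object overhead. A sketch against the same model (not part of the benchmark numbers above):

q = DBSession.query(Customer.id, Customer.name)
rows = [{'id': cid, 'name': name} for cid, name in q]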
Here is the code I used:
import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.sql import select

Base = declarative_base()
DBSession = scoped_session(sessionmaker())

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    global engine
    engine = create_engine(dbname, echo=False)
    DBSession.remove()
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)

def test_sqlalchemy_orm(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    t0 = time.time()
    q = DBSession.query(Customer)
    rows = [{'id': r.id, 'name': r.name} for r in q]
    print "SqlAlchemy ORM query: Total time for " + str(len(rows)) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_orm_pk_given(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in range(n):
        customer = Customer(id=i+1, name="NAME " + str(i))
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print "SqlAlchemy ORM pk given: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    t0 = time.time()
    q = DBSession.query(Customer)
    rows = [{'id': r.id, 'name': r.name} for r in q]
    print "SqlAlchemy ORM pk given query: Total time for " + str(len(rows)) + " records " + str(time.time() - t0) + " secs"

def test_sqlalchemy_core(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    engine.execute(
        Customer.__table__.insert(),
        [{"name": 'NAME ' + str(i)} for i in range(n)]
    )
    print "SqlAlchemy Core: Total time for " + str(n) + " records " + str(time.time() - t0) + " secs"
    conn = engine.connect()
    t0 = time.time()
    sql = select([Customer.__table__])
    q = conn.execute(sql)
    rows = [{'id': r[0], 'name': r[1]} for r in q]  # column 0 is id, column 1 is name
    print "SqlAlchemy Core query: Total time for " + str(len(rows)) + " records " + str(time.time() - t0) + " secs"

def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute("CREATE TABLE customer (id INTEGER NOT NULL, name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn

def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in range(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print "sqlite3: Total time for " + str(n) + " records " + str(time.time() - t0) + " sec"
    t0 = time.time()
    q = conn.execute("SELECT * FROM customer").fetchall()
    rows = [{'id': r[0], 'name': r[1]} for r in q]  # column 0 is id, column 1 is name
    print "sqlite3 query: Total time for " + str(len(rows)) + " records " + str(time.time() - t0) + " secs"

if __name__ == '__main__':
    test_sqlalchemy_orm(100000)
    test_sqlalchemy_orm_pk_given(100000)
    test_sqlalchemy_core(100000)
    test_sqlite3(100000)
I also tested without converting the query results to dicts, and the statistics are similar:
SqlAlchemy ORM: Total time for 100000 records 11.9189999104 secs
SqlAlchemy ORM query: Total time for 100000 records 2.78500008583 secs
SqlAlchemy ORM pk given: Total time for 100000 records 7.67199993134 secs
SqlAlchemy ORM pk given query: Total time for 100000 records 2.94000005722 secs
SqlAlchemy Core: Total time for 100000 records 0.43700003624 secs
SqlAlchemy Core query: Total time for 100000 records 0.131000041962 secs
sqlite3: Total time for 100000 records 0.500999927521 sec
sqlite3 query: Total time for 100000 records 0.0859999656677 secs
Querying with SQLAlchemy Core is about 20 times faster than with the ORM.
It is important to note that these tests are very superficial and should not be taken too seriously. I might be missing some obvious tricks that would change the statistics completely.
The best way to measure performance improvements is directly in your own application. Don't take my statistics for granted.
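For measuring inside your own application, the standard library's cProfile gives more insight than wall-clock totals. A generic sketch (run_my_workload is just a placeholder for whatever you want to profile):

import cProfile
import pstats

cProfile.run('run_my_workload()', 'workload.prof')  # profile the hypothetical workload
stats = pstats.Stats('workload.prof')
stats.sort_stats('cumulative').print_stats(10)      # top 10 calls by cumulative time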
Answer 2 (score: 0)