I'm working on a personal project that generates a lot of data, and I thought it made sense to store it in a local database. However, as the database grows, I'm seeing an insane slowdown that makes the project unworkable.
I put together a simple test that shows what I'm doing: I build up a dictionary with a bunch of local processing (about a million entries), bulk-insert it into a SQLite DB, then loop and do it all again. Here's the code:
from collections import defaultdict
import sqlite3
import datetime
import random


def log(s):
    now = datetime.datetime.now()
    print(str(now) + ": " + str(s))


def create_table():
    conn = create_connection()
    with conn:
        cursor = conn.cursor()
        sql = """
        CREATE TABLE IF NOT EXISTS testing (
            test text PRIMARY KEY,
            number integer
        );"""
        cursor.execute(sql)
    conn.close()


def insert_many(local_db):
    # Upsert: insert a new row, or add this batch's count to the existing row.
    sql = """INSERT INTO testing(test, number) VALUES(?, ?) ON CONFLICT(test) DO UPDATE SET number=number+?;"""
    inserts = []
    for key, value in local_db.items():
        inserts.append((key, value, value))
    conn = create_connection()
    with conn:
        cursor = conn.cursor()
        cursor.executemany(sql, inserts)
    conn.close()


def main():
    log("Starting to process records")
    for i in range(1, 21):
        # Accumulate ~1M counted entries in memory, then flush in one batch.
        local_db = defaultdict(int)
        for j in range(0, 1000000):
            s = "Testing insertion " + str(random.randrange(100000000))
            local_db[s] += 1
        log("Created local DB for " + str(1000000 * i) + " records")
        insert_many(local_db)
        log("Finished inserting " + str(1000000 * i) + " records")


def create_connection():
    conn = None
    try:
        conn = sqlite3.connect('/home/testing.db')
    except sqlite3.Error as e:  # the original caught a bare, undefined `Error`
        print(e)
    return conn


if __name__ == '__main__':
    create_table()
    main()
This hums along briefly and then slows down dramatically. Here's the output I just got:
2019-10-23 15:28:59.211036: Starting to process records
2019-10-23 15:29:01.308668: Created local DB for 1000000 records
2019-10-23 15:29:10.147762: Finished inserting 1000000 records
2019-10-23 15:29:12.258012: Created local DB for 2000000 records
2019-10-23 15:29:28.752352: Finished inserting 2000000 records
2019-10-23 15:29:30.853128: Created local DB for 3000000 records
2019-10-23 15:39:12.826357: Finished inserting 3000000 records
2019-10-23 15:39:14.932100: Created local DB for 4000000 records
2019-10-23 17:21:37.257651: Finished inserting 4000000 records
...
As you can see, the first million inserts take 9 seconds, the next million take 16, then it balloons to 10 minutes, and then an hour and 40 minutes (!). Am I doing something weird that causes this insane slowdown, or is this a limitation of SQLite?
Answer 0 (score: 2)
Running your program with one (minor) modification, I got the very reasonable timings shown below. The modification was to use sqlite3.connect instead of pysqlite.connect.
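For reference, the swap amounts to something like this (assuming the slow run used the old standalone pysqlite2 package rather than the standard-library module; the exact import isn't shown in the answer, so treat this as an assumption):

# Before (legacy standalone pysqlite2 package, per the answer's description):
# from pysqlite2 import dbapi2 as sqlite3

# After: the sqlite3 module that ships with Python
import sqlite3

conn = sqlite3.connect('/home/testing.db')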
In the timings below, the # annotations are approximate.
2019-10-23 13:00:37.843759: Starting to process records
2019-10-23 13:00:40.253049: Created local DB for 1000000 records
2019-10-23 13:00:50.052383: Finished inserting 1000000 records # 12s
2019-10-23 13:00:52.065007: Created local DB for 2000000 records
2019-10-23 13:01:08.069532: Finished inserting 2000000 records # 18s
2019-10-23 13:01:10.073701: Created local DB for 3000000 records
2019-10-23 13:01:28.233935: Finished inserting 3000000 records # 20s
2019-10-23 13:01:30.237968: Created local DB for 4000000 records
2019-10-23 13:01:51.052647: Finished inserting 4000000 records # 23s
2019-10-23 13:01:53.079311: Created local DB for 5000000 records
2019-10-23 13:02:15.087708: Finished inserting 5000000 records # 24s
2019-10-23 13:02:17.075652: Created local DB for 6000000 records
2019-10-23 13:02:41.710617: Finished inserting 6000000 records # 26s
2019-10-23 13:02:43.712996: Created local DB for 7000000 records
2019-10-23 13:03:18.420790: Finished inserting 7000000 records # 37s
2019-10-23 13:03:20.420485: Created local DB for 8000000 records
2019-10-23 13:04:03.287034: Finished inserting 8000000 records # 45s
2019-10-23 13:04:05.593073: Created local DB for 9000000 records
2019-10-23 13:04:57.871396: Finished inserting 9000000 records # 54s
2019-10-23 13:04:59.860289: Created local DB for 10000000 records
2019-10-23 13:05:54.527094: Finished inserting 10000000 records # 57s
...
I believe the main cause of the slowdown is defining test as TEXT PRIMARY KEY. That incurs enormous indexing costs, as suggested by this excerpt from a run with the "PRIMARY KEY" and "ON CONFLICT" declarations removed:
2019-10-23 13:26:22.627898: Created local DB for 10000000 records
2019-10-23 13:26:24.010171: Finished inserting 10000000 records
...
2019-10-23 13:26:58.350150: Created local DB for 20000000 records
2019-10-23 13:26:59.832137: Finished inserting 20000000 records
Less than 1.4 seconds at the 10-million-record mark, and not much more at 20 million.
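The answer doesn't show the code used for that run, so here is a rough sketch of what it might look like under that description (an assumption, not the answer's actual code): the table loses its PRIMARY KEY, the INSERT loses its ON CONFLICT clause, and duplicate keys simply accumulate as extra rows.

import sqlite3

# Hedged reconstruction: same batching as the question, but with no PRIMARY
# KEY on test and no ON CONFLICT clause, so SQLite maintains no index at all.
CREATE_SQL = """
CREATE TABLE IF NOT EXISTS testing (
    test   text,
    number integer
);"""

INSERT_SQL = "INSERT INTO testing(test, number) VALUES(?, ?);"

def insert_many_unindexed(local_db, db_path='/home/testing.db'):
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.execute(CREATE_SQL)
        conn.executemany(INSERT_SQL, list(local_db.items()))
    conn.close()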
Answer 1 (score: 2)
(More of a comment than an answer.)
SQLite supports only B-tree indexes. For strings, which can vary in length, the tree stores row IDs. Reading the tree costs O(log(n)), where n is the number of rows in the table, but that cost is multiplied by the cost of reading and comparing the string values along the way. So unless you have a good reason not to, it's better to use an integer field as the primary key.
What makes matters worse in this case is that the strings you're inserting share a fairly long common prefix ("Testing insertion "), so finding the first mismatching character takes longer.
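To make the integer-key suggestion concrete, here is a minimal, hypothetical sketch (not from either answer): it derives a 32-bit integer key from each string with zlib.crc32 so the primary-key B-tree compares cheap integers instead of long, shared-prefix strings. The column name id, the helper name, and the use of crc32 are all assumptions; crc32 can collide at tens of millions of keys, so a real version would need a wider hash or collision handling, and the upsert syntax assumes SQLite 3.24+.

import sqlite3
import zlib

# Hypothetical sketch: "id" aliases SQLite's rowid, so the primary-key
# B-tree stores and compares integers instead of long strings.
CREATE_SQL = """
CREATE TABLE IF NOT EXISTS testing (
    id     integer PRIMARY KEY,
    test   text,
    number integer
);"""

# Upsert keyed on the integer id (requires SQLite 3.24+).
UPSERT_SQL = """INSERT INTO testing(id, test, number) VALUES(?, ?, ?)
ON CONFLICT(id) DO UPDATE SET number = number + excluded.number;"""

def insert_many_int_keyed(local_db, db_path='/home/testing.db'):
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.execute(CREATE_SQL)
        # zlib.crc32 is stable across runs; collisions are possible at this
        # scale, so treat this as illustrative only.
        conn.executemany(UPSERT_SQL,
                         [(zlib.crc32(key.encode()), key, value)
                          for key, value in local_db.items()])
    conn.close()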
Suggestions for speeding things up, ordered by expected impact:
@peak's answer sidesteps the whole problem by not using an index at all (see the unindexed sketch above). If you don't need an index, that is definitely the way to go.