How can I speed up bulk inserts to MySQL with SQLAlchemy?

Date: 2019-09-29 08:15:12

Tags: mysql python-3.x sqlalchemy

I just noticed that bulk inserting into a MySQL/MariaDB database with SQLAlchemy is slow, even when using session.bulk_save_objects(objects). How can I make it faster?

MVCE

from sqlalchemy import Column, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import click
import json
import sqlalchemy
import time
import uuid

Base = declarative_base()


class KeyValue(Base):
    __tablename__ = "KeyValue"
    key = Column(String(36), primary_key=True)
    value = Column(Text)

    def __repr__(self):
        return f"KeyValue(key='{self.key}', value='{self.value}')"


def run_benchmark(SQLALCHEMY_DATABASE_URI, n=1000, benchmark_type='orm-bulk'):
    engine = sqlalchemy.create_engine(SQLALCHEMY_DATABASE_URI)
    connection = engine.connect()

    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

    # Generate n random keys and n JSON values of 100 UUIDs each.
    keys = [str(uuid.uuid4()) for _ in range(n)]
    values = [json.dumps([str(uuid.uuid4()) for _ in range(100)]) for _ in range(n)]
    if benchmark_type == 'orm-bulk':
        benchmark_orm_bulk_insert(session, keys, values)
    elif benchmark_type == 'print':
        print_query(keys, values)


def benchmark_orm_bulk_insert(session, keys, values):
    # Time a single bulk_save_objects() call followed by one commit.
    t0 = time.time()
    objects = [
        KeyValue(key=key, value=value)
        for key, value in zip(keys, values)
    ]
    session.bulk_save_objects(objects)
    session.commit()
    t1 = time.time()
    print(f"Inserted {len(keys)} entries in {t1 - t0:0.2f}s with ORM-Bulk "
          f"({len(keys)/(t1 - t0):0.2f} inserts/s).")


def print_query(keys, values):
    # Emit one extended INSERT statement that covers all rows.
    print("INSERT INTO KeyValue (`key`, `value`) VALUES")
    for i, (key, value) in enumerate(zip(keys, values)):
        if i == 0:
            print(f"({json.dumps(key)}, {json.dumps(value)})")
        else:
            print(f", ({json.dumps(key)}, {json.dumps(value)})")
    print(";")


@click.command()
@click.option("-n", "n", required=True, type=int)
@click.option(
    "--mode",
    "mode",
    required=True,
    type=click.Choice(["orm-bulk", "print"]),
)
def entry_point(n, mode):
    run_benchmark("mysql+pymysql://root:password@localhost/benchmark", n, mode)


if __name__ == "__main__":
    entry_point()

This gives:

$ python3 benchmark.py -n 10_000 --mode orm-bulk           
Inserted 10000 entries in 3.28s with ORM-Bulk (3048.23 inserts/s).

# Using extended INSERT statements
$ python3 benchmark.py -n 10_000 --mode print > inserts.txt
$ time mysql benchmark < inserts.txt

real    2,93s
user    0,27s
sys 0,03s

So the SQLAlchemy bulk insert achieves 3048 inserts/s, while the raw extended-INSERT query reaches 3412 inserts/s.
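
Not benchmarked here, but a commonly suggested next step is to bypass the ORM and hand all rows to a single Core-level INSERT, which the driver can batch as executemany(). A minimal sketch, reusing KeyValue, session, and time from the MVCE above (benchmark_core_insert is a hypothetical name, not part of the benchmark script):

import time

def benchmark_core_insert(session, keys, values):
    # Hypothetical variant: one Core-level INSERT instead of ORM objects.
    t0 = time.time()
    rows = [{"key": key, "value": value} for key, value in zip(keys, values)]
    # Passing a list of dicts lets the driver batch this as executemany().
    session.execute(KeyValue.__table__.insert(), rows)
    session.commit()
    t1 = time.time()
    print(f"Inserted {len(keys)} entries in {t1 - t0:0.2f}s with Core "
          f"({len(keys)/(t1 - t0):0.2f} inserts/s).")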

Related, but not the main question

Note that both numbers are far from the 313,000 inserts per second mentioned in High-speed inserts with MySQL. Using

LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE KeyValue FIELDS TERMINATED BY ',' ENCLOSED BY '"' IGNORE 1 LINES;

I get an execution time of 2.22s (4500 inserts/s), which is still far off. By changing the quotechar from " to ' (much less escaping to do), I got 1.55s (6451 inserts/s).
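
For reference, a file in the shape that the LOAD DATA statement expects could be produced along these lines (a sketch; write_kv_csv is a hypothetical helper, and switching quotechar to ' also means changing the ENCLOSED BY clause to match):

import csv

def write_kv_csv(filename, keys, values, quotechar='"'):
    # Hypothetical helper: header row first, which IGNORE 1 LINES skips.
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f, quotechar=quotechar, quoting=csv.QUOTE_ALL)
        writer.writerow(["key", "value"])
        for key, value in zip(keys, values):
            writer.writerow([key, value])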

Changing bulk_insert_buffer_size to 256MB did not help either (howto).
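
For reference, a sketch of raising that variable through SQLAlchemy, reusing the connection from the MVCE and assuming the SUPER privilege; note that bulk_insert_buffer_size only affects MyISAM bulk-insert paths, which would explain why it makes no difference on an InnoDB table:

from sqlalchemy.sql import text

# 256 MB; only applies to MyISAM bulk inserts, so no effect on InnoDB.
connection.execute(text("SET GLOBAL bulk_insert_buffer_size = 268435456"))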

Changing the MySQL storage engine from InnoDB to MyISAM brought the time down to 0.32s (31250 inserts/s)!
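
With SQLAlchemy's MySQL dialect the storage engine can be set on the model itself, so Base.metadata.create_all() creates the table as MyISAM from the start:

class KeyValue(Base):
    __tablename__ = "KeyValue"
    # SQLAlchemy's MySQL dialect passes this through as ENGINE=MyISAM.
    __table_args__ = {"mysql_engine": "MyISAM"}
    key = Column(String(36), primary_key=True)
    value = Column(Text)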

Other storage engines I tried, each run 3 times (a sketch of scripting the engine switch follows the list):

  • CSV (doesn't allow keys at all!): 0.40s / 0.53s / 0.46s
  • Aria (without primary key): 0.38s / 0.38s / 0.38s
  • Aria (with primary key): 0.47s / 0.49s / 0.45s
  • MyISAM (without primary key): 0.26s / 0.26s / 0.26s => 38461 inserts/s
  • MyISAM (with primary key): 0.32s / 0.30s / 0.29s
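
A sketch of how the engine switch between runs can be scripted, reusing the connection from the MVCE (for the CSV engine the primary key would have to be dropped first, and Aria is only available on MariaDB):

from sqlalchemy.sql import text

for engine_name in ["Aria", "MyISAM", "InnoDB"]:
    # ALTER TABLE rewrites the table in place with the new storage engine.
    connection.execute(text(f"ALTER TABLE KeyValue ENGINE = {engine_name}"))
    # ... rerun the benchmark here ...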

0 Answers