How to improve CockroachDB INSERT performance (rows per second)? (20x slower than PostgreSQL)

Date: 2018-06-23 11:22:30

Tags: sql database-performance bulkinsert sqlperformance cockroachdb

The Python 3 script attached at the end of this post creates a simple table with five INT columns, three of which are indexed.

It then fills the table using multi-row INSERTs.

At first, it manages to insert about 10,000 rows per second:

Took   0.983 s to INSERT 10000 rows, i.e. performance =  10171 rows per second.
Took   0.879 s to INSERT 10000 rows, i.e. performance =  11376 rows per second.
Took   0.911 s to INSERT 10000 rows, i.e. performance =  10982 rows per second.
Took   1.180 s to INSERT 10000 rows, i.e. performance =   8477 rows per second.
Took   1.030 s to INSERT 10000 rows, i.e. performance =   9708 rows per second.
Took   1.114 s to INSERT 10000 rows, i.e. performance =   8975 rows per second.

But once the table contains about 1,000,000 rows, performance drops to roughly 2,000 rows per second:

Took   3.648 s to INSERT 10000 rows, i.e. performance =   2741 rows per second.
Took   3.026 s to INSERT 10000 rows, i.e. performance =   3305 rows per second.
Took   5.495 s to INSERT 10000 rows, i.e. performance =   1820 rows per second.
Took   6.212 s to INSERT 10000 rows, i.e. performance =   1610 rows per second.
Took   5.952 s to INSERT 10000 rows, i.e. performance =   1680 rows per second.
Took   4.872 s to INSERT 10000 rows, i.e. performance =   2053 rows per second.

For comparison: with PostgreSQL instead of CockroachDB, performance stays consistently around 40,000 rows per second:

Took   0.212 s to INSERT 10000 rows, i.e. performance =  47198 rows per second.
Took   0.268 s to INSERT 10000 rows, i.e. performance =  37335 rows per second.
Took   0.224 s to INSERT 10000 rows, i.e. performance =  44548 rows per second.
Took   0.307 s to INSERT 10000 rows, i.e. performance =  32620 rows per second.
Took   0.234 s to INSERT 10000 rows, i.e. performance =  42645 rows per second.
Took   0.262 s to INSERT 10000 rows, i.e. performance =  38124 rows per second.

Took   0.301 s to INSERT 10000 rows, i.e. performance =  33254 rows per second.
Took   0.220 s to INSERT 10000 rows, i.e. performance =  45547 rows per second.
Took   0.260 s to INSERT 10000 rows, i.e. performance =  38399 rows per second.
Took   0.222 s to INSERT 10000 rows, i.e. performance =  45136 rows per second.
Took   0.213 s to INSERT 10000 rows, i.e. performance =  46950 rows per second.
Took   0.211 s to INSERT 10000 rows, i.e. performance =  47436 rows per second.

Is there a way to improve performance when using CockroachDB?

Since the table is filled continuously, filling the table first and adding the indexes afterwards is not an option.


db_insert_performance_test.py

import random
from timeit import default_timer as timer
import psycopg2


def init_table(cur):
    """Create table and DB indexes"""
    cur.execute("""
        CREATE TABLE entities (a INT NOT NULL, b INT NOT NULL,
                               c INT NOT NULL, d INT NOT NULL,
                               e INT NOT NULL);""")
    cur.execute('CREATE INDEX a_idx ON entities (a);')
    cur.execute('CREATE INDEX b_idx ON entities (b);')
    cur.execute('CREATE INDEX c_idx ON entities (c);')
    # d and e do not need an index.


def create_random_event_value():
    """Returns a SQL-compatible string containing a value tuple"""
    def randval():
        return random.randint(0, 100000000)
    return f"({randval()}, {randval()}, {randval()}, {randval()}, {randval()})"


def generate_statement(statement_template, rows_per_statement):
    """Build a multi-row INSERT statement for `rows_per_statement`
    random entities, e.g.:
    INSERT INTO entities (a, b, ...) VALUES (1, 2, ...), (6, 7, ...), ...
    """
    return statement_template.format(', '.join(
        create_random_event_value()
        for _ in range(rows_per_statement)))


def main():
    """Write dummy entities into db and output performance."""

    # Config
    database = 'db'
    user = 'me'
    password = 'pwd'
    host, port = 'cockroach-db', 26257
    #host, port = 'postgres-db', 5432

    rows_per_statement = 200
    statements_per_round = 50
    rounds = 100
    statement_template = 'INSERT INTO entities (a, b, c, d, e) VALUES {}'

    # Connect to DB
    conn = psycopg2.connect(database=database, user=user, password=password,
                            host=host, port=port)
    conn.set_session(autocommit=True)
    cur = conn.cursor()

    init_table(cur)

    for _ in range(rounds):
        # statements_per_round multi-row INSERTs
        # with rows_per_statement rows each
        batch_statements = [generate_statement(statement_template,
                                               rows_per_statement)
                            for _ in range(statements_per_round)]

        # Measure insert duration
        start = timer()
        for batch_statement in batch_statements:
            cur.execute(batch_statement)
        duration = timer() - start

        # Calculate performance
        row_count = rows_per_statement * statements_per_round
        rows_per_second = int(round(row_count / duration))
        print('Took {:7.3f} s to INSERT {} rows, '
            'i.e. performance = {:>6} rows per second.'
            ''.format(duration, row_count, rows_per_second), flush=True)

    # Close the database connection.
    cur.close()
    conn.close()


if __name__ == '__main__':
    main()

To quickly reproduce my results, use this docker-compose.yml:

version: '2.4'

services:

  cockroach-db:
    image: cockroachdb/cockroach:v2.0.3
    command: start --insecure --host cockroach-db --vmodule=executor=2
    healthcheck:
      test: nc -z cockroach-db 26258

  cockroach-db-init:
    image: cockroachdb/cockroach:v2.0.3
    depends_on:
     - cockroach-db
    entrypoint: /cockroach/cockroach sql --host=cockroach-db --insecure -e "CREATE DATABASE db; CREATE USER me; GRANT ALL ON DATABASE db TO me;"

  postgres-db:
    image: postgres:10.4
    environment:
      POSTGRES_USER: me
      POSTGRES_PASSWORD: pwd
      POSTGRES_DB: db
    healthcheck:
      test: nc -z postgres-db 5432

  db-insert-performance-test:
    image: python:3.6
    depends_on:
     - cockroach-db-init
     - postgres-db
    volumes:
     - .:/code
    working_dir: /
    entrypoint: bash -c "pip3 install psycopg2 && python3 code/db_insert_performance_test.py"

To start the test, simply run docker-compose up db-insert-performance-test.

3 Answers:

Answer 0 (score: 4)

CockroachDB stores data in "ranges", and ranges split once they reach 64 MB. Initially, the table fits into a single range, so every INSERT is a single-range operation. Once the ranges split, each INSERT has to involve multiple ranges to update the table and its indexes; that is likely where the performance drops off.
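For a rough sense of scale (my own back-of-the-envelope estimate, not part of the answer): assuming something on the order of 50 bytes per row for five INT columns plus key and MVCC overhead, the first 64 MB range would fill up after roughly 1.3 million rows, which lines up with the slowdown the question observes around 1,000,000 rows.

```python
# Back-of-the-envelope estimate of when the first 64 MB range splits.
# EST_BYTES_PER_ROW is an assumed ballpark (5 INT columns plus key
# and MVCC overhead), not a measured value.
RANGE_SIZE_BYTES = 64 * 1024 * 1024
EST_BYTES_PER_ROW = 50  # assumption for illustration

rows_until_split = RANGE_SIZE_BYTES // EST_BYTES_PER_ROW
print(f'~{rows_until_split:,} rows until the first range split')
```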

Answer 1 (score: 0)

Performance may also suffer because no volume is mounted. If CockroachDB writes to Docker's default filesystem, that will hurt throughput as well.
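A volume mount for the cockroach-db service could look like this sketch (the host path ./cockroach-data is arbitrary, and /cockroach/cockroach-data is assumed here to be the image's default store directory):

```yaml
  cockroach-db:
    image: cockroachdb/cockroach:v2.0.3
    command: start --insecure --host cockroach-db --vmodule=executor=2
    volumes:
      - ./cockroach-data:/cockroach/cockroach-data
```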

Answer 2 (score: 0)

As Livius mentioned above, dropping the indexes is a technique used by many data warehouses, because it can significantly speed up bulk inserts. I know you said the rows are "filled continuously", but does that mean from other connections? If not, you could lock the table, drop the indexes, insert the rows, and then re-add the indexes. The problem with keeping the current indexes is that B-tree indexes have to be processed for every inserted row and are periodically rebalanced.
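A minimal sketch of that drop/insert/recreate cycle, expressed as a helper that produces the SQL sequence (the helper name is my own; the index names match the script above; it assumes no other connection writes to the table while the indexes are gone):

```python
def index_cycle_statements(insert_statements):
    """Return the SQL sequence for a bulk load without live indexes:
    drop the secondary indexes, run the INSERTs, recreate the indexes.
    Only safe when no other connection writes to the table meanwhile.
    Note: PostgreSQL DROP INDEX syntax; CockroachDB spells it as
    'DROP INDEX entities@a_idx'."""
    indexes = [('a_idx', 'a'), ('b_idx', 'b'), ('c_idx', 'c')]
    drops = [f'DROP INDEX {name};' for name, _ in indexes]
    creates = [f'CREATE INDEX {name} ON entities ({col});'
               for name, col in indexes]
    return drops + list(insert_statements) + creates
```

Each statement in the returned list would then be fed to cur.execute() just like in the script's main loop.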

Also, do you really need three separate indexes of one column each? Could you create fewer indexes by adding columns to one of the others? In other words, do you usually query the table using only column a in the WHERE clause? Is it usually combined with one of the other columns? Perhaps you tend to JOIN to the table from another table on a and then filter on b in the WHERE clause. If so, why not combine the indexes into idx_ab (a, b)?
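The reasoning behind combining the indexes is the leftmost-prefix rule of B-tree indexes: an index on (a, b) can serve predicates on a alone or on a and b together, but not on b alone. A tiny illustration of that rule (my own simplification, equality predicates only, not specific to CockroachDB):

```python
def prefix_covered(index_columns, predicate_columns):
    """Simplified leftmost-prefix rule for B-tree indexes: count how
    many leading index columns are covered by equality predicates.
    A composite index can help a query only if that count is > 0."""
    count = 0
    for col in index_columns:
        if col not in predicate_columns:
            break
        count += 1
    return count

# idx_ab (a, b) serves WHERE a = ? and WHERE a = ? AND b = ?,
# but a query filtering only on b cannot use its prefix.
```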

Just some thoughts. To be fair, I don't know CockroachDB, but traditional relational databases tend to work in a similar way.