PostgreSQL query runs faster with an index scan, but the engine chooses a hash join

Time: 2012-05-17 20:43:10

Tags: postgresql indexing query-optimization postgresql-performance

The query:

SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027

If I SET enable_seqscan = off, then it does the fast thing, namely:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 65.437 ms

But without the dreaded enable_seqscan, it opts to do something slower:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
   Hash Cond: (replays_playeringame.game_id = replays_game.id)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Hash  (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
         Buckets: 1024  Batches: 32 (originally 16)  Memory Usage: 1025kB
         ->  Seq Scan on replays_game  (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
 Total runtime: 535.821 ms
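
For reference, the two plans above were produced simply by toggling the planner switch within one psql session; a minimal sketch (the EXPLAIN ANALYZE form is assumed from the "actual time" figures shown):

SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;
RESET enable_seqscan;  -- back to the default planner behaviour, then run the same EXPLAIN ANALYZE again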

Here are the relevant indexes:

Index "public.replays_game_pkey"
 Column |  Type   | Definition
--------+---------+------------
 id     | integer | id
primary key, btree, for table "public.replays_game"

Index "public.replays_playeringame_player_id"
  Column   |  Type   | Definition
-----------+---------+------------
 player_id | integer | player_id
btree, for table "public.replays_playeringame"

So my question is: what am I doing wrong that makes Postgres mis-estimate the relative costs of the two join strategies? I can see in the cost estimates that it thinks the hash join will be faster, and its estimate of the index-join cost is off by a factor of 500.

How can I give Postgres more of a clue? I did run a VACUUM ANALYZE immediately before running all of the above.

Interestingly, if I run this query for a player with a smaller number of games, Postgres chooses the index scan + nested loop. So something about the large number of games triggers this undesired behaviour, where the relative estimated cost is out of line with the actual cost.

Finally, should I be using Postgres at all? I don't wish to become an expert in database tuning, so I'm looking for a database that performs reasonably well with a conscientious developer's level of attention, rather than a dedicated DBA's. I worry that if I stick with Postgres I'll face a steady stream of issues like this that will force me to become a Postgres expert, and that perhaps another DB would be more forgiving of a more casual approach.


A Postgres expert (RhodiumToad) reviewed my full database settings (http://pastebin.com/77QuiQSp) and recommended set cpu_tuple_cost = 0.1. That gave a dramatic speedup: http://pastebin.com/nTHvSHVd
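
For anyone who wants to try the same thing, a sketch of how the setting can be applied (the per-database form and the database name are my assumptions; the expert only named the parameter):

-- per session, for experimenting:
SET cpu_tuple_cost = 0.1;

-- or persist it for one database (replace "mydb" with your database name):
ALTER DATABASE mydb SET cpu_tuple_cost = 0.1;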

Alternatively, switching to MySQL also solved the problem quite nicely. With the default installations of MySQL and Postgres on my OS X box, MySQL is 2x faster when comparing queries that have been "warmed up" by executing them repeatedly. On "cold" queries, i.e. the first time a given query is executed, MySQL is 5 to 150 times faster. Cold-query performance is quite important for my particular application.

The big question, as far as I'm concerned, is still open: will Postgres require more fiddling and configuration than MySQL to run well? Consider, for example, that none of the suggestions offered by the commenters here worked.

4 Answers:

Answer 0 (score: 11)

My guess is that you are using the default random_page_cost = 4, which is too high and makes index scans look too costly.

I tried to reconstruct the two tables with this script:

CREATE TABLE replays_game (
    id integer NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id integer NOT NULL,
    PRIMARY KEY (player_id, game_id),
    CONSTRAINT replays_playeringame_game_fkey
        FOREIGN KEY (game_id) REFERENCES replays_game (id)
);

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- 150k games
INSERT INTO replays_game
SELECT generate_series(1, 150000);

-- ~150k players, ~2 games each
INSERT INTO replays_playeringame
select trunc(random() * 149999 + 1), generate_series(1, 150000);

INSERT INTO replays_playeringame
SELECT *
FROM
    (
        SELECT
            trunc(random() * 149999 + 1) as player_id,
            generate_series(1, 150000) as game_id
    ) AS t
WHERE
    NOT EXISTS (
        SELECT 1
        FROM replays_playeringame
        WHERE
            t.player_id = replays_playeringame.player_id
            AND t.game_id = replays_playeringame.game_id
    )
;

-- the heavy player with 3000 games
INSERT INTO replays_playeringame
select 999999, generate_series(1, 3000);
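
(Presumably the planner statistics should also be refreshed on these freshly-built tables before comparing plans; something like the following, which is my addition rather than part of the original script:)

VACUUM ANALYZE replays_game;
VACUUM ANALYZE replays_playeringame;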

With the default value of 4:

game=# set random_page_cost = 4;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                     QUERY PLAN                                                                      
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1483.54..4802.54 rows=3000 width=4) (actual time=3.640..110.212 rows=3000 loops=1)
   Hash Cond: (replays_game.id = replays_playeringame.game_id)
   ->  Seq Scan on replays_game  (cost=0.00..2164.00 rows=150000 width=4) (actual time=0.012..34.261 rows=150000 loops=1)
   ->  Hash  (cost=1446.04..1446.04 rows=3000 width=4) (actual time=3.598..3.598 rows=3000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 106kB
         ->  Bitmap Heap Scan on replays_playeringame  (cost=67.54..1446.04 rows=3000 width=4) (actual time=0.586..2.041 rows=3000 loops=1)
               Recheck Cond: (player_id = 999999)
               ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..66.79 rows=3000 width=0) (actual time=0.560..0.560 rows=3000 loops=1)
                     Index Cond: (player_id = 999999)
 Total runtime: 110.621 ms

Lowering it to 2:

game=# set random_page_cost = 2;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                  QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=45.52..4444.86 rows=3000 width=4) (actual time=0.418..27.741 rows=3000 loops=1)
   ->  Bitmap Heap Scan on replays_playeringame  (cost=45.52..1424.02 rows=3000 width=4) (actual time=0.406..1.502 rows=3000 loops=1)
         Recheck Cond: (player_id = 999999)
         ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..44.77 rows=3000 width=0) (actual time=0.388..0.388 rows=3000 loops=1)
               Index Cond: (player_id = 999999)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..0.99 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=3000)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 28.542 ms
(8 rows)

I would lower it further, to 1.1, if using an SSD.
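
If lowering random_page_cost helps, it does not have to stay a per-session setting; a sketch of the usual ways to persist it (the database and tablespace names are placeholders):

-- in postgresql.conf:
--   random_page_cost = 1.1

-- or per database:
ALTER DATABASE mydb SET random_page_cost = 1.1;

-- or (9.0+) per tablespace, e.g. one that lives on an SSD:
ALTER TABLESPACE ssd_space SET (random_page_cost = 1.1);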

Regarding your last question, I think you should stick with PostgreSQL. I have experience with postgresql and mssql, and with the latter I need to put in three times the effort for it to perform half as well as the former.

Answer 1 (score: 9)

I ran sayap's testbed code (thank you!), with the following modifications:

  • The code is run four times, with random_page_cost set to 8, 4, 2, 1, in that order. (The rpc=8 run is intended only to prime the disk buffer cache.)
  • The test is repeated with reduced (1/2, 1/4, 1/8) fractions of hard hitters (respectively: 3K, 1K5, 750 and 375 hard hitters); the rest of the records are kept unchanged.
  • These 4*4 tests are repeated with a lower setting for work_mem (64K, the minimum).

After this run, I did the same run, but scaled up by a factor of ten: with 1M5 records (30K hard hitters).

Currently, I am running the same test scaled up by a factor of one hundred, but the initialisation is rather slow...

Each entry in the result cells is the total time in milliseconds, plus a string denoting the chosen query plan. (Only a handful of plans occur.)
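
Each cell corresponds roughly to one run of the following shape (a sketch of the harness; the settings come from the list above and the query is sayap's heavy-player query):

SET random_page_cost = 2;   -- 8, 4, 2 or 1, per the rpc column
SET work_mem = '16MB';      -- or '64kB' for the low-work_mem tables
EXPLAIN ANALYZE
SELECT replays_game.*
FROM replays_game
INNER JOIN replays_playeringame ON replays_game.id = replays_playeringame.game_id
WHERE replays_playeringame.player_id = 999999;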

Original 3K / 150K  work_mem=16M

rpc     |       3K      |       1K5     |       750     |       375
--------+---------------+---------------+---------------+------------
8*      | 50.8  H.BBi.HS| 44.3  H.BBi.HS| 38.5  H.BBi.HS| 41.0  H.BBi.HS
4       | 43.6  H.BBi.HS| 48.6  H.BBi.HS| 4.34  NBBi    | 1.33  NBBi
2       | 6.92  NBBi    | 3.51  NBBi    | 4.61  NBBi    | 1.24  NBBi
1       | 6.43  NII     | 3.49  NII     | 4.19  NII     | 1.18  NII


Original 3K / 150K work_mem=64K

rpc     |       3K      |       1K5     |       750     |       375
--------+---------------+---------------+---------------+------------
8*      | 74.2  H.BBi.HS| 69.6  NBBi    | 62.4  H.BBi.HS| 66.9  H.BBi.HS
4       | 6.67  NBBi    | 8.53  NBBi    | 1.91  NBBi    | 2.32  NBBi
2       | 6.66  NBBi    | 3.6   NBBi    | 1.77  NBBi    | 0.93  NBBi
1       | 7.81  NII     | 3.26  NII     | 1.67  NII     | 0.86  NII


Scaled 10*: 30K / 1M5  work_mem=16M

rpc     |       30K     |       15K     |       7k5     |       3k75
--------+---------------+---------------+---------------+------------
8*      | 623   H.BBi.HS| 556   H.BBi.HS| 531   H.BBi.HS| 14.9  NBBi
4       | 56.4  M.I.sBBi| 54.3  NBBi    | 27.1  NBBi    | 19.1  NBBi
2       | 71.0  NBBi    | 18.9  NBBi    | 9.7   NBBi    | 9.7   NBBi
1       | 79.0  NII     | 35.7  NII     | 17.7  NII     | 9.3   NII


Scaled 10*: 30K / 1M5  work_mem=64K

rpc     |       30K     |       15K     |       7k5     |       3k75
--------+---------------+---------------+---------------+------------
8*      | 729   H.BBi.HS| 722   H.BBi.HS| 723   H.BBi.HS| 19.6  NBBi
4       | 55.5  M.I.sBBi| 41.5  NBBi    | 19.3  NBBi    | 13.3  NBBi
2       | 70.5  NBBi    | 41.0  NBBi    | 26.3  NBBi    | 10.7  NBBi
1       | 69.7  NII     | 38.5  NII     | 20.0  NII     | 9.0   NII

Scaled 100*: 300K / 15M  work_mem=16M

rpc     |       300k    |       150K    |       75k     |       37k5
--------+---------------+---------------+---------------+---------------
8*      |7314   H.BBi.HS|9422   H.BBi.HS|6175   H.BBi.HS| 122   N.BBi.I
4       | 569   M.I.sBBi| 199   M.I.sBBi| 142   M.I.sBBi| 105   N.BBi.I
2       | 527   M.I.sBBi| 372   N.BBi.I | 198   N.BBi.I | 110   N.BBi.I
1       | 694   NII     | 362   NII     | 190   NII     | 107   NII

Scaled 100*: 300K / 15M  work_mem=64K

rpc     |       300k    |       150k    |       75k     |       37k5
--------+---------------+---------------+---------------+------------
8*      |22800 H.BBi.HS |21920 H.BBi.HS | 20630 N.BBi.I |19669  H.BBi.HS
4       |22095 H.BBi.HS |  284 M.I.msBBi| 205   B.BBi.I |  116  N.BBi.I
2       |  528 M.I.msBBi|  399  N.BBi.I | 211   N.BBi.I |  110  N.BBi.I
1       |  718 NII      |  364  NII     | 200   NII     |  105  NII

[8*] Note: the RandomPageCost=8 runs were only intended as a prerun to prime the disk buffer cache; the results should be ignored.

Legend for node types:
N := Nested loop
M := Merge join
H := Hash (or Hash join)
B := Bitmap heap scan
Bi := Bitmap index scan
S := Seq scan
s := sort
m := materialise

Preliminary conclusions:

  • The "working set" for the original query is too small: all of it fits in memory, which causes the cost of page fetches to be grossly overestimated. Setting RPC to 2 (or 1) "solves" this problem, but once the query is scaled up, the page costs become dominant and RPC = 4 becomes comparable or even better.

  • Setting work_mem to a lower value is another way to push the optimiser towards index scans (instead of hash + bitmap scans). The differences I found are smaller than what sayap reported; maybe I have a larger effective_cache_size, or he forgot to prime the cache?

  • The optimiser is known to have problems with "skewed" distributions (and with "skewed" or "spiked" multi-dimensional distributions). The test runs with 1/4 and 1/8 of the initial 3K/150K hard hitters show that this effect disappears once the "spike" is flattened out.
  • Something happens around the 2% boundary: 3000/150000 gets different (worse) plans than runs with fewer than 2% hard hitters. Could this be the granularity of the histograms? (One way to probe this is sketched after this list.)
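
If that 2% effect is indeed a statistics artefact, one way to probe it (a suggestion, not something tested in these runs) would be to raise the statistics target on the skewed column and re-analyse:

ALTER TABLE replays_playeringame ALTER COLUMN player_id SET STATISTICS 1000;
ANALYZE replays_playeringame;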

Answer 2 (score: 4)

This is a fairly old post, but it was quite helpful as I just ran into a similar problem.

Here are my findings so far. Given that there are 151208 rows in replays_game, the average cost of hitting one item is about log(151208) = 12. Since there are 3395 records in replays_playeringame after filtering, the average cost is 12 * 3395, which is fairly high. Also, the planner overestimates the page cost: it assumes all the rows are randomly distributed, which is not the case. If that were true, a seq scan would indeed be much better. So essentially the query plan is trying to avoid the worst-case scenario.
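
The "log" in that back-of-the-envelope estimate appears to be the natural logarithm; a quick sanity check in psql (my own check, not from the original answer):

SELECT ln(151208) AS cost_per_lookup,          -- ~11.9, i.e. the "about 12" above
       ln(151208) * 3395 AS total_lookup_cost; -- ~40500 cost units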

@dsjoerg's problem is that there is no index on replays_playeringame(game_id). If there were an index on replays_playeringame(game_id), an index scan would always be used: the cost of scanning the index would become 3395 + 12 (or something close to that).

@Neil suggested an index on (player_id, game_id), which is close but not exact. The right index to have is either (game_id) or (game_id, player_id).
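
In SQL, the suggested fix would look something like this (the index names are just illustrative):

CREATE INDEX replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- or the wider variant:
CREATE INDEX replays_playeringame_game_id_player_id
    ON replays_playeringame (game_id, player_id);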

Answer 3 (score: 2)

You might get a better execution plan using a multi-column index on the replays_playeringame table over (player_id, game_id). This avoids having to use random page seeks to find the game IDs for a given player ID.
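
That is, something along these lines (the index name is illustrative):

CREATE INDEX replays_playeringame_player_id_game_id
    ON replays_playeringame (player_id, game_id);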