Postgres won't use an index, depending on the WHERE clause

Asked: 2017-01-17 20:26:25

Tags: postgresql indexing database-optimization query-planner

I've been tinkering with and reading about this for a while, but I can't find any optimization that works here... I've indexed the relevant IDs used in the join, I've tried a manual VACUUM, and I've also tried CLUSTERing on an index so the query planner wouldn't decide that scanning the whole table is more efficient because of scattered rows (although I don't understand query planning very well).
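
For context, the maintenance steps mentioned above were roughly the following (this is only a sketch; the index name is taken from the plans below, and clustering on it is just one possible choice):

VACUUM (ANALYZE, VERBOSE) abc.log;
CLUSTER abc.log USING idx_abc_log_encounter_id;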

I'm trying to get the join results for a single id (for debugging purposes). I've found that queries for some individual IDs take about 2 minutes, while most (99%?) return within 1 second. Here are a couple of EXPLAIN ANALYZE outputs (I changed some names with sed for confidentiality):

main=> explain analyze SELECT e.customer_id, l.*
            FROM abc.encounter e 
            JOIN abc.log l
            ON e.encounter_id = l.encounter_id
            AND e.customer_id = '1655563';
                                                                     QUERY PLAN                                                                      
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=2751.69..2566740.95 rows=13262 width=75) (actual time=122038.725..226694.004 rows=249 loops=1)
   Hash Cond: (l.encounter_id = e.encounter_id)
   ->  Seq Scan on log l  (cost=0.00..2190730.92 rows=99500192 width=66) (actual time=0.005..120825.675 rows=99500192 loops=1)
   ->  Hash  (cost=2742.81..2742.81 rows=710 width=18) (actual time=0.309..0.309 rows=89 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 13kB
         ->  Bitmap Heap Scan on encounter e  (cost=17.93..2742.81 rows=710 width=18) (actual time=0.037..0.197 rows=89 loops=1)
               Recheck Cond: (customer_id = '1655563'::text)
               Heap Blocks: exact=46
               ->  Bitmap Index Scan on idx_abc_encounter_customer_id  (cost=0.00..17.76 rows=710 width=0) (actual time=0.025..0.025 rows=89 loops=1)
                     Index Cond: (customer_id = '1655563'::text)
 Planning time: 0.358 ms
 Execution time: 226694.311 ms
(12 rows)

main=> explain analyze SELECT e.customer_id, l.*
            FROM abc.encounter e 
            JOIN abc.log l
            ON e.encounter_id = l.encounter_id
            AND e.customer_id = '121652491';
                                                                      QUERY PLAN                                                                      
------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=36.67..53168.06 rows=168 width=75) (actual time=0.090..0.422 rows=11 loops=1)
   ->  Index Scan using idx_abc_encounter_customer_id on encounter e  (cost=0.43..40.53 rows=9 width=18) (actual time=0.017..0.047 rows=17 loops=1)
         Index Cond: (customer_id = '121652491'::text)
   ->  Bitmap Heap Scan on log l  (cost=36.24..5888.00 rows=1506 width=66) (actual time=0.016..0.017 rows=1 loops=17)
         Recheck Cond: (encounter_id = e.encounter_id)
         Heap Blocks: exact=6
         ->  Bitmap Index Scan on idx_abc_log_encounter_id  (cost=0.00..35.86 rows=1506 width=0) (actual time=0.013..0.013 rows=1 loops=17)
               Index Cond: (encounter_id = e.encounter_id)
 Planning time: 0.361 ms
 Execution time: 0.478 ms
(10 rows)

I'll also add that for the long-running query, even though only about 250 rows come back after 2 minutes, adding "LIMIT 100" makes it return instantly. I checked whether the speed was related to the amount of data returned, and I don't see any obvious trend. I can't help feeling that Postgres is badly wrong (by 100x?) about which approach will be faster. What are my options?
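
For reference, the fast-returning variant is just the slow query above with a limit appended, roughly:

SELECT e.customer_id, l.*
            FROM abc.encounter e
            JOIN abc.log l
            ON e.encounter_id = l.encounter_id
            AND e.customer_id = '1655563'
            LIMIT 100;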

1 Answer:

Answer 0 (score: 3)

PostgreSQL's row count estimate for encounter is off by almost a factor of 10. My first attempt would be to improve that.

To do that, you can change the statistics target for the column:

ALTER TABLE abc.encounter ALTER customer_id SET STATISTICS 1000;

A subsequent ANALYZE will then collect better statistics for that column. If 1000 is not enough, try 10000. With a better row count estimate, you stand a better chance of getting the best plan.
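
To verify the effect, one possible follow-up is sketched below (the customer_id value is the slow one from the question):

ANALYZE abc.encounter;
-- compare the estimated row count against the actual one:
EXPLAIN ANALYZE SELECT * FROM abc.encounter WHERE customer_id = '1655563';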

If the cost of the repeated index scans in a nested loop join is still overestimated compared to a sequential scan, you can lower the parameter random_page_cost from its default value of 4 to something closer to seq_page_cost (default 1). That will bias PostgreSQL toward nested loop joins.
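
As a sketch, the setting can be tried per session first and only persisted if the plans improve (the value 1.1 is just an illustrative starting point):

SET random_page_cost = 1.1;
-- if that yields the nested loop plan, persist it and reload the configuration:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();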