I ran into an interesting case with a select on a Postgres table:
advert (~2.5 million records)
id serial,
user_id integer (foreign key),
...
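For reference, here is a minimal sketch of the setup implied by the question and by the plans below — the foreign-key target, the omitted columns, and the exact index definition are assumptions, not the actual DDL:

-- Hypothetical reconstruction; columns beyond id/user_id and the FK target are assumed.
CREATE TABLE advert (
    id      serial PRIMARY KEY,
    user_id integer NOT NULL REFERENCES users (id)
    -- ... other columns omitted in the question
);

-- The plans reference an index with this name; its real definition
-- is not shown (the answer below suggests it may be partial).
CREATE INDEX ix__advert__user_id ON advert (user_id);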
Here is my select:
select count(*) from advert where user_id in USER_IDS_ARRAY
If the length of USER_IDS_ARRAY is <= 100, explain analyze gives:
Aggregate (cost=18063.36..18063.37 rows=1 width=0) (actual time=0.362..0.362 rows=1 loops=1)
-> Index Only Scan using ix__advert__user_id on advert (cost=0.55..18048.53 rows=5932 width=0) (actual time=0.030..0.351 rows=213 loops=1)
Index Cond: (user_id = ANY ('{(...)}'))
Heap Fetches: 213
Planning time: 0.457 ms
Execution time: 0.392 ms
But if the length of USER_IDS_ARRAY is > 100:
Aggregate (cost=424012.09..424012.10 rows=1 width=0) (actual time=867.438..867.438 rows=1 loops=1)
-> Seq Scan on advert (cost=0.00..423997.11 rows=5992 width=0) (actual time=0.375..867.345 rows=213 loops=1)
Filter: (user_id = ANY ('{(...)}'))
Rows Removed by Filter: 2201318
Planning time: 0.261 ms
Execution time: 867.462 ms
It does not matter which user_ids are in USER_IDS_ARRAY; only its length matters.
Does anyone have an idea how to optimize this select for more than 100 user_ids?
Answer (score 3):
If SET enable_seqscan = OFF still does not force an index scan, it means an index scan is not possible here. It turned out the index in this case was a partial index.
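A sketch of that diagnostic, using the query from the question; the id list is a placeholder and the pg_indexes lookup is just one way to check how ix__advert__user_id is actually defined:

-- Temporarily discourage sequential scans; if the planner still refuses
-- to use the index, the index cannot serve this predicate at all.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT count(*) FROM advert WHERE user_id IN (1, 2, 3);  -- substitute the real id list
RESET enable_seqscan;

-- Look up the actual index definition.
SELECT indexdef
FROM pg_indexes
WHERE tablename = 'advert' AND indexname = 'ix__advert__user_id';

If the definition ends in a WHERE clause, the index is partial and only covers the rows matching that clause.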