I have a large table (30 million rows) with about 10 jsonb B-tree indexes.
When I build a query with only a few conditions, it runs relatively fast.
When I add more conditions, especially one on an indexed jsonb field whose values are sparse (e.g. an integer between 0 and 1,000,000), performance drops off dramatically.
I am wondering whether jsonb indexes are slower than native ones. Should I expect a performance gain by switching to native columns instead of JSON?
Table definition:
id integer
type text
data jsonb
company_index ARRAY
exchange_index ARRAY
eligible boolean
Sample query:
SELECT id, data, type
FROM collection.bundles
WHERE ( (ARRAY['.X'] && bundles.exchange_index) AND
type IN ('discussion') AND
( ((data->>'sentiment_score')::bigint > 0 AND
(data->'display_tweet'->'stocktwit'->'id') IS NOT NULL) ) AND
( eligible = true ) AND
((data->'display_tweet'->'stocktwit')->>'id')::bigint IS NULL )
ORDER BY id DESC
LIMIT 50
Output:
Limit (cost=0.56..16197.56 rows=50 width=212) (actual time=31900.874..31900.874 rows=0 loops=1)
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
-> Index Scan using bundles2_id_desc_idx on bundles (cost=0.56..2401044.17 rows=7412 width=212) (actual time=31900.871..31900.871 rows=0 loops=1)
Filter: (eligible AND ('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text) AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 16093269
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
Planning time: 0.366 ms
Execution time: 31900.909 ms
Note: every jsonb condition used in this query has a jsonb B-tree index. exchange_index and company_index have GIN indexes.
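The question does not include the actual index definitions; as a rough sketch, expression B-tree indexes on the jsonb fields and GIN indexes on the array columns would typically look something like the following (index names other than the one visible in the plans below are made up, and the exact expressions are assumptions):

CREATE INDEX bundles2_sentiment_score_idx      -- hypothetical name
    ON collection.bundles (((data->>'sentiment_score')::bigint));

CREATE INDEX bundles2_stocktwit_id_idx         -- hypothetical name
    ON collection.bundles (((data->'display_tweet'->'stocktwit'->>'id')::bigint));

CREATE INDEX bundles2_exchange_index_ops_idx   -- name as seen in the plan
    ON collection.bundles USING gin (exchange_index);

CREATE INDEX bundles2_company_index_ops_idx    -- hypothetical name
    ON collection.bundles USING gin (company_index);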
Update: after changing the query as Laurenz suggested:
Limit (cost=150634.15..150634.27 rows=50 width=211) (actual time=15925.828..15925.828 rows=0 loops=1)
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Sort (cost=150634.15..150652.53 rows=7352 width=211) (actual time=15925.827..15925.827 rows=0 loops=1)
Sort Key: bundles.id DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Bitmap Heap Scan on bundles (cost=56666.15..150316.40 rows=7352 width=211) (actual time=15925.816..15925.816 rows=0 loops=1)
Recheck Cond: (('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text))
Filter: (eligible AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 273230
Heap Blocks: exact=175975
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> BitmapAnd (cost=56666.15..56666.15 rows=23817 width=0) (actual time=1895.890..1895.890 rows=0 loops=1)
Buffers: shared hit=37488 read=85559
I/O Timings: read=325.535
-> Bitmap Index Scan on bundles2_exchange_index_ops_idx (cost=0.00..6515.57 rows=863703 width=0) (actual time=218.690..218.690 rows=892669 loops=1)
Index Cond: ('{.X}'::text[] && exchange_index)
Buffers: shared hit=7 read=313
I/O Timings: read=1.458
-> Bitmap Index Scan on bundles_eligible_idx (cost=0.00..23561.74 rows=2476877 width=0) (actual time=436.719..436.719 rows=2569331 loops=1)
Index Cond: (eligible = true)
Buffers: shared hit=37473
-> Bitmap Index Scan on bundles2_type_idx (cost=0.00..26582.83 rows=2706276 width=0) (actual time=1052.267..1052.267 rows=2794517 loops=1)
Index Cond: (type = 'discussion'::text)
Buffers: shared hit=8 read=85246
I/O Timings: read=324.077
Planning time: 0.433 ms
Execution time: 15928.959 ms
Answer 0 (score: 3)
None of your fancy indexes are being used at all, so the problem is not whether or not they are fast.
There are several things at play here:
Seeing dirtied and written pages during an index scan, I suspect that your table contains quite a few "dead tuples". When the index scan visits them and notices that they are dead, it "kills" those index entries so that subsequent index scans don't have to repeat that work.
If you repeat the query, you will probably notice that the number of blocks and the execution time become smaller.
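If you want to confirm that suspicion, the dead-tuple counters in pg_stat_user_tables are easy to check (a standard catalog query, not part of the original answer; the table name is taken from the query above):

SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'collection' AND relname = 'bundles';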
You can reduce that problem by running VACUUM on the table, or by making sure that autovacuum processes the table often enough.
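For instance, something like the following; the scale factor of 0.02 is only an illustrative value, not a recommendation from the answer:

VACUUM (VERBOSE, ANALYZE) collection.bundles;

-- trigger autovacuum after roughly 2% of the table has changed
-- instead of the default 20%
ALTER TABLE collection.bundles
    SET (autovacuum_vacuum_scale_factor = 0.02);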
However, your main problem is that the LIMIT clause tempts PostgreSQL into the following strategy: since you only want 50 result rows in an ordering for which there is an index, it simply examines the table rows in index order and discards everything that does not satisfy the complicated condition until it has 50 results.
Unfortunately, it has to scan 16,093,319 rows before it can come up with 50 matches: the rows at the high-id end of the table do not match the condition, and PostgreSQL does not know about that correlation.
The solution is to keep PostgreSQL from taking that route. The simplest way would be to drop all the indexes on id, but that is probably not feasible, given its name.
The other way is to keep PostgreSQL from "seeing" the LIMIT clause while it plans the scan:
SELECT id, data, type
FROM (SELECT id, data, type
      FROM collection.bundles
      WHERE /* all your complicated conditions */
      OFFSET 0) subquery
ORDER BY id DESC
LIMIT 50;
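The OFFSET 0 works because PostgreSQL does not flatten a subquery containing LIMIT or OFFSET into the outer query, so the ORDER BY ... LIMIT 50 cannot be pushed down into the scan. On PostgreSQL 12 or later, the same optimization fence can also be spelled with a materialized CTE (this variant is not from the answer, just an equivalent way to hide the LIMIT from the planner; before version 12, a plain CTE is always materialized):

WITH filtered AS MATERIALIZED (
    SELECT id, data, type
    FROM collection.bundles
    WHERE /* all your complicated conditions */
)
SELECT id, data, type
FROM filtered
ORDER BY id DESC
LIMIT 50;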
Remarks: you did not show your index definitions, but it sounds like you have quite a few of them, possibly too many. Indexes are expensive, so make sure you only define those that give a clear benefit.
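To find candidates for removal, the statistics views make it easy to spot indexes that are rarely or never scanned (again a standard catalog query, not part of the original answer):

SELECT indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE schemaname = 'collection' AND relname = 'bundles'
ORDER BY idx_scan, pg_relation_size(indexrelid) DESC;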