I have a query and I need to sort the result by a column. If I sort by ID it runs very fast (2.8 ms), but if I try to sort by any other column (even an indexed one), the execution time grows to about 800 ms. In the EXPLAIN output I can see that sorting by id uses an index scan, while sorting by reg_date ends up with a Seq Scan.
These are my indexes. I have also reindexed the table.
+--------------------+------------------------------------------------------------------------+
| indexname | indexdef |
+--------------------+------------------------------------------------------------------------+
| pk_users | CREATE UNIQUE INDEX pk_users ON public.users USING btree (id) |
| idx_users_reg_date | CREATE INDEX idx_users_end_date ON public.users USING btree (reg_date) |
+--------------------+------------------------------------------------------------------------+
If I order by ID, the execution time is 2.601 ms:
select
users.id,
users.full_name,
sum(user_comments.badges) as badges,
count(user_comments) as comment_count
from
users
left join user_comments
on users.id = user_comments.user_id
group by users.id
order by users.id
limit 10;
But if I order by the users.reg_date column (which has an index), it takes about 818.336 ms:
select
users.id,
users.full_name,
sum(user_comments.badges) as badges,
count(user_comments) as comment_count
from
users
left join user_comments
on users.id = user_comments.user_id
group by users.id
order by users.reg_date
limit 10;
QUERY PLAN
Limit (cost=73954.85..73954.88 rows=10 width=328) (actual time=614.913..614.914 rows=10 loops=1)
Buffers: shared hit=9 read=25307, temp read=6671 written=6671
-> Sort (cost=73954.85..74216.20 rows=104539 width=328) (actual time=614.912..614.912 rows=10 loops=1)
Sort Key: users.reg_date
Sort Method: top-N heapsort Memory: 25kB
Buffers: shared hit=9 read=25307, temp read=6671 written=6671
-> GroupAggregate (cost=67941.35..71695.80 rows=104539 width=328) (actual time=432.031..598.345 rows=104539 loops=1)
Buffers: shared hit=6 read=25307, temp read=6671 written=6671
-> Merge Left Join (cost=67941.35..69866.37 rows=104539 width=328) (actual time=432.019..535.760 rows=161688 loops=1)
Merge Cond: (users.id = user_comments.user_id)
Buffers: shared hit=6 read=25307, temp read=6671 written=6671
-> Sort (cost=33360.14..33621.49 rows=104539 width=8) (actual time=267.480..292.054 rows=104539 loops=1)
Sort Key: users.id
Sort Method: external merge Disk: 1408kB
Buffers: shared hit=4 read=22164, temp read=181 written=181
-> Seq Scan on users (cost=0.00..23213.39 rows=104539 width=8) (actual time=0.012..202.277 rows=104539 loops=1)
Buffers: shared hit=4 read=22164
-> Materialize (cost=34581.21..34981.87 rows=80133 width=324) (actual time=164.533..205.544 rows=80155 loops=1)
Buffers: shared hit=2 read=3143, temp read=6490 written=6490
-> Sort (cost=34581.21..34781.54 rows=80133 width=324) (actual time=164.525..193.679 rows=80155 loops=1)
Sort Key: user_comments.user_id
Sort Method: external merge Disk: 24048kB
Buffers: shared hit=2 read=3143, temp read=6490 written=6490
-> Seq Scan on user_comments (cost=0.00..3946.33 rows=80133 width=324) (actual time=0.028..48.802 rows=80155 loops=1)
Buffers: shared hit=2 read=3143
Total runtime: 619.567 ms
Answer 0 (score: 0)
As mentioned in one of the comments, some of the sorting is happening on disk: "Sort Method: external merge Disk: 24048kB".
This should be avoided whenever possible, so if you have enough memory you can increase work_mem. The default value is 4MB.
Keep in mind that if you set work_mem very high and many queries run at the same time, you can run the system out of memory.
To see temporary file usage in the log file, you should also set log_temp_files = 0.
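A minimal sketch of how the session-level override could be tried (the 64MB figure is only an example value, not a recommendation for this particular server):

-- raise the sort memory for the current session only (example value)
SET work_mem = '64MB';

-- re-run the slow query and check whether the "external merge Disk" sorts
-- turn into in-memory sorts
EXPLAIN (ANALYZE, BUFFERS)
select users.id, users.full_name,
       sum(user_comments.badges) as badges,
       count(user_comments) as comment_count
from users
left join user_comments on users.id = user_comments.user_id
group by users.id
order by users.reg_date
limit 10;

-- drop the override again when done
RESET work_mem;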
Answer 1 (score: 0)
Would a lateral join improve things?
select u.id, u.full_name,
       uc.badges, uc.comment_count
from users u left join lateral
     (select sum(uc.badges) as badges, count(*) as comment_count
      from user_comments uc
      where u.id = uc.user_id
     ) uc on true
order by u.reg_date
limit 10;
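For the lateral form to pay off, the planner needs to walk the reg_date index on users and probe user_comments cheaply for only the first ten users. A hedged sketch of the supporting index, assuming user_comments.user_id is not already indexed (the question only shows indexes on users):

-- assumed missing: lets each lateral probe look up one user's comments by index
-- instead of scanning the whole user_comments table
CREATE INDEX idx_user_comments_user_id
    ON public.user_comments USING btree (user_id);

With idx_users_reg_date on users plus this index, ORDER BY u.reg_date LIMIT 10 can be answered by reading ten entries from the reg_date index and running ten small index lookups against user_comments, instead of aggregating all ~104,000 users first.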
Answer 2 (score: -1)
First of all, why does the table "user_itens" appear in your EXPLAIN? Are you using a view in this query?
When you use the same key in GROUP BY and ORDER BY, and that key is UNIQUE, Postgres can of course come up with a better plan.
Try a higher value for work_mem in this session (using the command SET work_mem = '64MB', or some other value) and then run the query again a few times.
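A quick way to confirm the session setting took effect before re-running the query (SET and SHOW are standard PostgreSQL commands; the 64MB value is only an example):

SET work_mem = '64MB';
SHOW work_mem;  -- should now report 64MB for this session
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query from the question and
-- check whether the sorts switch from "external merge Disk" to an in-memory method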