Why does one of my PostgreSQL servers do a Hash Join while the other does a Nested Loop Semi Join?

Asked: 2011-06-15 19:55:56

Tags: postgresql

I have two different machines running PostgreSQL 8.4.7 with the same data (loaded via pg_restore), but I get different query plans from them. There are other differences between the two machines (one is CentOS, the other Ubuntu, and they were built with different gcc versions), but I would expect them to apply more or less the same planning logic.

One machine uses a Hash Join and is blazingly fast (50 ms). The other uses a Nested Loop and takes forever (10 s). Any hints on how to dig into this?

Here is one of them:

database_production=> EXPLAIN ANALYZE SELECT tags.*, COUNT(*) AS count FROM "tags" LEFT OUTER JOIN taggings ON tags.id = taggings.tag_id AND taggings.context = 'categories' INNER JOIN vendors ON vendors.id = taggings.taggable_id WHERE (taggings.taggable_type = 'Vendor') AND (taggings.taggable_id IN(SELECT vendors.id FROM "vendors" INNER JOIN "deals" ON "deals"."vendor_id" = "vendors"."id" INNER JOIN "programs" ON "programs"."id" = "deals"."program_id" INNER JOIN "memberships" ON "memberships"."program_id" = "programs"."id" WHERE (memberships.user_id = 1) AND (vendors.id NOT IN (SELECT vendor_id FROM vendor_ignores WHERE user_id = 1)))) GROUP BY tags.id, tags.name HAVING COUNT(*) > 0 ORDER BY count DESC;
                                                                                   QUERY PLAN                                                                                   
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=164.89..164.89 rows=1 width=520) (actual time=9444.003..9444.009 rows=6 loops=1)
   Sort Key: (count(*))
   Sort Method:  quicksort  Memory: 17kB
   ->  HashAggregate  (cost=164.86..164.88 rows=1 width=520) (actual time=9443.936..9443.942 rows=6 loops=1)
         Filter: (count(*) > 0)
         ->  Nested Loop Semi Join  (cost=14.92..164.85 rows=1 width=520) (actual time=67.355..9443.645 rows=94 loops=1)
               Join Filter: (public.vendors.id = deals.vendor_id)
               ->  Nested Loop  (cost=9.35..29.93 rows=1 width=528) (actual time=3.570..154.104 rows=7636 loops=1)
                     ->  Nested Loop  (cost=9.35..21.65 rows=1 width=524) (actual time=3.534..83.165 rows=7636 loops=1)
                           ->  Bitmap Heap Scan on taggings  (cost=9.35..13.37 rows=1 width=8) (actual time=3.476..12.277 rows=7636 loops=1)
                                 Recheck Cond: (((taggable_type)::text = 'Vendor'::text) AND ((context)::text = 'categories'::text))
                                 ->  BitmapAnd  (cost=9.35..9.35 rows=1 width=0) (actual time=3.410..3.410 rows=0 loops=1)
                                       ->  Bitmap Index Scan on index_taggings_on_taggable_type  (cost=0.00..4.55 rows=40 width=0) (actual time=1.664..1.664 rows=7636 loops=1)
                                             Index Cond: ((taggable_type)::text = 'Vendor'::text)
                                       ->  Bitmap Index Scan on index_taggings_on_context  (cost=0.00..4.55 rows=40 width=0) (actual time=1.727..1.727 rows=7648 loops=1)
                                             Index Cond: ((context)::text = 'categories'::text)
                           ->  Index Scan using tags_pkey on tags  (cost=0.00..8.27 rows=1 width=520) (actual time=0.004..0.005 rows=1 loops=7636)
                                 Index Cond: (tags.id = taggings.tag_id)
                     ->  Index Scan using vendors_pkey on vendors  (cost=0.00..8.27 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=7636)
                           Index Cond: (public.vendors.id = taggings.taggable_id)
               ->  Nested Loop  (cost=5.57..134.62 rows=24 width=8) (actual time=0.035..1.117 rows=93 loops=7636)
                     ->  Nested Loop  (cost=4.54..100.19 rows=24 width=4) (actual time=0.028..0.344 rows=93 loops=7636)
                           ->  Nested Loop  (cost=0.00..9.57 rows=1 width=8) (actual time=0.010..0.035 rows=3 loops=7636)
                                 ->  Seq Scan on memberships  (cost=0.00..1.29 rows=1 width=4) (actual time=0.004..0.009 rows=3 loops=7636)
                                       Filter: (user_id = 1)
                                 ->  Index Scan using programs_pkey on programs  (cost=0.00..8.27 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=22810)
                                       Index Cond: (programs.id = memberships.program_id)
                           ->  Bitmap Heap Scan on deals  (cost=4.54..90.16 rows=37 width=8) (actual time=0.012..0.042 rows=31 loops=22810)
                                 Recheck Cond: (deals.program_id = programs.id)
                                 ->  Bitmap Index Scan on index_deals_on_program_id  (cost=0.00..4.53 rows=37 width=0) (actual time=0.008..0.008 rows=31 loops=22810)
                                       Index Cond: (deals.program_id = programs.id)
                     ->  Index Scan using vendors_pkey on vendors  (cost=1.03..1.42 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=713413)
                           Index Cond: (public.vendors.id = deals.vendor_id)
                           Filter: (NOT (hashed SubPlan 1))
                           SubPlan 1
                             ->  Seq Scan on vendor_ignores  (cost=0.00..1.02 rows=1 width=4) (actual time=0.017..0.017 rows=0 loops=1)
                                   Filter: (user_id = 1)
 Total runtime: 9444.501 ms
(38 rows)

And the other one, whose results actually fit in a single screen:

    QUERY PLAN                                                                                        
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=977.70..978.94 rows=496 width=23) (actual time=49.859..49.863 rows=6 loops=1)
   Sort Key: (count(*))
   Sort Method:  quicksort  Memory: 17kB
   ->  HashAggregate  (cost=946.81..955.49 rows=496 width=23) (actual time=49.793..49.801 rows=6 loops=1)
         Filter: (count(*) > 0)
         ->  Hash Join  (cost=440.07..875.99 rows=7082 width=23) (actual time=28.661..49.599 rows=94 loops=1)
               Hash Cond: (taggings.tag_id = tags.id)
               ->  Hash Join  (cost=424.91..763.45 rows=7082 width=4) (actual time=27.388..48.105 rows=94 loops=1)
                     Hash Cond: (taggings.taggable_id = public.vendors.id)
                     ->  Seq Scan on taggings  (cost=0.00..184.72 rows=7378 width=8) (actual time=0.030..12.791 rows=7636 loops=1)
                           Filter: (((context)::text = 'categories'::text) AND ((taggable_type)::text = 'Vendor'::text))
                     ->  Hash  (cost=331.68..331.68 rows=7458 width=12) (actual time=27.226..27.226 rows=94 loops=1)
                           ->  Nested Loop  (cost=134.67..331.68 rows=7458 width=12) (actual time=26.056..27.084 rows=94 loops=1)
                                 ->  HashAggregate  (cost=134.67..134.98 rows=31 width=8) (actual time=26.047..26.153 rows=94 loops=1)
                                       ->  Nested Loop  (cost=1.03..134.59 rows=31 width=8) (actual time=24.422..25.890 rows=94 loops=1)
                                             ->  Nested Loop  (cost=0.00..49.95 rows=59 width=4) (actual time=14.902..15.359 rows=94 loops=1)
                                                   ->  Nested Loop  (cost=0.00..9.57 rows=1 width=8) (actual time=0.108..0.143 rows=3 loops=1)
                                                         ->  Seq Scan on memberships  (cost=0.00..1.29 rows=1 width=4) (actual time=0.050..0.057 rows=3 loops=1)
                                                               Filter: (user_id = 1)
                                                         ->  Index Scan using programs_pkey on programs  (cost=0.00..8.27 rows=1 width=4) (actual time=0.020..0.022 rows=1 loops=3)
                                                               Index Cond: (programs.id = memberships.program_id)
                                                   ->  Index Scan using index_deals_on_program_id on deals  (cost=0.00..39.64 rows=59 width=8) (actual time=4.943..5.005 rows=31 loops=3)
                                                         Index Cond: (deals.program_id = programs.id)
                                             ->  Index Scan using vendors_pkey on vendors  (cost=1.03..1.42 rows=1 width=4) (actual time=0.106..0.108 rows=1 loops=94)
                                                   Index Cond: (public.vendors.id = deals.vendor_id)
                                                   Filter: (NOT (hashed SubPlan 1))
                                                   SubPlan 1
                                                     ->  Seq Scan on vendor_ignores  (cost=0.00..1.02 rows=1 width=4) (actual time=0.022..0.022 rows=0 loops=1)
                                                           Filter: (user_id = 1)
                                 ->  Index Scan using vendors_pkey on vendors  (cost=0.00..6.33 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=94)
                                       Index Cond: (public.vendors.id = deals.vendor_id)
               ->  Hash  (cost=8.96..8.96 rows=496 width=23) (actual time=1.257..1.257 rows=496 loops=1)
                     ->  Seq Scan on tags  (cost=0.00..8.96 rows=496 width=23) (actual time=0.051..0.619 rows=496 loops=1)
 Total runtime: 50.357 ms
(34 rows)

2 answers:

Answer 0 (score: 11)

The costs recorded in the two query plans are very different. Postgres derives these costs from its statistics, and the query planner uses them to decide which kind of operation will be most efficient.

If the data differ between the two databases, the statistics will differ and produce different query plans. If the data are the same, one (or both) of the databases probably just has stale statistics. Run VACUUM ANALYZE on the relevant tables and try again.
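A minimal sketch of that step, assuming the tables are the ones that appear in the plans above (adjust the names to your schema):

    -- Refresh planner statistics (and reclaim dead rows) on the tables
    -- referenced by the slow plan; run this on the slow server.
    VACUUM ANALYZE tags;
    VACUUM ANALYZE taggings;
    VACUUM ANALYZE vendors;
    VACUUM ANALYZE deals;
    VACUUM ANALYZE programs;
    VACUUM ANALYZE memberships;
    VACUUM ANALYZE vendor_ignores;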

Edit: apparently VACUUM ANALYZE did the trick in your case. Follow-up steps:

  1. Make sure autovacuum is working properly (a quick way to check is sketched after this list).
    • You are on 8.4, so the default configuration is probably fine. On older versions of Postgres, enabling and tuning autovacuum took more effort.
  2. After large writes (deleting/inserting many rows, or importing from pg_dump), run VACUUM ANALYZE to bring the statistics up to date.
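A sketch of how you might verify both points, using the standard pg_stat_user_tables view (the table names in the WHERE clause are taken from the plans above):

    -- Is autovacuum turned on at all?
    SHOW autovacuum;

    -- When were the relevant tables last vacuumed / analyzed, and by whom?
    SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname IN ('tags', 'taggings', 'vendors', 'deals');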

Answer 1 (score: 0)

According to the Postgres wiki:

  Hashed subplans are fast, but the planner only allows that plan for small result sets; plain subplans are horrendously slow (actually O(N²)). This means performance can look fine in small-scale testing, but then slow down by 5 or more orders of magnitude once a size threshold is crossed. You do not want this to happen.

So the slow database's result set is probably just larger than the threshold beyond which Postgres considers it "too big to hash".

The solution may be to increase your work_mem setting in Postgres, which defaults to 4MB, until Postgres can comfortably hash the subquery.
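A sketch of how you might try that (the 32MB value is only an illustrative guess; tune it to your data):

    -- Raise work_mem for the current session only, then re-run the
    -- EXPLAIN ANALYZE from the question and see whether the planner
    -- switches to the hashed subplan / hash join:
    SET work_mem = '32MB';

    -- To make the change permanent, set work_mem = 32MB in
    -- postgresql.conf and reload the configuration:
    SELECT pg_reload_conf();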