Direct query much slower than subquery with a join

Date: 2013-02-28 03:38:49

Tags: performance postgresql postgis database-performance

I have 2 tables. Their structure is roughly as follows, though I have renamed everything.

CREATE TABLE overlay_polygon
(
  overlay_polygon_id SERIAL PRIMARY KEY,
  some_other_polygon_id INTEGER REFERENCES some_other_polygon (some_other_polygon_id),
  dollar_value NUMERIC,
  geom GEOMETRY(Polygon,26915)
);

CREATE TABLE point
(
  point_id SERIAL PRIMARY KEY,
  some_other_polygon_id INTEGER REFERENCES some_other_polygon (some_other_polygon_id),
  -- A bunch of other fields that this query won't touch
  geom GEOMETRY(Point,26915)
);

point has a spatial index on its geom column, named spix_point, as well as an index on its some_other_polygon_id column.
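
For reference, those indexes correspond to definitions along these lines (spix_point is the real index name; the name of the second, b-tree index is illustrative):

CREATE INDEX spix_point ON point USING GIST (geom);
CREATE INDEX point_some_other_polygon_id_idx ON point (some_other_polygon_id);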

There are about 500,000 rows in point, and nearly every row in point intersects some row in overlay_polygon. Originally, my overlay_polygon table contained a number of rows with very small areas (mostly under 1 square meter) that did not spatially intersect any row in point. After deleting the small rows that intersected nothing in point, 38 rows remain.
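
The cleanup itself was roughly of this shape (a sketch; the exact DELETE statement is not shown here):

DELETE FROM overlay_polygon op
WHERE NOT EXISTS (SELECT 1
                  FROM point p
                  WHERE ST_Intersects(op.geom, p.geom));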

As its name suggests, overlay_polygon is a table of polygons produced by overlaying the polygons of 3 other tables (including some_other_polygon). In particular, I need to do some calculations using dollar_value and certain columns on point. When I set out to delete the rows that intersected no points, in order to speed up future processing, I ended up querying for the COUNT of rows. The most obvious query seemed to be the following.

SELECT op.*, COUNT(point_id) AS num_points
FROM overlay_polygon op
LEFT JOIN point ON op.some_other_polygon_id = point.some_other_polygon_id AND ST_Intersects(op.geom, point.geom)
GROUP BY op.overlay_polygon_id
ORDER BY op.overlay_polygon_id
;

Here is its EXPLAIN (ANALYZE, BUFFERS):

GroupAggregate  (cost=544.45..545.12 rows=38 width=8049) (actual time=284962.944..540959.914 rows=38 loops=1)
  Buffers: shared hit=58694 read=17119, temp read=189483 written=189483
  I/O Timings: read=39171.525
  ->  Sort  (cost=544.45..544.55 rows=38 width=8049) (actual time=271754.952..534154.573 rows=415224 loops=1)
        Sort Key: op.overlay_polygon_id
        Sort Method: external merge  Disk: 897016kB
        Buffers: shared hit=58694 read=17119, temp read=189483 written=189483
        I/O Timings: read=39171.525
        ->  Nested Loop Left Join  (cost=0.00..543.46 rows=38 width=8049) (actual time=0.110..46755.284 rows=415224 loops=1)
              Buffers: shared hit=58694 read=17119
              I/O Timings: read=39171.525
              ->  Seq Scan on overlay_polygon op  (cost=0.00..11.38 rows=38 width=8045) (actual time=0.043..153.255 rows=38 loops=1)
                    Buffers: shared hit=1 read=10
                    I/O Timings: read=152.866
              ->  Index Scan using spix_point on point  (cost=0.00..13.99 rows=1 width=200) (actual time=50.229..1139.868 rows=10927 loops=38)
                    Index Cond: (op.geom && geom)
                    Filter: ((op.some_other_polygon_id = some_other_polygon_id) AND _st_intersects(op.geom, geom))
                    Rows Removed by Filter: 13353
                    Buffers: shared hit=58693 read=17109
                    I/O Timings: read=39018.660
Total runtime: 542172.156 ms

However, I found that this query runs much, much faster:

SELECT *
FROM overlay_polygon
JOIN (SELECT op.overlay_polygon_id, COUNT(point_id) AS num_points
      FROM overlay_polygon op
      LEFT JOIN point ON op.some_other_polygon_id = point.some_other_polygon_id AND ST_Intersects(op.geom, point.geom)
      GROUP BY op.overlay_polygon_id
     ) x ON x.overlay_polygon_id = overlay_polygon.overlay_polygon_id
ORDER BY overlay_polygon.overlay_polygon_id
;

Its EXPLAIN (ANALYZE, BUFFERS) is below.

Sort  (cost=557.78..557.88 rows=38 width=8057) (actual time=18904.661..18904.748 rows=38 loops=1)
  Sort Key: overlay_polygon.overlay_polygon_id
  Sort Method: quicksort  Memory: 126kB
  Buffers: shared hit=58690 read=17134
  I/O Timings: read=9924.328
  ->  Hash Join  (cost=544.88..556.78 rows=38 width=8057) (actual time=18903.697..18904.210 rows=38 loops=1)
        Hash Cond: (overlay_polygon.overlay_polygon_id = op.overlay_polygon_id)
        Buffers: shared hit=58690 read=17134
        I/O Timings: read=9924.328
        ->  Seq Scan on overlay_polygon  (cost=0.00..11.38 rows=38 width=8045) (actual time=0.127..0.411 rows=38 loops=1)
              Buffers: shared hit=2 read=9
              I/O Timings: read=0.173
        ->  Hash  (cost=544.41..544.41 rows=38 width=12) (actual time=18903.500..18903.500 rows=38 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 2kB
              Buffers: shared hit=58688 read=17125
              I/O Timings: read=9924.154
              ->  HashAggregate  (cost=543.65..544.03 rows=38 width=8) (actual time=18903.276..18903.379 rows=38 loops=1)
                    Buffers: shared hit=58688 read=17125
                    I/O Timings: read=9924.154
                    ->  Nested Loop Left Join  (cost=0.00..543.46 rows=38 width=8) (actual time=0.052..17169.606 rows=415224 loops=1)
                          Buffers: shared hit=58688 read=17125
                          I/O Timings: read=9924.154
                          ->  Seq Scan on overlay_polygon op  (cost=0.00..11.38 rows=38 width=8038) (actual time=0.004..0.537 rows=38 loops=1)
                                Buffers: shared hit=1 read=10
                                I/O Timings: read=0.279
                          ->  Index Scan using spix_point on point  (cost=0.00..13.99 rows=1 width=200) (actual time=4.422..381.991 rows=10927 loops=38)
                                Index Cond: (op.geom && geom)
                                Filter: ((op.some_other_polygon_id = some_other_polygon_id) AND _st_intersects(op.geom, geom))
                                Rows Removed by Filter: 13353
                                Buffers: shared hit=58687 read=17115
                                I/O Timings: read=9923.875
Total runtime: 18905.293 ms

As you can see, they have comparable cost estimates, though I'm not sure how accurate those estimates are. I'm skeptical of cost estimates involving PostGIS functions. Both tables have had VACUUM ANALYZE FULL run on them since they were last modified and before the queries were run.
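
That maintenance corresponds to commands along these lines (note that PostgreSQL's actual keyword order is VACUUM FULL ANALYZE):

VACUUM FULL ANALYZE overlay_polygon;
VACUUM FULL ANALYZE point;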

Perhaps I simply don't know how to read EXPLAIN ANALYZE output, but I don't understand why the run times of these queries differ so dramatically. Can anyone identify anything? The only possibility I can think of involves the number of columns involved in the LEFT JOIN.

EDIT 1

Per @ChrisTravers' suggestion, I increased work_mem and reran the first query. I don't believe this represents a significant improvement.

Executed:

SET work_mem='4MB';

(It was previously 1 MB.)

Then, executing the first query gave these results.

GroupAggregate  (cost=544.45..545.12 rows=38 width=8049) (actual time=339910.046..495775.478 rows=38 loops=1)
  Buffers: shared hit=58552 read=17261, temp read=112133 written=112133
  ->  Sort  (cost=544.45..544.55 rows=38 width=8049) (actual time=325391.923..491329.208 rows=415224 loops=1)
        Sort Key: op.overlay_polygon_id
        Sort Method: external merge  Disk: 896904kB
        Buffers: shared hit=58552 read=17261, temp read=112133 written=112133
        ->  Nested Loop Left Join  (cost=0.00..543.46 rows=38 width=8049) (actual time=14.698..234266.573 rows=415224 loops=1)
              Buffers: shared hit=58552 read=17261
              ->  Seq Scan on overlay_polygon op  (cost=0.00..11.38 rows=38 width=8045) (actual time=14.612..15.384 rows=38 loops=1)
                    Buffers: shared read=11
              ->  Index Scan using spix_point on point  (cost=0.00..13.99 rows=1 width=200) (actual time=95.262..5451.636 rows=10927 loops=38)
                    Index Cond: (op.geom && geom)
                    Filter: ((op.some_other_polygon_id = some_other_polygon_id) AND _st_intersects(op.geom, geom))
                    Rows Removed by Filter: 13353
                    Buffers: shared hit=58552 read=17250
Total runtime: 496936.775 ms

EDIT 2

Well, here is a nice, big smell that I hadn't noticed earlier (mostly because I have trouble reading ANALYZE output). Sorry I didn't spot it sooner.

Sort  (cost=544.45..544.55 rows=38 width=8049) (actual time=271754.952..534154.573 rows=415224 loops=1)

Estimated rows: 38. Actual rows: over 400K. Ideas, anyone?

2 Answers:

Answer 0 (score: 2)

My immediate thought is that this may have to do with the work_mem limit. The difference between the plans is that in the first, you join and then aggregate, while in the second you aggregate and then join. This means your aggregation set is narrower, so less memory is used in that operation.

It would be interesting to see what changes if you double work_mem and try again.
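
Something along these lines, assuming the 1 MB starting value that EDIT 1 above confirms:

SHOW work_mem;        -- check the current value
SET work_mem = '2MB'; -- double it, for this session only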

EDIT: Now that we know the work_mem increase yielded only a modest improvement, the next issue is the sort row estimate. I suspect the sort is actually blowing past work_mem and expects this to be easy because it expects only 38 rows in, but instead puts out a great many rows. It isn't clear to me where the planner gets this information, since the planner clearly (and correctly) estimates 38 rows as the number of rows we expect out of the aggregation. This part is starting to look like a planner bug to me, but I'm having a hard time putting my finger on it. It may be worth writing up and raising on the pgsql-general email list. It almost looks to me as if the planner is getting confused between the memory needed for the sort and the memory needed for the aggregation.

Answer 1 (score: 1)

As you noted in your EDIT 2, there is indeed a big mismatch between the estimated and actual numbers of rows returned. But the root of the problem is further down the tree, here:

Index Scan using spix_point on point  (cost=0.00..13.99 rows=1 width=200) 
   (actual time=95.262..5451.636 rows=10927 loops=38)

This affects all the nodes above it in the tree: the Nested Loop and the Sort.

I would try the following:

  1. First, make sure the statistics are up to date:

    VACUUM ANALYZE point;
    VACUUM ANALYZE overlay_polygon;
    
  2. If that brings no luck, increase the statistics target for the geometry columns:

    ALTER TABLE point ALTER geom SET STATISTICS 500;
    ALTER TABLE overlay_polygon ALTER geom SET STATISTICS 1500;
    

    Then analyze the tables again.

  3. IMHO, Nested Loops are not very good here; a Hash join would be more appropriate. Try issuing:

    SET enable_nestloop TO off;
    

    at the session level and see if it helps (a note on restoring this setting follows the list).


  4. After looking at the query a bit more, I think it is worth raising the statistics target on the some_other_polygon_id column:

    ALTER TABLE point ALTER some_other_polygon_id SET STATISTICS 5000;
    

    Also, I see no reason why your second query should be so much faster than the first. Am I right that each query was executed only once, on a "cold" database? If so, the second query benefited from the OS filesystem cache and therefore ran faster.

    Using spix_point is a bad decision by the planner here, because point will be scanned in full to fulfill the LEFT JOIN. Therefore, one way to improve the query might be to force a Seq Scan on this table. This can be done with the help of a CTE:
    WITH p AS (SELECT point_id, some_other_polygon_id, geom FROM point)
    SELECT op.*, COUNT(p.point_id) AS num_points
      FROM overlay_polygon op
      LEFT JOIN p ON op.some_other_polygon_id = p.some_other_polygon_id
           AND ST_Intersects(op.geom, p.geom)
     GROUP BY op.overlay_polygon_id
     ORDER BY op.overlay_polygon_id;
    

    But this will be slower on the materialization side. Still, give it a try.
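
A follow-up to step 3: enable_nestloop affects every query in the session, so restore the default once you have finished experimenting. A minimal sketch:

RESET enable_nestloop;  -- back to the default (on)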