PostgreSQL query is too slow

Date: 2017-03-06 04:24:45

Tags: postgresql indexing

I use PostgreSQL to store my data, and I created indexes to speed up query times.

After creating the indexes, queries ran very fast, about 1.5 seconds each. However, after a few days the same queries became very slow, taking about 20-28 seconds each.

I tried dropping the indexes and recreating them, and the queries ran fast again.

Can you help me solve this problem, or do you have any idea what causes it?

P/S: here is the query:

SELECT category,
       video_title AS title,
       event_count AS contentView,
       VOD_GROUPBY_ANDSORT.rank
FROM
  (SELECT VOD_SORTBY_VIEW.category,
          VOD_SORTBY_VIEW.video_title,
          VOD_SORTBY_VIEW.event_count,
          ROW_NUMBER() OVER(PARTITION BY VOD_SORTBY_VIEW.category
                            ORDER BY VOD_SORTBY_VIEW.event_count DESC) AS RN,
          DENSE_RANK() OVER(
                            ORDER BY VOD_SORTBY_VIEW.category ASC) AS rank
   FROM
     (SELECT VOD.category AS category,
             VOD.video_title,
             SUM(INV.event_count) AS event_count
      FROM
        (SELECT content_hash.hash_value,
                VODCT.category,
                VODCT.video_title
         FROM
           (SELECT vod_content.content_id,
                   vod_content.category,
                   vod_content.video_title
            FROM vod_content
                WHERE vod_content.category IS NOT NULL) VODCT
         LEFT JOIN content_hash ON content_hash.content_value = VODCT.content_id) VOD
      LEFT JOIN inventory_stats INV ON INV.hash_value = VOD.hash_value
      WHERE TIME BETWEEN '2017-02-06 08:00:00'::TIMESTAMP AND '2017-03-06 08:00:00'::TIMESTAMP
      GROUP BY VOD.category,
               VOD.video_title ) VOD_SORTBY_VIEW ) VOD_GROUPBY_ANDSORT
WHERE RN <= 3
  AND event_count > 100
ORDER BY category

Here is the EXPLAIN ANALYZE output:

"QUERY PLAN"
"Subquery Scan on vod_groupby_andsort  (cost=368586.86..371458.16   rows=6381 width=63) (actual time=19638.213..19647.468 rows=177 loops=1)"
"  Filter: ((vod_groupby_andsort.rn <= 3) AND  (vod_groupby_andsort.event_count > 100))"
"  Rows Removed by Filter: 10246"
"  ->  WindowAgg  (cost=368586.86..370596.77 rows=57426 width=71)  (actual time=19638.199..19646.856 rows=10423 loops=1)"
"        ->  WindowAgg  (cost=368586.86..369735.38 rows=57426 width=63) (actual time=19638.194..19642.030 rows=10423 loops=1)"
"              ->  Sort  (cost=368586.86..368730.43 rows=57426 width=55) (actual time=19638.185..19638.984 rows=10423 loops=1)"
"                    Sort Key: vod_sortby_view.category, vod_sortby_view.event_count DESC"
"                    Sort Method: quicksort  Memory: 1679kB"
"                    ->  Subquery Scan on vod_sortby_view  (cost=350535.62..362084.01 rows=57426 width=55) (actual  time=16478.589..19629.400 rows=10423 loops=1)"
"                          ->  GroupAggregate  (cost=350535.62..361509.75 rows=57426 width=55) (actual time=16478.589..19628.381 rows=10423 loops=1)"
"                                Group Key: vod_content.category, vod_content.video_title"
"                                ->  Sort  (cost=350535.62..353135.58 rows=1039987 width=51) (actual time=16478.570..19436.741 rows=1275817 loops=1)"
"                                      Sort Key: vod_content.category, vod_content.video_title"
"                                      Sort Method: external merge  Disk: 76176kB"
"                                      ->  Hash Join  (cost=95179.29..175499.62 rows=1039987 width=51) (actual time=299.040..807.418 rows=1275817 loops=1)"
"                                            Hash Cond: (inv.hash_value = content_hash.hash_value)"
"                                            ->  Bitmap Heap Scan on inventory_stats inv  (cost=48708.84..114604.81 rows=1073198 width=23) (actual time=133.873..269.249 rows=1397466 loops=1)"
"                                                  Recheck Cond: ((""time"" >= '2017-02-06 08:00:00'::timestamp without time zone) AND (""time"" <= '2017-03-06 08:00:00'::timestamp without time zone))"
"                                                  Heap Blocks: exact=11647"
"                                                  ->  Bitmap Index Scan on inventory_stats_pkey  (cost=0.00..48440.54 rows=1073198 width=0) (actual time=132.113..132.113 rows=1397466 loops=1)"
"                                                        Index Cond: ((""time"" >= '2017-02-06 08:00:00'::timestamp without time zone) AND (""time"" <= '2017-03-06 08:00:00'::timestamp without time zone))"
"                                            ->  Hash  (cost=46373.37..46373.37 rows=7766 width=66) (actual time=165.125..165.125 rows=13916 loops=1)"
"                                                  Buckets: 16384 (originally 8192)  Batches: 1 (originally 1)  Memory Usage: 1505kB"
"                                                  ->  Nested Loop  (cost=1.72..46373.37 rows=7766 width=66) (actual time=0.045..159.441 rows=13916 loops=1)"
"                                                        ->  Seq Scan on content_hash  (cost=0.00..389.14 rows=8014 width=43) (actual time=0.007..2.185 rows=16365 loops=1)"
"                                                        ->  Bitmap Heap Scan on vod_content  (cost=1.72..5.73 rows=1 width=72) (actual time=0.009..0.009 rows=1 loops=16365)"
"                                                              Recheck Cond: (content_id = content_hash.content_value)"
"                                                              Filter: (category IS NOT NULL)"
"                                                              Rows Removed by Filter: 0"
"                                                              Heap Blocks: exact=15243"
"                                                              ->  Bitmap Index Scan on vod_content_pkey  (cost=0.00..1.72 rows=1 width=0) (actual time=0.007..0.007 rows=1 loops=16365)"
"                                                                    Index Cond: (content_id = content_hash.content_value)"
"Planning time: 1.665 ms"
"Execution time: 19655.693 ms"

2 answers:

Answer 0 (score: 0):

You probably need to VACUUM and ANALYZE your tables more aggressively, especially if you are doing a lot of deletes and updates.

When a row is deleted or updated, it is not physically removed, just marked as obsolete. VACUUM cleans up these dead rows.

ANALYZE updates the statistics about the data that the query planner uses.

Normally these are run by the autovacuum daemon. It may have been disabled, or it may not be running often enough.

See this blog about Slow PostgreSQL Performance and the PostgreSQL docs about Routine Vacuuming for more information.
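As a sketch of what that looks like in practice (the table names are taken from the question's query; the columns are from the standard `pg_stat_user_tables` view), you can vacuum and analyze the tables manually and check whether autovacuum has actually been running on them:

```sql
-- Reclaim dead rows and refresh planner statistics on the tables
-- involved in the query (table names taken from the question).
VACUUM ANALYZE inventory_stats;
VACUUM ANALYZE vod_content;
VACUUM ANALYZE content_hash;

-- Check how many dead rows each table carries and when
-- (auto)vacuum and (auto)analyze last ran on it.
SELECT relname, n_dead_tup,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('inventory_stats', 'vod_content', 'content_hash');
```

If `last_autovacuum` and `last_autoanalyze` are NULL or very old while `n_dead_tup` is large, autovacuum is not keeping up, which matches the symptom of queries degrading over a few days and recovering when the indexes are rebuilt.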

Answer 1 (score: 0):

Here is an attempt at a slimmed-down version of the query. I'm not saying it's faster. Also, since I can't run it, there may be some problems.

The left joins were converted to inner joins, since a time value from the second join is required anyway.

Also, I'm curious what the purpose of the dense_rank function is. It looks like you're getting the top three titles per category, and then giving all titles in the same category an identical number based on the alphanumeric ordering of the categories? The category already gives them a unique common identifier.

SELECT category, video_title AS title, event_count AS contentView,
    DENSE_RANK() OVER(ORDER BY v.category ASC) AS rank
FROM (
    SELECT c.category, c.video_title,
        SUM(i.event_count) AS event_count,
        ROW_NUMBER() OVER(PARTITION BY c.category ORDER BY SUM(i.event_count) DESC) AS rn
    FROM vod_content c
    JOIN content_hash h ON h.content_value = c.content_id
    JOIN inventory_stats i ON i.hash_value = h.hash_value
    WHERE c.category IS NOT NULL
        AND i.time BETWEEN '2017-02-06 08:00:00'::TIMESTAMP AND '2017-03-06 08:00:00'::TIMESTAMP
    GROUP BY c.category, c.video_title
) v
WHERE rn <= 3 AND event_count > 100
ORDER BY category
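One further point, suggested by the plan posted in the question rather than by the rewrite above: the sort feeding the GroupAggregate reports `Sort Method: external merge  Disk: 76176kB`, meaning it spills roughly 76 MB to disk. A session-level bump of `work_mem` may keep that sort in memory; the value below is only an illustration, not a tuned recommendation:

```sql
-- The posted plan shows "Sort Method: external merge  Disk: 76176kB".
-- Raising work_mem for this session (128MB is an illustrative value)
-- can allow the sort to run as an in-memory quicksort instead.
SET work_mem = '128MB';
-- ... run the query here, then optionally restore the default:
RESET work_mem;
```

Note that `work_mem` applies per sort/hash operation per backend, so a large value should be set per-session for this query rather than globally.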