Optimizing a join on many columns

Asked: 2017-06-02 12:27:02

Tags: sql postgresql join query-optimization

I am running PostgreSQL 9.6.2 and have a table with 7 columns and roughly 2,900,000 rows. The table is temporary; it is part of a subject deduplication process whose goal is to assign a new id (s_id_new) to identical subjects based on different rule sets. In total I run this kind of inner join about 10-12 times, each time on a similar but slightly different subset of the data, with different WHERE conditions and different join columns.

Right now the query is very inefficient and never finishes (I had to cancel it after 2 hours).

For optimization purposes I created a subset of the data (50,000 rows).

\d subject_subset;
     Column     |          Type          | Modifiers
----------------+------------------------+-----------
 s_id           | text                   |
 surname_clean  | character varying(20)  |
 name_clean     | character varying(20)  |
 fullname_clean | character varying(100) |
 id1            | character varying(20)  |
 id2            | character varying(20)  |
 id3            | character varying(20)  |
 s_id_new       | character varying(20)  |
Indexes:
    "subject_subset_s_id_new_idx" btree (s_id_new)

The query I am trying to optimize:

select s_id_new, max(I_s_id) as s_id_deduplicated
from (select a.*, b.s_id_new as I_s_id
      from public.subject_subset a
         inner join public.subject_subset b
            on a.surname_clean = b.surname_clean
           and a.id2 = b.id2
      where a.id1 is null
        and a.id2 is not null
        and a.surname_clean is not null) h
group by s_id_new;



The result of EXPLAIN ANALYZE:
https://explain.depesz.com/s/7knH

"GroupAggregate  (cost=5616.65..5620.39 rows=142 width=90) (actual time=32542.127..46938.858 rows=2889 loops=1)"
"  Group Key: a.s_id_new"
"  ->  Sort  (cost=5616.65..5617.42 rows=310 width=116) (actual time=32542.116..43194.626 rows=18356220 loops=1)"
"        Sort Key: a.s_id_new"
"        Sort Method: external merge  Disk: 531760kB"
"        ->  Hash Join  (cost=1114.72..5603.82 rows=310 width=116) (actual time=13.159..4892.011 rows=18356220 loops=1)"
"              Hash Cond: (((b.surname_clean)::text = (a.surname_clean)::text) AND ((b.id2)::text = (a.id2)::text))"
"              ->  Seq Scan on subject_subset b  (cost=0.00..1111.00 rows=50000 width=174) (actual time=0.011..10.775 rows=50000 loops=1)"
"              ->  Hash  (cost=1111.00..1111.00 rows=248 width=174) (actual time=13.137..13.137 rows=15044 loops=1)"
"                    Buckets: 16384 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 1151kB"
"                    ->  Seq Scan on subject_subset a  (cost=0.00..1111.00 rows=248 width=174) (actual time=0.005..9.330 rows=15044 loops=1)"
"                          Filter: ((id1 IS NULL) AND (id2 IS NOT NULL) AND (surname_clean IS NOT NULL))"
"                          Rows Removed by Filter: 34956"
"Planning time: 0.236 ms"
"Execution time: 47013.839 ms"

As far as I can tell, the SORT in the subquery is causing the problem: it consumes a huge amount of space sorting the join output (over 18 million rows), and I cannot figure out how to optimize it.
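
A standard first experiment with a sort that spills to disk like this ("Sort Method: external merge  Disk: 531760kB") is to raise work_mem for the session; the value below is only a guess and has to fit in the available RAM:

SET work_mem = '1GB';  -- per sort/hash operation; the in-memory sort may
                       -- need noticeably more than the ~520 MB spilled to disk
-- re-run the query: EXPLAIN ANALYZE should then report
-- "Sort Method: quicksort" instead of "external merge"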

The only thing that brought a slight performance improvement was assigning new integer ids with dense_rank, but it is not enough.
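
A minimal sketch of that dense_rank step (a reconstruction; the exact query is not in the post, and subject_subset_int is a made-up name):

-- map the varchar ids to compact integers before joining
CREATE TEMP TABLE subject_subset_int AS
SELECT dense_rank() OVER (ORDER BY s_id_new) AS s_id_int,
       surname_clean, id1, id2
FROM subject_subset;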

1 Answer:

Answer 0 (score: 0)

The big sort is killing you.

I have three suggestions:

  1. Run ANALYZE subject_subset to collect statistics for the table. Statistics are not gathered automatically for temporary tables, and in your case the estimates are far too low.

    Perhaps that alone is enough to make it better!
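
    A minimal sketch (the EXPLAIN re-check is only there to verify the effect):

    ANALYZE subject_subset;
    -- then re-run EXPLAIN ANALYZE on the query: the estimates
    -- (rows=248 and rows=310 above) should move much closer to
    -- the actual row counts (15044 and 18356220)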

  2. Try an index on (id2, surname_clean, s_id_new); that would help a nested loop join (I don't know whether it will end up faster).
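
    A sketch of that index (the index name is my invention):

    -- equality columns first; s_id_new last, so a per-row
    -- "ORDER BY s_id_new DESC LIMIT 1" can be answered by a
    -- single backward index probe
    CREATE INDEX subject_subset_nl_idx
        ON subject_subset (id2, surname_clean, s_id_new);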

    You could then try a lateral join, for example:

    SELECT a.s_id_new,
           max(b.i_s_id) AS s_id_deduplicated
    FROM subject_subset a
       CROSS JOIN LATERAL (SELECT s_id_new AS i_s_id
                           FROM subject_subset
                           WHERE a.surname_clean = surname_clean
                             AND a.id2 = id2
                           ORDER BY s_id_new DESC
                           LIMIT 1
                          ) b
    GROUP BY a.s_id_new;
    

    The nested loop join will be expensive, but the sorting should be fast.

  3. Stick with the hash join, but reduce the number of rows:

    SELECT a.s_id_new,
           max(b.i_s_id) AS s_id_deduplicated
    FROM subject_subset a
       JOIN (SELECT surname_clean, id2,
                    max(s_id_new) AS i_s_id
             FROM subject_subset
             GROUP BY surname_clean, id2
            ) b
          USING (surname_clean, id2)
    WHERE a.id1 IS NULL 
      AND a.id2 IS NOT NULL
      AND a.surname_clean IS NOT NULL
    GROUP BY a.s_id_new;
    

    Perhaps an index on (surname_clean, id2) could help here, but I am not certain.
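
    A sketch of that index as well (again, the name is made up):

    -- might let the inner GROUP BY run as a sorted GroupAggregate
    -- over an index scan instead of hashing the whole table
    CREATE INDEX subject_subset_grp_idx
        ON subject_subset (surname_clean, id2);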