I have 4 tables:
create table web_content_3 ( content integer, hits bigint, bytes bigint, appid varchar(32) );
create table web_content_4 ( content character varying(128), hits bigint, bytes bigint, appid varchar(32) );
create table web_content_5 ( content character varying(128), hits bigint, bytes bigint, appid integer );
create table web_content_6 ( content integer, hits bigint, bytes bigint, appid integer );
I run the same GROUP BY query against each of them, over roughly 2 million records per table:
SELECT content, sum(hits) as hits, sum(bytes) as bytes, appid from web_content_{3,4,5,6} GROUP BY content,appid;
The results are:
Table Name    | content   | appid     | Time Taken [ms]
--------------+-----------+-----------+----------------
web_content_3 | integer   | character |      27277.931
web_content_4 | character | character |     151219.388
web_content_5 | character | integer   |     127252.023
web_content_6 | integer   | integer   |       5412.096
Here the web_content_6 query takes only about 5 seconds, compared to the other three combinations. From these numbers we can say that the integer, integer combination is much faster to GROUP BY, but the question is: why?
I also have the EXPLAIN ANALYSE results, but they don't really explain the drastic difference between the web_content_4 and web_content_6 queries.
Here they are:
test=# EXPLAIN ANALYSE SELECT content, sum(hits) as hits, sum(bytes) as bytes, appid from web_content_4 GROUP BY content,appid;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=482173.36..507552.31 rows=17680 width=63) (actual time=138099.612..151565.655 rows=17680 loops=1)
-> Sort (cost=482173.36..487196.11 rows=2009100 width=63) (actual time=138099.202..149256.707 rows=2009100 loops=1)
Sort Key: content, appid
Sort Method: external merge Disk: 152488kB
-> Seq Scan on web_content_4 (cost=0.00..45218.00 rows=2009100 width=63) (actual time=0.010..349.144 rows=2009100 loops=1)
Total runtime: 151613.569 ms
(6 rows)
Time: 151614.106 ms
test=# EXPLAIN ANALYSE SELECT content, sum(hits) as hits, sum(bytes) as bytes, appid from web_content_6 GROUP BY content,appid;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=368814.36..394194.51 rows=17760 width=24) (actual time=3282.333..5840.953 rows=17760 loops=1)
-> Sort (cost=368814.36..373837.11 rows=2009100 width=24) (actual time=3282.176..3946.025 rows=2009100 loops=1)
Sort Key: content, appid
Sort Method: external merge Disk: 74632kB
-> Seq Scan on web_content_6 (cost=0.00..34864.00 rows=2009100 width=24) (actual time=0.011..297.235 rows=2009100 loops=1)
Total runtime: 6172.960 ms
Answer 0 (score: 3)
The performance of this aggregation is going to be driven by the speed of the sort. All else being equal, larger data requires more time to sort than smaller data. The "fast" case is sorting 74 MB; the "slow" one, 152 MB.
That accounts for some difference in performance, but not, in most cases, a 30x difference. One case where you would see a dramatic difference is when the smaller data fits in memory and the larger data does not: spilling to disk is expensive.
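To see where those sort sizes come from, one illustrative check (my own addition, not from the original answer) is to compare the average stored width of the grouping columns using pg_column_size:

-- Average on-disk size, in bytes, of each grouping column.
-- An integer is a fixed 4 bytes; a varchar stores its text plus a length header,
-- so every row carried into the sort is wider.
SELECT avg(pg_column_size(content)) AS content_bytes,
       avg(pg_column_size(appid))   AS appid_bytes
FROM web_content_4;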
One suspects that the data is already sorted, or nearly sorted, on web_content_6(content, appid). That could shorten the time needed for the sort. If you compare the actual time to the estimated cost for each of the two, you'll see that the "fast" version runs relatively faster than expected (assuming the costs are comparable).
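One way to test that suspicion (assuming ANALYZE has been run so statistics exist) is to look at the correlation statistic PostgreSQL keeps in pg_stats; a value near 1.0 or -1.0 means the column's values are stored in nearly sorted physical order:

-- correlation close to +/-1.0 suggests the table is already
-- (almost) sorted on that column, which makes the sort step cheap.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('web_content_4', 'web_content_6')
  AND attname IN ('content', 'appid');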
Answer 1 (score: 3)
If you can spare the memory, you can tell PostgreSQL to use more of it for sorting and such. I built a table, populated it with random data, and analyzed it before running this query.
EXPLAIN ANALYSE
SELECT content, sum(hits) as hits, sum(bytes) as bytes, appid
from web_content_4
GROUP BY content,appid;
"GroupAggregate (cost=364323.43..398360.86 rows=903791 width=96) (actual time=25059.086..29789.234 rows=1998067 loops=1)"
" -> Sort (cost=364323.43..369323.34 rows=1999961 width=96) (actual time=25057.540..27907.143 rows=2000000 loops=1)"
" Sort Key: content, appid"
" Sort Method: external merge Disk: 216016kB"
" -> Seq Scan on web_content_4 (cost=0.00..52472.61 rows=1999961 width=96) (actual time=0.010..475.187 rows=2000000 loops=1)"
"Total runtime: 30012.427 ms"
I got the same execution plan you did. In my case, this query performs an external merge sort that needs about 216 MB of disk. I can tell PostgreSQL to set aside more memory for this query by setting the value of work_mem. (Setting work_mem this way affects only my current connection.)
set work_mem = '250MB';
EXPLAIN ANALYSE
SELECT content, sum(hits) as hits, sum(bytes) as bytes, appid
from web_content_4
GROUP BY content,appid;
"HashAggregate (cost=72472.22..81510.13 rows=903791 width=96) (actual time=3196.777..4505.290 rows=1998067 loops=1)"
" -> Seq Scan on web_content_4 (cost=0.00..52472.61 rows=1999961 width=96) (actual time=0.019..437.252 rows=2000000 loops=1)"
"Total runtime: 4726.401 ms"
Now PostgreSQL is using a hash aggregate, and execution time dropped by a factor of 6, from 30 seconds down to about 5 seconds.
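As a small usage note (not part of the original answer), you can inspect the session value and put it back afterwards with the standard SHOW and RESET commands:

SHOW work_mem;   -- display the value in effect for this connection
RESET work_mem;  -- revert to the server's configured default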
I didn't test web_content_6, because replacing text with integers usually requires a couple of joins to recover the text. So I'm not sure we'd be comparing apples to apples there.
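As a sketch of what that round trip might look like, assuming a hypothetical content_lookup table that maps the integer ids back to the original text (the question doesn't show such a table):

-- Aggregate on the cheap integer key first, then join once to
-- recover the human-readable text for each group.
SELECT l.content_text, agg.hits, agg.bytes, agg.appid
FROM (
    SELECT content, sum(hits) AS hits, sum(bytes) AS bytes, appid
    FROM web_content_6
    GROUP BY content, appid
) agg
JOIN content_lookup l ON l.content_id = agg.content;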