I have this query which takes 86 seconds to execute:
select cust_id customer_id,
cust_first_name customer_first_name,
cust_last_name customer_last_name,
cust_prf customer_prf,
cust_birth_country customer_birth_country,
cust_login customer_login,
cust_email_address customer_email_address,
date_year ddyear,
sum(((stock_ls_price-stock_ws_price-stock_ds_price)+stock_es_price)/2) total_yr,
's' stock_type
from customer, stock, date
where customer_k = stock_customer_k
and stock_soldate_k = date_k
group by cust_id, cust_first_name, cust_last_name, cust_prf, cust_birth_country, cust_login, cust_email_address, date_year;
EXPLAIN ANALYZE RESULT:
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate  (cost=639753.55..764040.06 rows=2616558 width=213) (actual time=81192.575..86536.398 rows=190581 loops=1)
  Group Key: customer.cust_id, customer.cust_first_name, customer.cust_last_name, customer.cust_prf, customer.cust_birth_country, customer.cust_login, customer.cust_email_address, date.date_year
  ->  Sort  (cost=639753.55..646294.95 rows=2616558 width=213) (actual time=81192.468..83977.960 rows=2685453 loops=1)
        Sort Key: customer.cust_id, customer.cust_first_name, customer.cust_last_name, customer.cust_prf, customer.cust_birth_country, customer.cust_login, customer.cust_email_address, date.date_year
        Sort Method: external merge  Disk: 460920kB
        ->  Hash Join  (cost=6527.66..203691.58 rows=2616558 width=213) (actual time=60.500..2306.082 rows=2685453 loops=1)
              Hash Cond: (stock.stock_customer_k = customer.customer_k)
              ->  Merge Join  (cost=1423.66..144975.59 rows=2744641 width=30) (actual time=8.820..1412.109 rows=2750311 loops=1)
                    Merge Cond: (date.date_k = stock.stock_soldate_k)
                    ->  Index Scan using date_key_idx on date  (cost=0.29..2723.33 rows=73049 width=8) (actual time=0.013..7.164 rows=37622 loops=1)
                    ->  Index Scan using stock_soldate_k_index on stock  (cost=0.43..108829.12 rows=2880404 width=30) (actual time=0.004..735.043 rows=2750312 loops=1)
              ->  Hash  (cost=3854.00..3854.00 rows=100000 width=191) (actual time=51.650..51.650 rows=100000 loops=1)
                    Buckets: 16384  Batches: 1  Memory Usage: 16139kB
                    ->  Seq Scan on customer  (cost=0.00..3854.00 rows=100000 width=191) (actual time=0.004..30.341 rows=100000 loops=1)
Planning time: 1.761 ms
Execution time: 86621.807 ms
I have work_mem=512MB. I have created indexes on cust_id, customer_k, stock_customer_k, stock_soldate_k and date_k.
customer has about 100,000 rows, stock has about 3,000,000 rows, and date has about 80,000 rows.
How can I make this query run faster? Any help would be appreciated!
Table definitions
date:
Column | Type | Modifiers
---------------------+---------------+-----------
date_k | integer | not null
date_id | character(16) | not null
date_date | date |
date_year | integer |
stock:
Column | Type | Modifiers
-----------------------+--------------+-----------
stock_soldate_k | integer |
stock_soltime_k | integer |
stock_customer_k | integer |
stock_ds_price | numeric(7,2) |
stock_es_price | numeric(7,2) |
stock_ls_price | numeric(7,2) |
stock_ws_price | numeric(7,2) |
customer:
Column | Type | Modifiers
---------------------------+-----------------------+-----------
customer_k | integer | not null
customer_id | character(16) | not null
cust_first_name | character(20) |
cust_last_name | character(30) |
cust_prf | character(1) |
cust_birth_country | character varying(20) |
cust_login | character(13) |
cust_email_address | character(50) |
TABLE "stock" CONSTRAINT "st1" FOREIGN KEY (stock_soldate_k) REFERENCES date(date_k)
"st2" FOREIGN KEY (stock_customer_k) REFERENCES customer(customer_k)
Answer 0 (score: 1)
The big performance penalty you are paying is the external storage of roughly 450MB of intermediate data: Sort Method: external merge Disk: 460920kB. This happens because the planner first has to satisfy the join conditions between the 3 tables, including the possibly inefficient join on table customer, before the aggregation with sum() takes place, even though the aggregation could perfectly well be performed on table stock alone.

Because your tables are fairly large, you are better off reducing the number of qualifying rows as soon as possible, preferably before any joins. In this case that means aggregating table stock in a sub-query and joining that result to the other two tables:
SELECT c.cust_id AS customer_id,
c.cust_first_name AS customer_first_name,
c.cust_last_name AS customer_last_name,
c.cust_prf AS customer_prf,
c.cust_birth_country AS customer_birth_country,
c.cust_login AS customer_login,
c.cust_email_address AS customer_email_address,
d.date_year AS ddyear,
ss.total_yr,
's' stock_type
FROM (
SELECT
stock_customer_k AS ck,
stock_soldate_k AS sdk,
sum((stock_ls_price-stock_ws_price-stock_ds_price+stock_es_price)*0.5) AS total_yr
FROM stock
GROUP BY 1, 2) ss
JOIN customer c ON c.customer_k = ss.ck
JOIN date d ON d.date_k = ss.sdk;
The sub-query on stock will result in far fewer rows, depending on the average number of orders per customer per date. Also, inside the sum() function, multiplying by 0.5 is much cheaper than dividing by 2 (although in the grand scheme of things the gain will be relatively minor).
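If you want to check the difference on your own machine, a quick psql comparison along these lines (purely illustrative and not part of the original answer; absolute timings depend on your hardware) contrasts the two forms on plain numeric values:

\timing on
-- division of a numeric value by an integer
SELECT sum(x / 2)   FROM (SELECT generate_series(1, 1000000)::numeric AS x) t;
-- multiplication of a numeric value by a numeric constant
SELECT sum(x * 0.5) FROM (SELECT generate_series(1, 1000000)::numeric AS x) t;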
You should also take a serious look at your data model.

In table customer you use data types like char(30), which occupy 30 bytes in the row even when you store just 'X'. Using varchar(30) data types is more efficient when many strings are shorter than the declared maximum width, because they take up less space and therefore require fewer page reads (and writes for the intermediate data).
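If you decide to make that change, a conversion could look roughly like this (a sketch only, not part of the original answer; it assumes the table can be rewritten in place and uses rtrim() to drop the blanks that the char() padding added):

-- rewrite the fixed-width char columns of customer as varchar
ALTER TABLE customer
    ALTER COLUMN cust_first_name    TYPE varchar(20) USING rtrim(cust_first_name),
    ALTER COLUMN cust_last_name     TYPE varchar(30) USING rtrim(cust_last_name),
    ALTER COLUMN cust_login         TYPE varchar(13) USING rtrim(cust_login),
    ALTER COLUMN cust_email_address TYPE varchar(50) USING rtrim(cust_email_address);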
Table stock uses numeric(7,2) for the prices. The numeric data types give exact results when doing many repeated operations on the data, but they are also very slow. In your scenario the double precision data type will be much faster and just as accurate. For presentation purposes you can round the values to the desired precision.
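For example (an illustrative one-liner, not from the original answer), keep the fast double precision arithmetic and round only when displaying the result:

-- cast to numeric just for the final rounding step
SELECT round((12.3456789::double precision)::numeric, 2);   -- 12.35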
As a suggestion, create a table stock_f with double precision data types instead of numeric, copy all the data over from stock into stock_f and run the query against that table.
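A sketch of that experiment might look like this (assuming the column list from the stock definition above; the indexes are recreated with PostgreSQL's default names, so treat the whole block as illustrative):

-- copy stock into a table with double precision price columns
CREATE TABLE stock_f AS
SELECT stock_soldate_k,
       stock_soltime_k,
       stock_customer_k,
       stock_ds_price::double precision AS stock_ds_price,
       stock_es_price::double precision AS stock_es_price,
       stock_ls_price::double precision AS stock_ls_price,
       stock_ws_price::double precision AS stock_ws_price
FROM stock;

-- recreate the indexes the query relies on and refresh the statistics
CREATE INDEX ON stock_f (stock_soldate_k);
CREATE INDEX ON stock_f (stock_customer_k);
ANALYZE stock_f;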
Answer 1 (score: 1)
Try this:
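(The code block of this answer did not survive in this copy. As a hypothetical illustration only, and not the answerer's actual code, a query that performs the grouping before joining to customer could be written like this:)

SELECT c.cust_id            AS customer_id,
       c.cust_first_name    AS customer_first_name,
       c.cust_last_name     AS customer_last_name,
       c.cust_prf           AS customer_prf,
       c.cust_birth_country AS customer_birth_country,
       c.cust_login         AS customer_login,
       c.cust_email_address AS customer_email_address,
       g.ddyear,
       g.total_yr,
       's' AS stock_type
FROM (
    -- aggregate per customer and year before touching customer
    SELECT s.stock_customer_k,
           d.date_year AS ddyear,
           sum((s.stock_ls_price - s.stock_ws_price - s.stock_ds_price + s.stock_es_price) / 2) AS total_yr
    FROM stock s
    JOIN date d ON d.date_k = s.stock_soldate_k
    GROUP BY s.stock_customer_k, d.date_year
) g
JOIN customer c ON c.customer_k = g.stock_customer_k;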
This query anticipates the grouping over the join.