I have a Rails 3 application with 5 tables nested 2 levels deep (table1 has many > table2 has many > table3) holding a lot of information. Think of it as a website visitor tracking system: a lot of data is saved, it needs to be saved fast, and when we retrieve it for display we run a lot of queries, because counts are computed to pull the data.
I built the application without thinking much about SQL, just to get it going, figuring I would start optimizing the database part once there was real data to work with.
I now have about 1 million records total across all my tables, and I think it's time to start optimizing, since response times have reached 1 second per request.
My Rails app runs a separate query for each count, without any joins — just the default behavior, like user.websites.hits (select the user, then another select to get the websites, then one select per website to count its visitors). In total I think it issues about 80 queries to produce my page's results (I know...) and everything I need, so I wrote a single query that fetches all the results in one request.
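The N+1 pattern above versus a single aggregate query can be sketched with a throwaway in-memory database. This is a minimal illustration, not the questioner's actual schema: the `websites`/`hits` table names are invented here to mirror the user.websites.hits chain, and Python's stdlib sqlite3 stands in for Rails/Postgres.

```python
import sqlite3

# Hypothetical miniature schema (names invented for illustration),
# mirroring the nested has_many chain: users -> websites -> hits.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE websites (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE hits (id INTEGER PRIMARY KEY, website_id INTEGER);
INSERT INTO websites VALUES (1, 2), (2, 2);
INSERT INTO hits VALUES (1, 1), (2, 1), (3, 2);
""")

# N+1 style: one query to list the websites, then one COUNT query
# per website -- this is what the default Rails behavior does.
website_ids = [r[0] for r in conn.execute(
    "SELECT id FROM websites WHERE user_id = 2")]
n_plus_1 = {wid: conn.execute(
    "SELECT COUNT(*) FROM hits WHERE website_id = ?", (wid,)).fetchone()[0]
    for wid in website_ids}

# Single aggregate query: all counts in one round trip.
single = dict(conn.execute("""
    SELECT w.id, COUNT(h.id)
    FROM websites w LEFT JOIN hits h ON h.website_id = w.id
    WHERE w.user_id = 2
    GROUP BY w.id
"""))

print(n_plus_1 == single)  # True -- both give {1: 2, 2: 1}
```

Both approaches return the same counts; the difference is round trips, which is exactly the 80-queries-versus-one trade-off described above.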
The problem is that when I run that query in my database admin tool it takes about 2 seconds, while the page manages to run the 80 queries, load the templates and assets, and render in 1.1 seconds.
I'm no database professional, so: is my query bad, or is it sometimes better not to join across as many tables as I do? If my data keeps growing this way, will my join query become relatively faster, or will both approaches just keep getting slower?
I have indexes on all the join columns and the WHERE fields of that query, so I don't think that's the problem.
I've considered caching, but at 1 million small records I feel it's too early to resort to that.
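For what it's worth, one common middle ground before full caching is a Rails-style counter cache: keep a count column on the parent row and maintain it at write time, so the read path never aggregates. A minimal sketch, again using stdlib sqlite3 and a database trigger purely as an illustration (in Rails this would be the counter_cache option on the association; the column and trigger names here are invented):

```python
import sqlite3

# Counter-cache idea in miniature: a visits_count column on channels,
# kept up to date by a trigger at insert time. Reads then cost a
# single indexed lookup instead of a COUNT over the child table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE channels (id INTEGER PRIMARY KEY, visits_count INTEGER DEFAULT 0);
CREATE TABLE visits (id INTEGER PRIMARY KEY, channel_id INTEGER);
CREATE TRIGGER bump_visits AFTER INSERT ON visits BEGIN
    UPDATE channels SET visits_count = visits_count + 1
    WHERE id = NEW.channel_id;
END;
INSERT INTO channels (id) VALUES (1);
INSERT INTO visits (channel_id) VALUES (1), (1), (1);
""")

count = conn.execute(
    "SELECT visits_count FROM channels WHERE id = 1").fetchone()[0]
print(count)  # 3, without any COUNT(*) at read time
```

The trade-off is a small extra cost on every insert in exchange for constant-time counts on every read, which suits write-heavy tracking data that is read on dashboards.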
Any suggestions?
domain -> has_many: channels(we use it for split testing)
channel -> has_many: visits, visitors (unique visits by ip), sales
product -> has_many: visits, visitors (unique visits by ip), sales
The query tries to get the domains which includes:
domain_name,
channels_count,
visits_count,
visitors_count,
sales_count and
products_count via the visits table
ACTUAL QUERY:
SELECT
domains.id,
domains.domain,
COUNT(distinct kc.id) AS channels_count,
COUNT(distinct kv.id) AS visits_count,
COUNT(distinct kv.ip_address) AS visitors_count,
COUNT(distinct kp.id) AS products_count,
COUNT(distinct ks.id) AS sales_count
FROM
domains
LEFT JOIN
channels AS kc ON domains.id=kc.domain_id
LEFT JOIN
visits AS kv ON kc.id=kv.channel_id
LEFT JOIN
products AS kp ON kv.product_id=kp.id
LEFT JOIN
sales AS ks ON kc.id=ks.channel_id
WHERE
(domains.user_id=2)
GROUP BY
domains.id
LIMIT 20
OFFSET 0
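A side note on why every COUNT in the query above needs DISTINCT: joining several independent child tables off the same parent multiplies their rows into a cross product, which is also why the plan below materializes ~100k rows for only 20 domains. A minimal sqlite3 sketch of the effect, using invented one-channel data (not the questioner's real tables), plus one possible rewrite that aggregates each child table separately before joining:

```python
import sqlite3

# Miniature fanout demo: one channel with 3 visits and 2 sales.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE channels (id INTEGER PRIMARY KEY, domain_id INTEGER);
CREATE TABLE visits (id INTEGER PRIMARY KEY, channel_id INTEGER);
CREATE TABLE sales  (id INTEGER PRIMARY KEY, channel_id INTEGER);
INSERT INTO channels VALUES (1, 1);
INSERT INTO visits VALUES (1, 1), (2, 1), (3, 1);
INSERT INTO sales  VALUES (1, 1), (2, 1);
""")

# Joining both child tables at once builds 3 x 2 = 6 rows, so plain
# COUNT (without DISTINCT) is inflated for both columns.
naive = conn.execute("""
    SELECT COUNT(v.id), COUNT(s.id)
    FROM channels c
    LEFT JOIN visits v ON v.channel_id = c.id
    LEFT JOIN sales  s ON s.channel_id = c.id
""").fetchone()

# One alternative: aggregate each child table on its own, then join
# the small per-channel results -- no cross product is ever built.
fixed = conn.execute("""
    SELECT v.n, s.n
    FROM channels c
    LEFT JOIN (SELECT channel_id, COUNT(*) AS n FROM visits
               GROUP BY channel_id) v ON v.channel_id = c.id
    LEFT JOIN (SELECT channel_id, COUNT(*) AS n FROM sales
               GROUP BY channel_id) s ON s.channel_id = c.id
""").fetchone()

print(naive)  # (6, 6) -- inflated by the cross product
print(fixed)  # (3, 2) -- the real counts
```

Rewriting the big query in this per-branch-aggregate style (or with DISTINCT, as the original does) keeps the counts correct; the per-branch version additionally avoids building the large intermediate result that the plan below spends most of its time on.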
"QUERY PLAN"
"Limit (cost=7449.20..18656.41 rows=20 width=50) (actual time=947.837..5093.929 rows=20 loops=1)"
" -> GroupAggregate (cost=7449.20..20897.86 rows=24 width=50) (actual time=947.832..5093.845 rows=20 loops=1)"
" -> Merge Left Join (cost=7449.20..17367.45 rows=282413 width=50) (actual time=947.463..4661.418 rows=99940 loops=1)"
" Merge Cond: (domains.id = kc.domain_id)"
" Filter: (kc.deleted_at IS NULL)"
" -> Index Scan using domains_pkey on domains (cost=0.00..12.67 rows=24 width=30) (actual time=0.022..0.146 rows=21 loops=1)"
" Filter: ((deleted_at IS NULL) AND (user_id = 2))"
" -> Materialize (cost=7449.20..16619.27 rows=58836 width=32) (actual time=947.430..4277.029 rows=99923 loops=1)"
" -> Nested Loop Left Join (cost=7449.20..16472.18 rows=58836 width=32) (actual time=947.424..3872.057 rows=99923 loops=1)"
" Join Filter: (kc.id = kv.channel_id)"
" -> Index Scan using index_channels_on_domain_id on channels kc (cost=0.00..12.33 rows=5 width=16) (actual time=0.008..0.090 rows=5 loops=1)"
" -> Materialize (cost=7449.20..10814.25 rows=58836 width=24) (actual time=189.470..536.745 rows=99923 loops=5)"
" -> Hash Right Join (cost=7449.20..10175.07 rows=58836 width=24) (actual time=947.296..1446.256 rows=99923 loops=1)"
" Hash Cond: (ks.product_id = kp.id)"
" -> Seq Scan on sales ks (cost=0.00..1082.22 rows=59022 width=8) (actual time=0.027..119.767 rows=59022 loops=1)"
" -> Hash (cost=6368.75..6368.75 rows=58836 width=20) (actual time=947.213..947.213 rows=58836 loops=1)"
" Buckets: 2048 Batches: 4 Memory Usage: 808kB"
" -> Hash Left Join (cost=3151.22..6368.75 rows=58836 width=20) (actual time=376.685..817.777 rows=58836 loops=1)"
" Hash Cond: (kv.product_id = kp.id)"
" -> Seq Scan on visits kv (cost=0.00..1079.36 rows=58836 width=20) (actual time=0.011..135.584 rows=58836 loops=1)"
" -> Hash (cost=1704.43..1704.43 rows=88143 width=4) (actual time=376.537..376.537 rows=88143 loops=1)"
" Buckets: 4096 Batches: 4 Memory Usage: 785kB"
" -> Seq Scan on products kp (cost=0.00..1704.43 rows=88143 width=4) (actual time=0.006..187.174 rows=88143 loops=1)"
"Total runtime: 5096.723 ms"
Answer 0 (score: 3)
1 million records is not a lot, and joining 5 tables is an easy task for a database. The indexes are fine, but are they actually being used? What does EXPLAIN ANALYZE tell you about the query? And what about configuration? The default configuration is only a starting point; it is not tuned for best performance across all kinds of workloads.
But don't worry about a few joins — relational databases are built for them.
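The answer's "are the indexes actually used?" check can be automated. In Postgres that means reading EXPLAIN ANALYZE output like the plan above; a minimal stand-in using stdlib sqlite3 (whose analog is EXPLAIN QUERY PLAN) shows the idea, with an invented index name in Rails' naming style:

```python
import sqlite3

# Ask the planner whether a query hits the index instead of assuming
# it does. SQLite's EXPLAIN QUERY PLAN plays the role that Postgres'
# EXPLAIN ANALYZE plays in the question.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE domains (id INTEGER PRIMARY KEY, user_id INTEGER, domain TEXT)")
conn.execute(
    "CREATE INDEX index_domains_on_user_id ON domains (user_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM domains WHERE user_id = 2"
).fetchall()
detail = plan[0][-1]
print(detail)  # e.g. "SEARCH domains USING INDEX index_domains_on_user_id (user_id=?)"
assert "index_domains_on_user_id" in detail
```

Seeing "Seq Scan" (Postgres) or "SCAN" (SQLite) where you expected an index search is the signal the answer is pointing at: the index exists but the planner isn't using it.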