We use MySQL as our database.
The following queries run against a MySQL table with roughly 25 million records. I have pasted the queries below. They run far too slowly, and I would like to know whether better composite indexes could improve the situation.
Any ideas on what the best composite indexes would be?
Please suggest the composite indexes these queries need.
First query:
EXPLAIN SELECT log_type,
count(DISTINCT subscriber_id) AS distinct_count,
count(*) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
AND campaign_id='12345'
AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type
EXPLAIN output for the above query:
+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+
| 1 | SIMPLE | campaign_logs | index_merge | campaign_id_index,domain_index,log_type_index,log_time_index | campaign_id_index,domain_index | 153,153 | NULL | 35683 | Using intersect(campaign_id_index,domain_index); Using where; Using filesort |
+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+
Second query:
SELECT campaign_id
, subscriber_id
, campaign_name
, log_time
, log_type
, message
, UNIX_TIMESTAMP(log_time) AS time
FROM campaign_logs
WHERE domain = 'xxx'
AND log_type = 'EMAIL_OPENED'
ORDER
BY log_time DESC
LIMIT 20;
EXPLAIN output for the above query:
+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+
| 1 | SIMPLE | campaign_logs | index_merge | domain_index,log_type_index | domain_index,log_type_index | 153,153 | NULL | 118392 | Using intersect(domain_index,log_type_index); Using where; Using filesort |
+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+
Third query:
EXPLAIN SELECT *, UNIX_TIMESTAMP(log_time) AS time FROM stats.campaign_logs WHERE domain = 'xxx' AND log_type <> 'EMAIL_SLEEP' AND subscriber_id = '123' ORDER BY log_time DESC LIMIT 100
EXPLAIN output for the above query:
+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
| 1 | SIMPLE | campaign_logs | ref | subscriber_id_index,domain_index,log_type_index | subscriber_id_index | 153 | const | 35 | Using where; Using filesort |
+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
Let me know if there are any other details I can provide.
UPDATE (April 22, 2016): We are now adding a new column, node_id, to the existing table. A campaign can have multiple nodes, and every report we currently generate at the campaign level now also needs to be generated at the node level.
For example:
SELECT log_type,
count(DISTINCT subscriber_id) AS distinct_count,
count(*) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
AND campaign_id='12345'
AND node_id = '34567'
AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type
CREATE TABLE `camp_logs` (
`domain` varchar(50) DEFAULT NULL,
`campaign_id` varchar(50) DEFAULT NULL,
`subscriber_id` varchar(50) DEFAULT NULL,
`message` varchar(21000) DEFAULT NULL,
`log_time` datetime DEFAULT NULL,
`log_type` varchar(50) DEFAULT NULL,
`level` varchar(50) DEFAULT NULL,
`campaign_name` varchar(500) DEFAULT NULL,
KEY `subscriber_id_index` (`subscriber_id`),
KEY `log_type_index` (`log_type`),
KEY `log_time_index` (`log_time`),
KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`),
KEY `domain_logtype_logtime_index` (`domain`,`log_type`,`log_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Size problem:
Because we have two composite indexes, the index file is growing quickly. Current table statistics: data size 30 GB, index size 35 GB.
For node_id reporting, we want to change the existing composite index
from
KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`)
to
KEY `campid_domain_logtype_logtime_subid_nodeid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`,`node_id`)
Given the size concern above, could you suggest suitable composite indexes for both campaign-level and node-level reports?
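For illustration, the schema change being considered could be expressed roughly as below (a sketch; the varchar(50) type for node_id is an assumption, since the question does not show the column definition):

ALTER TABLE stats.campaign_logs
  ADD COLUMN node_id varchar(50) DEFAULT NULL,   -- type assumed to match the other id columns
  DROP INDEX campid_domain_logtype_logtime_subid_index,
  ADD INDEX campid_domain_logtype_logtime_subid_nodeid_index
      (campaign_id, domain, log_type, log_time, subscriber_id, node_id);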
Answer 0 (score: 2)
This is your first query:
SELECT A.log_type, count(*) as distinct_count, sum(A.total_count) as total_count
from (SELECT log_type, count(subscriber_id) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx' AND campaign_id = '12345' AND
log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED') AND
DATE(CONVERT_TZ(log_time,'+00:00','+05:30')) BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
GROUP BY subscriber_id,log_type) A
GROUP BY A.log_type;
It is better written as:
SELECT log_type, count(DISTINCT subscriber_id) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx' AND campaign_id = '12345' AND
log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED') AND
DATE(CONVERT_TZ(log_time, '+00:00', '+05:30')) BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
GROUP BY log_type;
The best index is probably campaign_logs(domain, campaign_id, log_type, log_time, subscriber_id). This is a covering index for the query; the first three key columns should be used for the WHERE filtering.
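As concrete DDL this might look roughly like the statement below (a sketch; the index name is invented for illustration):

ALTER TABLE stats.campaign_logs
  ADD INDEX domain_campid_logtype_logtime_subid_index
      (domain, campaign_id, log_type, log_time, subscriber_id);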
Answer 1 (score: 0)
Rewrite the first query as follows:
SELECT log_type,
count(DISTINCT subscriber_id) AS distinct_count,
count(*) as total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
AND campaign_id='12345'
AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
AND DATE(CONVERT_TZ(log_time,'+00:00','+05:30'))
BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
GROUP BY log_type
It should produce the same result, but with no inner query and a single GROUP BY. The table already contains all the indexes it needs.
The last condition (the DATE(...) expression) cannot use any index, because a value has to be computed from log_time for every row. Rewrite it to compare the bare value of log_time against computed constants, applying CONVERT_TZ() to the range endpoints to perform the reverse conversion. That way the indexed column log_time is compared against constant values and the full power of the index can be used:
AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
A multi-column index on the columns domain and log_type (in that order) could help speed up the query; both already appear in Using intersect(domain_index,log_type_index) in the Extra column of the EXPLAIN output.
If you create such an index, drop domain_index. An index on the columns domain and log_type (in that order) can also serve as an index on domain alone, so MySQL can use it instead of domain_index. Having both indexes would make write operations slower without providing any benefit.
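Both changes can be made in one statement, roughly as below (a sketch; domain_index is the name shown in the EXPLAIN output, and the new index name is invented for illustration):

ALTER TABLE stats.campaign_logs
  ADD INDEX domain_logtype_index (domain, log_type),
  DROP INDEX domain_index;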
Answer 2 (score: 0)
For query 1, @Gordon Linoff's index is very good (at least after rewriting the SELECT):
INDEX(domain, campaign_id, log_type, log_time, subscriber_id)
INDEX(campaign_id, domain, log_type, log_time, subscriber_id) -- equally good.
For query 2: "index_merge" suggests you might benefit from a "composite" index. The second query would best be handled by either of the following, which (I think) would compute the result set with only 20 reads instead of the 118K estimated by EXPLAIN.
INDEX(domain, log_type, log_time)
INDEX(log_type, domain, log_time)
Keep in mind that when you add indexes, you should get rid of the ones they make redundant. For example, INDEX(domain, ...) makes KEY domain_index (domain) redundant, so the latter can be DROPped.
Overall, I would recommend:
DROP INDEX(campaign_id_index),
ADD INDEX(campaign_id, domain, log_type, log_time, subscriber_id),
DROP INDEX(domain),
ADD INDEX(domain, log_type, log_time)
PRIMARY KEY(id, log_time) -- if you also add PARTITIONing; see below
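Spelled out as runnable MySQL, that recommendation might look roughly like this (a sketch; the single-column index names come from the EXPLAIN output, the composite index names mirror those already shown in the CREATE TABLE, and the PRIMARY KEY / PARTITIONing step is omitted because the partitioning scheme is not shown here):

-- skip either ADD if an index with that definition already exists
ALTER TABLE stats.campaign_logs
  DROP INDEX campaign_id_index,
  DROP INDEX domain_index,
  ADD INDEX campid_domain_logtype_logtime_subid_index
      (campaign_id, domain, log_type, log_time, subscriber_id),
  ADD INDEX domain_logtype_logtime_index (domain, log_type, log_time);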
Other suggestions:
InnoDB tables must have a PRIMARY KEY. (If you do not define one, a hidden 6-byte one is provided for you.) Suggest ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY.
Consider changing log_type from a bulky VARCHAR to an ENUM.
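As concrete statements, those two suggestions might look roughly like this (a sketch; the ENUM value list is assumed from the log_type values used in the queries and must be extended to cover every value actually stored):

ALTER TABLE stats.campaign_logs
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

ALTER TABLE stats.campaign_logs
  MODIFY log_type ENUM('EMAIL_SENT','EMAIL_CLICKED','EMAIL_OPENED','UNSUBSCRIBED','EMAIL_SLEEP')
      DEFAULT NULL;  -- value list assumed; add all real log_type values before running this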
Answer 3 (score: 0)
Looking at the 3 queries together, I see little sharing of indexes. Instead, I would vote for these indexes (assuming you add id ... AUTO_INCREMENT):
PRIMARY KEY(id)
INDEX(campaign_id, domain, log_time)
INDEX(subscriber_id, domain)
INDEX(domain, log_type, log_time)
INDEX(log_time)
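A rough DDL sketch of the secondary indexes above (index names invented for illustration; PRIMARY KEY(id) would come from adding the id column as suggested earlier, and any ADD whose equivalent index already exists in the table can be skipped):

ALTER TABLE stats.campaign_logs
  ADD INDEX campid_domain_logtime_index (campaign_id, domain, log_time),
  ADD INDEX subid_domain_index (subscriber_id, domain),
  ADD INDEX domain_logtype_logtime_index2 (domain, log_type, log_time),
  ADD INDEX log_time_index2 (log_time);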
You should still consider the other suggestions (ENUM, INT, etc.). They will shrink the disk footprint of both the data and the indexes. Smaller -> more cacheable -> less I/O -> faster.
INDEX(log_time) will not necessarily be used by any of the queries, but I kept it in case the optimizer decides to target the ORDER BY rather than the WHERE. I do not have enough information to predict which; in fact, I suspect the optimizer may pick one index on one occasion and the other on another.
The first two columns of the 3 "composite" indexes could actually be in either order. I chose a mix so that their leading columns differ, which may incidentally help query #4.
This answer is more "art" than "science"; I think it is about as good as it is going to get.