I am using InnoDB.
QUERY, EXPLAIN & INDEXES
SELECT stories.*, count(comments.id) AS comments,
       GROUP_CONCAT( DISTINCT classifications2.name SEPARATOR ';' ) AS classifications_name,
       GROUP_CONCAT( DISTINCT images.id ORDER BY images.position, images.id SEPARATOR ';' ) AS images_id,
       GROUP_CONCAT( DISTINCT images.caption ORDER BY images.position, images.id SEPARATOR ';' ) AS images_caption,
       GROUP_CONCAT( DISTINCT images.thumbnail ORDER BY images.position, images.id SEPARATOR ';' ) AS images_thumbnail,
       GROUP_CONCAT( DISTINCT images.medium ORDER BY images.position, images.id SEPARATOR ';' ) AS images_medium,
       GROUP_CONCAT( DISTINCT images.large ORDER BY images.position, images.id SEPARATOR ';' ) AS images_large,
       GROUP_CONCAT( DISTINCT users.id ORDER BY users.id SEPARATOR ';' ) AS authors_id,
       GROUP_CONCAT( DISTINCT users.display_name ORDER BY users.id SEPARATOR ';' ) AS authors_display_name,
       GROUP_CONCAT( DISTINCT users.url ORDER BY users.id SEPARATOR ';' ) AS authors_url
FROM stories
LEFT JOIN classifications ON stories.id = classifications.story_id
LEFT JOIN classifications AS classifications2 ON stories.id = classifications2.story_id
LEFT JOIN comments ON stories.id = comments.story_id
LEFT JOIN image_story ON stories.id = image_story.story_id
LEFT JOIN images ON images.id = image_story.`image_id`
LEFT JOIN author_story ON stories.id = author_story.story_id
LEFT JOIN users ON users.id = author_story.author_id
WHERE classifications.`name` LIKE 'Home:Top%' AND stories.status = 1
GROUP BY stories.id
ORDER BY classifications.`name`, classifications.`position`

| id | select_type | table            | type   | possible_keys | key      | key_len | ref                    | rows   | Extra                                        |
| 1  | SIMPLE      | stories          | ref    | status        | status   | 1       | const                  | 434792 | Using where; Using temporary; Using filesort |
| 1  | SIMPLE      | classifications  | ref    | story_id      | story_id | 4       | stories.id             | 1      | Using where                                  |
| 1  | SIMPLE      | classifications2 | ref    | story_id      | story_id | 4       | stories.id             | 1      | Using where                                  |
| 1  | SIMPLE      | comments         | ref    | story_id      | story_id | 8       | stories.id             | 6      | Using where; Using index                     |
| 1  | SIMPLE      | image_story      | ref    | story_id      | story_id | 4       | stories.id             | 1      | NULL                                         |
| 1  | SIMPLE      | images           | eq_ref | PRIMARY       | PRIMARY  | 4       | image_story.image_id   | 1      | NULL                                         |
| 1  | SIMPLE      | author_story     | ref    | story_id      | story_id | 4       | stories.id             | 1      | Using where                                  |
| 1  | SIMPLE      | users            | eq_ref | PRIMARY       | PRIMARY  | 4       | author_story.author_id | 1      | Using where                                  |

| Table           | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
| stories         | 0          | PRIMARY     | 1            | id          | A         | 869584      | NULL     | NULL   |      | BTREE      |
| stories         | 1          | created_at  | 1            | created_at  | A         | 434792      | NULL     | NULL   |      | BTREE      |
| stories         | 1          | source      | 1            | source      | A         | 2           | NULL     | NULL   | YES  | BTREE      |
| stories         | 1          | source_id   | 1            | source_id   | A         | 869584      | NULL     | NULL   | YES  | BTREE      |
| stories         | 1          | type        | 1            | type        | A         | 2           | NULL     | NULL   |      | BTREE      |
| stories         | 1          | status      | 1            | status      | A         | 2           | NULL     | NULL   |      | BTREE      |
| stories         | 1          | type_status | 1            | type        | A         | 2           | NULL     | NULL   |      | BTREE      |
| stories         | 1          | type_status | 2            | status      | A         | 2           | NULL     | NULL   |      | BTREE      |
| classifications | 0          | PRIMARY     | 1            | id          | A         | 207         | NULL     | NULL   |      | BTREE      |
| classifications | 1          | story_id    | 1            | story_id    | A         | 207         | NULL     | NULL   |      | BTREE      |
| classifications | 1          | name        | 1            | name        | A         | 103         | NULL     | NULL   |      | BTREE      |
| classifications | 1          | name        | 2            | position    | A         | 207         | NULL     | NULL   | YES  | BTREE      |
| comments        | 0          | PRIMARY     | 1            | id          | A         | 239336      | NULL     | NULL   |      | BTREE      |
| comments        | 1          | status      | 1            | status      | A         | 2           | NULL     | NULL   |      | BTREE      |
| comments        | 1          | date        | 1            | date        | A         | 239336      | NULL     | NULL   |      | BTREE      |
| comments        | 1          | story_id    | 1            | story_id    | A         | 39889       | NULL     | NULL   |      | BTREE      |
QUERY TIMES

The query averages 0.035 seconds per run.
If I remove just the GROUP BY, the average time drops to 0.007.
If I remove just the stories.status = 1 filter, the average time drops to 0.025. That one seems like it should be easy to optimize.
If I remove just the LIKE filter and the ORDER BY clause, the average time drops to 0.006.
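The averages above come from repeated runs. For reference, a minimal sketch of that kind of timing harness — shown here against an in-memory SQLite database as a stand-in for the actual MySQL connection, which is an assumption of this example:

```python
import sqlite3
import time

def average_query_time(conn, sql, runs=20):
    """Run `sql` several times and return the mean wall-clock seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql).fetchall()  # fetch so the work actually happens
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Stand-in schema; the real measurements were taken against MySQL/InnoDB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stories (id INTEGER PRIMARY KEY, status INTEGER)")
conn.executemany("INSERT INTO stories VALUES (?, ?)",
                 [(i, i % 2) for i in range(1000)])

avg = average_query_time(conn, "SELECT count(*) FROM stories WHERE status = 1")
```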
UPDATE 2013-04-13: I've made improvements based on the various answers. I added indexes to author_story and image_story, which seems to have improved the 0.025-second query, and for some strange reason the EXPLAIN plan now looks much better. At this point, dropping the ORDER BY cuts the query to 0.015 seconds, while dropping both the ORDER BY and the GROUP BY improves it to 0.006. Are those the two things I should focus on now? I can move the ORDER BY into application logic if necessary.

Here are the revised EXPLAIN and INDEXES:
| id | select_type | table            | type   | possible_keys                   | key      | key_len | ref                      | rows | Extra                                                  |
| 1  | SIMPLE      | classifications  | range  | story_id,name                   | name     | 102     | NULL                     | 14   | Using index condition; Using temporary; Using filesort |
| 1  | SIMPLE      | stories          | eq_ref | PRIMARY,status                  | PRIMARY  | 4       | classifications.story_id | 1    | Using where                                            |
| 1  | SIMPLE      | classifications2 | ref    | story_id                        | story_id | 4       | stories.id               | 1    | Using where                                            |
| 1  | SIMPLE      | author_story     | ref    | author_id,story_id,author_story | story_id | 4       | stories.id               | 1    | Using index condition                                  |
| 1  | SIMPLE      | users            | eq_ref | PRIMARY                         | PRIMARY  | 4       | author_story.author_id   | 1    | Using where                                            |
| 1  | SIMPLE      | comments         | ref    | story_id                        | story_id | 8       | stories.id               | 8    | Using where; Using index                               |
| 1  | SIMPLE      | image_story      | ref    | story_id,story_id_2             | story_id | 4       | stories.id               | 1    | NULL                                                   |
| 1  | SIMPLE      | images           | eq_ref | PRIMARY,position_id             | PRIMARY  | 4       | image_story.image_id     | 1    | NULL                                                   |

| Table           | Non_unique | Key_name           | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
| author_story    | 0          | PRIMARY            | 1            | id          | A         | 220116      | NULL     | NULL   |      | BTREE      |         |               |
| author_story    | 0          | story_author       | 1            | story_id    | A         | 220116      | NULL     | NULL   |      | BTREE      |         |               |
| author_story    | 0          | story_author       | 2            | author_id   | A         | 220116      | NULL     | NULL   |      | BTREE      |         |               |
| author_story    | 1          | author_id          | 1            | author_id   | A         | 2179        | NULL     | NULL   |      | BTREE      |         |               |
| author_story    | 1          | story_id           | 1            | story_id    | A         | 220116      | NULL     | NULL   |      | BTREE      |         |               |
| image_story     | 0          | PRIMARY            | 1            | id          | A         | 148902      | NULL     | NULL   |      | BTREE      |         |               |
| image_story     | 0          | story_image        | 1            | story_id    | A         | 148902      | NULL     | NULL   |      | BTREE      |         |               |
| image_story     | 0          | story_image        | 2            | image_id    | A         | 148902      | NULL     | NULL   |      | BTREE      |         |               |
| image_story     | 1          | story_id           | 1            | story_id    | A         | 148902      | NULL     | NULL   |      | BTREE      |         |               |
| image_story     | 1          | image_id           | 1            | image_id    | A         | 148902      | NULL     | NULL   |      | BTREE      |         |               |
| classifications | 0          | PRIMARY            | 1            | id          | A         | 257         | NULL     | NULL   |      | BTREE      |         |               |
| classifications | 1          | story_id           | 1            | story_id    | A         | 257         | NULL     | NULL   |      | BTREE      |         |               |
| classifications | 1          | name               | 1            | name        | A         | 128         | NULL     | NULL   |      | BTREE      |         |               |
| classifications | 1          | name               | 2            | position    | A         | 257         | NULL     | NULL   | YES  | BTREE      |         |               |
| stories         | 0          | PRIMARY            | 1            | id          | A         | 962570      | NULL     | NULL   |      | BTREE      |         |               |
| stories         | 1          | created_at         | 1            | created_at  | A         | 481285      | NULL     | NULL   |      | BTREE      |         |               |
| stories         | 1          | source             | 1            | source      | A         | 4           | NULL     | NULL   | YES  | BTREE      |         |               |
| stories         | 1          | source_id          | 1            | source_id   | A         | 962570      | NULL     | NULL   | YES  | BTREE      |         |               |
| stories         | 1          | type               | 1            | type        | A         | 2           | NULL     | NULL   |      | BTREE      |         |               |
| stories         | 1          | status             | 1            | status      | A         | 4           | NULL     | NULL   |      | BTREE      |         |               |
| stories         | 1          | type_status        | 1            | type        | A         | 2           | NULL     | NULL   |      | BTREE      |         |               |
| stories         | 1          | type_status        | 2            | status      | A         | 6           | NULL     | NULL   |      | BTREE      |         |               |
| comments        | 0          | PRIMARY            | 1            | id          | A         | 232559      | NULL     | NULL   |      | BTREE      |         |               |
| comments        | 1          | status             | 1            | status      | A         | 6           | NULL     | NULL   |      | BTREE      |         |               |
| comments        | 1          | date               | 1            | date        | A         | 232559      | NULL     | NULL   |      | BTREE      |         |               |
| comments        | 1          | story_id           | 1            | story_id    | A         | 29069       | NULL     | NULL   |      | BTREE      |         |               |
| images          | 0          | PRIMARY            | 1            | id          | A         | 147206      | NULL     | NULL   |      | BTREE      |         |               |
| images          | 0          | source_id          | 1            | source_id   | A         | 147206      | NULL     | NULL   | YES  | BTREE      |         |               |
| images          | 1          | position           | 1            | position    | A         | 4           | NULL     | NULL   |      | BTREE      |         |               |
| images          | 1          | position_id        | 1            | id          | A         | 147206      | NULL     | NULL   |      | BTREE      |         |               |
| images          | 1          | position_id        | 2            | position    | A         | 147206      | NULL     | NULL   |      | BTREE      |         |               |
| users           | 0          | PRIMARY            | 1            | id          | A         | 981         | NULL     | NULL   |      | BTREE      |         |               |
| users           | 0          | users_email_unique | 1            | email       | A         | 981         | NULL     | NULL   |      | BTREE      |         |               |

SELECT stories.*, count(comments.id) AS comments,
       GROUP_CONCAT(DISTINCT users.id ORDER BY users.id SEPARATOR ';') AS authors_id,
       GROUP_CONCAT(DISTINCT users.display_name ORDER BY users.id SEPARATOR ';') AS authors_display_name,
       GROUP_CONCAT(DISTINCT users.url ORDER BY users.id SEPARATOR ';') AS authors_url,
       GROUP_CONCAT(DISTINCT classifications2.name SEPARATOR ';') AS classifications_name,
       GROUP_CONCAT(DISTINCT images.id ORDER BY images.position,images.id SEPARATOR ';') AS images_id,
       GROUP_CONCAT(DISTINCT images.caption ORDER BY images.position,images.id SEPARATOR ';') AS images_caption,
       GROUP_CONCAT(DISTINCT images.thumbnail ORDER BY images.position,images.id SEPARATOR ';') AS images_thumbnail,
       GROUP_CONCAT(DISTINCT images.medium ORDER BY images.position,images.id SEPARATOR ';') AS images_medium,
       GROUP_CONCAT(DISTINCT images.large ORDER BY images.position,images.id SEPARATOR ';') AS images_large
FROM classifications
INNER JOIN stories ON stories.id = classifications.story_id
LEFT JOIN classifications AS classifications2 ON stories.id = classifications2.story_id
LEFT JOIN comments ON stories.id = comments.story_id
LEFT JOIN image_story ON stories.id = image_story.story_id
LEFT JOIN images ON images.id = image_story.`image_id`
INNER JOIN author_story ON stories.id = author_story.story_id
INNER JOIN users ON users.id = author_story.author_id
WHERE classifications.`name` LIKE 'Home:Top%' AND stories.status = 1
GROUP BY stories.id
ORDER BY NULL
I noticed one other thing. If I don't select stories.content (LONGTEXT) and stories.content_html (LONGTEXT), the query drops from 0.015 seconds to 0.006 seconds. For now I'm considering whether I can do without content and content_html, or replace them with something else.
I've updated the query, indexes, and EXPLAIN in the 2013-04-13 update above rather than re-posting them here, since the changes are minor and incremental. The query is still using filesort. I can't get rid of the GROUP BY, but I have gotten rid of the ORDER BY.
As requested, I removed the story_id indexes from image_story and author_story, since they were redundant. The result was that the EXPLAIN output changed only in what it shows for possible_keys. Unfortunately, it still doesn't show the Using index optimization.
I also changed LONGTEXT to TEXT and am now fetching LEFT(stories.content, 500) instead of stories.content, which made a very large difference in query execution time.
| id | select_type | table            | type   | possible_keys               | key          | key_len | ref                      | rows | Extra                                                               |
| 1  | SIMPLE      | classifications  | ref    | story_id,name,name_position | name         | 102     | const                    | 10   | Using index condition; Using where; Using temporary; Using filesort |
| 1  | SIMPLE      | stories          | eq_ref | PRIMARY,status              | PRIMARY      | 4       | classifications.story_id | 1    | Using where                                                         |
| 1  | SIMPLE      | classifications2 | ref    | story_id                    | story_id     | 4       | stories.id               | 1    | Using where                                                         |
| 1  | SIMPLE      | author_story     | ref    | story_author                | story_author | 4       | stories.id               | 1    | Using where; Using index                                            |
| 1  | SIMPLE      | users            | eq_ref | PRIMARY                     | PRIMARY      | 4       | author_story.author_id   | 1    | Using where                                                         |
| 1  | SIMPLE      | comments         | ref    | story_id                    | story_id     | 8       | stories.id               | 8    | Using where; Using index                                            |
| 1  | SIMPLE      | image_story      | ref    | story_image                 | story_image  | 4       | stories.id               | 1    | Using index                                                         |
| 1  | SIMPLE      | images           | eq_ref | PRIMARY,position_id         | PRIMARY      | 4       | image_story.image_id     | 1    | NULL                                                                |

innodb_buffer_pool_size  134217728

TABLE_NAME   INDEX_LENGTH
image_story  10010624
image_story  4556800
image_story  0

TABLE_NAME        INDEX_NAMES  SIZE
dawn/image_story  story_image  13921
Answer 0 (score: 9)
Two optimization opportunities jump out at me right away:

Convert the OUTER JOIN to an INNER JOIN

Your query is currently scanning 434792 stories, and you ought to be able to narrow that down better, assuming not every story has a classification matching 'Home:Top%'. It would be better to use an index to find the classifications you're looking for, and then look up the matching stories.

But you're using a LEFT OUTER JOIN for classifications, which means all stories will be scanned whether they have a matching classification or not. Then you defeat that by putting a condition on classifications in the WHERE clause, effectively making it mandatory that a classification matching the LIKE pattern exists. So it's no longer an outer join — it's an inner join.

If you put the classifications table first and make the join an inner join, the optimizer will use it to narrow down the search to the stories that have matching classifications.
. . .
FROM
classifications
INNER JOIN stories
ON stories.id = classifications.story_id
. . .
The optimizer should be able to figure out when it's advantageous to reorder the tables, so you may not have to change the order in your query. But you do need to use an INNER JOIN in this case.
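The equivalence described here — a LEFT JOIN whose right table is constrained in the WHERE clause behaves exactly like an INNER JOIN — can be checked directly. A small sketch using SQLite (an assumption: the question is about MySQL, but these join semantics are standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stories (id INTEGER PRIMARY KEY, status INTEGER);
CREATE TABLE classifications (id INTEGER PRIMARY KEY, story_id INTEGER, name TEXT);
INSERT INTO stories VALUES (1, 1), (2, 1), (3, 1);
-- story 3's classification does not match the LIKE pattern
INSERT INTO classifications VALUES
    (10, 1, 'Home:Top:A'), (11, 2, 'Home:Top:B'), (12, 3, 'Other');
""")

# LEFT JOIN, but the WHERE condition on classifications discards
# the NULL-extended rows the outer join would have kept...
left_join = conn.execute("""
    SELECT stories.id FROM stories
    LEFT JOIN classifications ON stories.id = classifications.story_id
    WHERE classifications.name LIKE 'Home:Top%'
    ORDER BY stories.id
""").fetchall()

# ...so it returns exactly what the INNER JOIN returns.
inner_join = conn.execute("""
    SELECT stories.id FROM classifications
    INNER JOIN stories ON stories.id = classifications.story_id
    WHERE classifications.name LIKE 'Home:Top%'
    ORDER BY stories.id
""").fetchall()

assert left_join == inner_join == [(1,), (2,)]
```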
Add compound indexes

Your intersection tables image_story and author_story have no compound indexes. It's often a big advantage to add compound indexes to the intersection tables in a many-to-many relationship, so that they can perform the join and get the "Using index" optimization.
ALTER TABLE image_story ADD UNIQUE KEY imst_st_im (story_id, image_id);
ALTER TABLE author_story ADD UNIQUE KEY aust_st_au (story_id, author_id);
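What the suggested compound keys buy you: with (story_id, image_id) indexed together, a lookup that only touches those two columns can be satisfied from the index alone. A sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's "Using index" (an assumption — plan output is engine-specific, but the covering-index idea carries over):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE image_story (id INTEGER PRIMARY KEY, story_id INTEGER, image_id INTEGER);
CREATE UNIQUE INDEX imst_st_im ON image_story (story_id, image_id);
""")

# Only story_id and image_id are referenced, so the compound index
# covers the query and the table itself never has to be read.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT image_id FROM image_story WHERE story_id = 42
""").fetchall()

plan_text = " ".join(row[-1] for row in plan)
assert "COVERING INDEX imst_st_im" in plan_text
```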
Re your comment and update:

I'm not sure you created the new indexes correctly. Your index dump doesn't show the columns, and according to the updated EXPLAIN the new indexes aren't being used, which is what I would have expected to happen. Using the new indexes would show up as "Using index" in the Extra field of the EXPLAIN, which helps performance.

The output of SHOW CREATE TABLE for each table would be more complete information than the index dump shown (which omits column names).

You may have to run ANALYZE TABLE once on each table after creating the indexes. Also, run the query more than once, to make sure the indexes are loaded into the buffer pool. Is this table InnoDB or MyISAM?

I also notice in your EXPLAIN output that the rows column shows far fewer rows being touched. That's an improvement.

Do you really need the ORDER BY? If you use ORDER BY NULL, you should be able to get rid of "Using filesort", and that may improve performance.
Re your update:

You're still not getting the "Using index" optimization on the image_story and author_story tables. One suggestion I have is to eliminate the redundant indexes:
ALTER TABLE image_story DROP KEY story_id;
ALTER TABLE author_story DROP KEY story_id;
The reason is that any query that can benefit from the single-column index on story_id can also benefit from the two-column index on (story_id, image_id). Eliminating redundant indexes helps the optimizer make better decisions (and saves some space besides). This is the theory behind tools like pt-duplicate-key-checker.
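That leftmost-prefix behavior is easy to observe: drop the single-column index entirely, and a lookup on story_id alone still uses the two-column index. A sketch with SQLite standing in for MySQL (an assumption; both engines apply the leftmost-prefix rule to B-tree indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE image_story (id INTEGER PRIMARY KEY, story_id INTEGER, image_id INTEGER);
CREATE UNIQUE INDEX story_image ON image_story (story_id, image_id);
-- note: no single-column index on story_id
""")

# A lookup on story_id alone still uses the two-column index
# (leftmost-prefix rule), so the single-column index was redundant.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM image_story WHERE story_id = 7"
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
assert "INDEX story_image" in plan_text
```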
I'd also check to make sure your buffer pool is large enough to hold the indexes. You don't want indexes paging in and out of the buffer pool during a query.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size'
Check the size of the indexes for the image_story table:
SELECT TABLE_NAME, INDEX_LENGTH FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'image_story';
And compare that to how much of those indexes currently reside in the buffer pool:
SELECT TABLE_NAME, GROUP_CONCAT(DISTINCT INDEX_NAME) AS INDEX_NAMES, SUM(DATA_SIZE) AS SIZE
FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU
WHERE TABLE_NAME = '`test`.`image_story`' AND INDEX_NAME <> 'PRIMARY'
Naturally, change `test` in the query above to the name of the database your table belongs to.

That information_schema table is new in MySQL 5.6. I'm assuming you're using MySQL 5.6, because your EXPLAIN shows "Using index condition", which is also new in MySQL 5.6.

I wouldn't use LONGTEXT at all, unless I really needed to store very long strings. Keep in mind:
Answer 1 (score: 8)
Since you are using MySQL, you can take advantage of STRAIGHT_JOIN:
STRAIGHT_JOIN forces the optimizer to join the tables in the order in which they are listed in the FROM clause. You can use this to speed up a query if the optimizer joins the tables in nonoptimal order
Another area for improvement is to filter the stories table itself, since you only need rows with status 1. So in the FROM clause, instead of joining the whole stories table, join only the rows you need — your query plan shows 434792 rows being scanned for stories. The same can be done for classifications:

FROM
    (SELECT
        *
     FROM
        STORIES
     WHERE
        STORIES.status = 1) stories
LEFT JOIN
    (SELECT
        *
     FROM
        classifications
     WHERE
        classifications.`name` LIKE 'Home:Top%') classifications
    ON stories.id = classifications.story_id

One more suggestion: you can increase sort_buffer_size, since your plan shows a filesort for the ORDER BY and GROUP BY — but be careful when increasing the buffer size, because it is allocated per session.

Also, if possible, sort your records in the application, since you've already mentioned that removing just the ORDER BY clause cut the time to about 1/6 of the original...

Add an index on image_story.image_id in the image_story table and on author_story.story_id in the author_story table, since these columns are used in the joins.
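The derived-table rewrite can be sanity-checked for equivalence: pre-filtering inside the subqueries must return the same rows as filtering the full tables. Note that in the un-rewritten LEFT JOIN variant the LIKE filter has to live in the ON clause to preserve outer-join semantics. A small sketch, with SQLite as a stand-in for MySQL (an assumption of this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stories (id INTEGER PRIMARY KEY, status INTEGER);
CREATE TABLE classifications (id INTEGER PRIMARY KEY, story_id INTEGER, name TEXT);
INSERT INTO stories VALUES (1, 1), (2, 0), (3, 1);
INSERT INTO classifications VALUES (10, 1, 'Home:Top:A'), (11, 3, 'Other');
""")

# Filters applied against the full tables (LIKE kept in the ON clause
# so the LEFT JOIN still NULL-extends non-matching stories).
direct = conn.execute("""
    SELECT stories.id, classifications.name
    FROM stories
    LEFT JOIN classifications ON stories.id = classifications.story_id
        AND classifications.name LIKE 'Home:Top%'
    WHERE stories.status = 1
    ORDER BY stories.id
""").fetchall()

# Filters pushed down into derived tables, as the answer suggests.
prefiltered = conn.execute("""
    SELECT stories.id, classifications.name
    FROM (SELECT * FROM stories WHERE status = 1) stories
    LEFT JOIN (SELECT * FROM classifications
               WHERE name LIKE 'Home:Top%') classifications
        ON stories.id = classifications.story_id
    ORDER BY stories.id
""").fetchall()

assert direct == prefiltered == [(1, 'Home:Top:A'), (3, None)]
```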
I think you've pretty much optimized the query already, judging by your update...

One more place you could improve things is to use appropriate datatypes, as Bill Karwin mentioned...

You can use the ENUM or TINYINT type for status and other columns that have no scope for growth; it will help optimize your query performance as well as the storage performance of your tables...

Hope this helps...
Answer 2 (score: 6)
Computing

GROUP_CONCAT(DISTINCT classifications2.name SEPARATOR ';')

is likely the most time-consuming operation, because classifications is a big table and the number of rows to work with is multiplied by all the joins.

So I would suggest using a temporary table for that information. Also, to avoid computing the LIKE condition twice (once for the temporary table and once for the "real" query), I would create a temporary table for that as well.

Your original query, in a very simplified version (without the images and users tables, so that it is easier to read), is:
SELECT
stories.*,
count(DISTINCT comments.id) AS comments,
GROUP_CONCAT(DISTINCT classifications2.name ORDER BY 1 SEPARATOR ';' )
AS classifications_name
FROM
stories
LEFT JOIN classifications
ON stories.id = classifications.story_id
LEFT JOIN classifications AS classifications2
ON stories.id = classifications2.story_id
LEFT JOIN comments
ON stories.id = comments.story_id
WHERE
classifications.`name` LIKE 'Home:Top%'
AND stories.status = 1
GROUP BY stories.id
ORDER BY stories.id, classifications.`name`, classifications.`position`;
I would replace it with the following queries, which use a temporary table _tmp_filtered_classifications (the IDs of the classifications whose name is LIKE 'Home:Top%') and a temporary table _tmp_classifications_of_story (for each story ID 'contained' in _tmp_filtered_classifications, all of its classification names):
DROP TABLE IF EXISTS `_tmp_filtered_classifications`;
CREATE TEMPORARY TABLE _tmp_filtered_classifications
SELECT id FROM classifications WHERE name LIKE 'Home:Top%';
DROP TABLE IF EXISTS `_tmp_classifications_of_story`;
CREATE TEMPORARY TABLE _tmp_classifications_of_story ENGINE=MEMORY
SELECT stories.id AS story_id, classifications2.name
FROM
_tmp_filtered_classifications
INNER JOIN classifications
ON _tmp_filtered_classifications.id=classifications.id
INNER JOIN stories
ON stories.id = classifications.story_id
LEFT JOIN classifications AS classifications2
ON stories.id = classifications2.story_id
GROUP BY 1,2;
SELECT
stories.*,
count(DISTINCT comments.id) AS comments,
GROUP_CONCAT(DISTINCT classifications2.name ORDER BY 1 SEPARATOR ';')
AS classifications_name
FROM
_tmp_filtered_classifications
INNER JOIN classifications
ON _tmp_filtered_classifications.id=classifications.id
INNER JOIN stories
ON stories.id = classifications.story_id
LEFT JOIN _tmp_classifications_of_story AS classifications2
ON stories.id = classifications2.story_id
LEFT JOIN comments
ON stories.id = comments.story_id
WHERE
stories.status = 1
GROUP BY stories.id
ORDER BY stories.id, classifications.`name`, classifications.`position`;
Note that I added some more "order by" clauses to the queries, to check that both queries give the same results (using diff). I also changed count(comments.id) to count(DISTINCT comments.id), since otherwise the number of comments the query computes is wrong (again, because the joins multiply the number of rows).
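The row-multiplication effect warned about here is easy to reproduce: one story with two classifications and three comments produces 2 × 3 = 6 joined rows, so a plain count(comments.id) overcounts. A sketch (SQLite as a stand-in for MySQL, which is an assumption of this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stories (id INTEGER PRIMARY KEY);
CREATE TABLE classifications (id INTEGER PRIMARY KEY, story_id INTEGER);
CREATE TABLE comments (id INTEGER PRIMARY KEY, story_id INTEGER);
INSERT INTO stories VALUES (1);
INSERT INTO classifications VALUES (10, 1), (11, 1);        -- 2 classifications
INSERT INTO comments VALUES (100, 1), (101, 1), (102, 1);   -- 3 comments
""")

plain, distinct = conn.execute("""
    SELECT count(comments.id), count(DISTINCT comments.id)
    FROM stories
    LEFT JOIN classifications ON stories.id = classifications.story_id
    LEFT JOIN comments ON stories.id = comments.story_id
    GROUP BY stories.id
""").fetchone()

assert plain == 6      # inflated by the 2-way classification fan-out
assert distinct == 3   # the real number of comments
```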
Answer 3 (score: 0)
I don't know all the details of the data you're experimenting with, but I do know this: you should perform the operation that matches the least amount of data first, thereby eliminating as much data as possible for the subsequent operations.

Depending on how complex your overall query is, you may not be able to reorder the operations this way. However, you can run two separate queries, where the first one simply eliminates data that definitely won't be needed and then feeds its results to the second one. Someone else suggested using temporary tables, which is a good way to handle that situation.

Let me know if you'd like any clarification of this strategy.

Update: a similar strategy to use when each operation would match roughly the same percentage of data is to time each operation separately and then run the operation that takes the least time first. Some search operations are faster than others, and the fastest one should go first when all other factors are equal. That way, the slower search operations work with less data, and the net result is higher overall performance.
Answer 4 (score: 0)
I'd bet that the LIKE condition is the worst thing in your query.

Are you sure you have to do it that way?

Four steps:

1) Add a bool column IsTopHome to the classifications table
2) Index that column
3) UPDATE classifications SET IsTopHome = 1 WHERE name LIKE 'Home:Top%'
4) Replace the LIKE filter with WHERE classifications.IsTopHome = 1

Your query is too important to let the LIKE operator drag down your performance. And while stories may be updated often, I don't think that's the case for your classifications table. So give this a chance and eradicate the LIKE operator.
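The four steps above can be sketched end to end. Shown with SQLite as a stand-in (an assumption — in MySQL the column would typically be TINYINT(1), and the UPDATE would need to be re-run, e.g. by a trigger or a batch job, whenever classifications change):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classifications (id INTEGER PRIMARY KEY, story_id INTEGER, name TEXT);
INSERT INTO classifications VALUES
    (1, 1, 'Home:Top:News'), (2, 2, 'Home:Top:Sports'), (3, 3, 'World');

-- Steps 1-3: add the flag column, index it, precompute it from the LIKE pattern.
ALTER TABLE classifications ADD COLUMN IsTopHome INTEGER NOT NULL DEFAULT 0;
CREATE INDEX idx_is_top_home ON classifications (IsTopHome);
UPDATE classifications SET IsTopHome = 1 WHERE name LIKE 'Home:Top%';
""")

# Step 4: the indexed flag lookup returns the same rows as the LIKE scan.
via_like = conn.execute(
    "SELECT id FROM classifications WHERE name LIKE 'Home:Top%' ORDER BY id"
).fetchall()
via_flag = conn.execute(
    "SELECT id FROM classifications WHERE IsTopHome = 1 ORDER BY id"
).fetchall()
assert via_like == via_flag == [(1,), (2,)]
```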
Answer 5 (score: 0)
There are a few things you can try here:

1) Create a covering index on classifications.`name`

You can speed up the query by creating a covering index.

A covering index means that all the fields selected in a query are covered by an index; in that case InnoDB (not MyISAM) never has to read the data from the table itself, but only uses the data in the index, which speeds up the select significantly.

CREATE TABLE classifications (
    KEY class_name (name, ... all columns)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
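The index-only effect being described can be observed directly. A sketch with SQLite's EXPLAIN QUERY PLAN, where "USING COVERING INDEX" plays the role of MySQL's "Using index" (the exact plan wording is engine-specific, which is an assumption here; an equality lookup is used to keep the example unambiguous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classifications (id INTEGER PRIMARY KEY, story_id INTEGER, name TEXT);
-- Index every column the query reads, so the table itself never has to be touched.
CREATE INDEX class_name ON classifications (name, story_id);
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT story_id FROM classifications WHERE name = 'Home:Top:News'
""").fetchall()
plan_text = " ".join(row[-1] for row in plan)
assert "COVERING INDEX class_name" in plan_text
```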
2) Instead of classifications.name LIKE 'Home:Top%', use locate('Home:Top', classifications.name)