I have a non-normalized database with roughly 400 million rows. I have already moved all the duplicated values into a new normalized database, so each value can be represented by an id.
Now I need to move all the data across, converting each row into an entry that references those ids.
The problem is that 400 million rows take a while... I need help optimizing.
This query takes 0.4 seconds per row, so the full migration would take months:
INSERT IGNORE INTO normalized.entry (insDate, `date`, project, keyword, url, position, competition, serachEngine)
SELECT
CURDATE() as insDate
, d.id as dateId
, p.id as projectId
, k.id as keywordId
, z.id AS urlId
, old.position
, old.competition
, s.id as searchEngineId
FROM unnormalized.bigtable old
INNER JOIN normalized.`date` d ON old.insDate = d.`date`
INNER JOIN normalized.project p ON old.awrProject = p.project
INNER JOIN normalized.searchEngine s ON old.searchEngine = s.searchEngine
INNER JOIN normalized.keyword k ON old.keyword = k.keyword
INNER JOIN normalized.urlHash z ON old.url = z.url
WHERE old.id >= ".$start." AND old.id <= ".$stop."";
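One way to cut the per-statement overhead (a sketch, not tested against this schema; the batch size of 100,000 is an assumption to be tuned) is to keep the single `INSERT ... SELECT` but drive it in fixed-size primary-key ranges directly in SQL, so each statement moves a whole batch in one server-side pass:

```sql
-- Hypothetical chunked driver: move rows in batches of 100k by primary key.
-- Table and column names match the question; the batch size is an assumption.
SET @start = 0;
SET @stop  = @start + 100000;

INSERT IGNORE INTO normalized.entry
    (insDate, `date`, project, keyword, url, position, competition, serachEngine)
SELECT CURDATE(), d.id, p.id, k.id, z.id, old.position, old.competition, s.id
FROM unnormalized.bigtable old
INNER JOIN normalized.`date`       d ON old.insDate      = d.`date`
INNER JOIN normalized.project      p ON old.awrProject   = p.project
INNER JOIN normalized.searchEngine s ON old.searchEngine = s.searchEngine
INNER JOIN normalized.keyword      k ON old.keyword      = k.keyword
INNER JOIN normalized.urlHash      z ON old.url          = z.url
WHERE old.id >= @start AND old.id < @stop;
-- Repeat with @start = @stop until MAX(old.id) is reached.
```

With ranges on the `id` primary key, each chunk is a sequential scan of one slice of the big table, and the batch size can be raised until the joins stop fitting in memory.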
If I use more PHP and split it into two queries, each entry only takes 0.07 seconds, but that would still take months:
$q = "SELECT tmp.id
, d.id as dateId
, p.id as projectId
, k.id as keywordId
, tmp.position
, tmp.competition
, s.id as searchEngineId
, tmp.url
, z.id AS urlId
FROM unnormalized.bigtable tmp
INNER JOIN normalized.`date` d ON tmp.insDate = d.`date`
INNER JOIN normalized.project p ON tmp.awrProject = p.project
INNER JOIN normalized.searchEngine s ON tmp.searchEngine = s.searchEngine
INNER JOIN normalized.keyword k ON tmp.keyword = k.keyword
INNER JOIN normalized.urlHash z ON tmp.url = z.url
WHERE tmp.id > ".$start." AND tmp.id < ".$stop."";
// echo $q;
$result = mysql_query($q, $local);
if (mysql_num_rows($result) > 0) {
while ($row = mysql_fetch_array($result)) {
$q = "SELECT id FROM normalized.url WHERE url = '".$row["url"]."'";
$resultUrl = mysql_query($q, $local);
$rowUrl = mysql_fetch_array($resultUrl);
        $q = "INSERT IGNORE normalized.entry (insDate, `date`, project, keyword, url, position, competition, serachEngine) VALUES (NOW(), '".$row["dateId"]."', '".$row["projectId"]."', '".$row["keywordId"]."', '".$rowUrl["id"]."', '".$row["position"]."', '".$row["competition"]."', '".$row["searchEngineId"]."')";
        mysql_query($q, $local);
    }
}
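The per-row `INSERT` above pays a network round trip and a statement parse for every entry. A common mitigation (a sketch; the batch size and the literal values are illustrative assumptions) is to accumulate the resolved ids in PHP and flush them as one multi-row `INSERT`:

```sql
-- Instead of one INSERT per row, accumulate e.g. 1000 tuples in the
-- application and send them in a single statement:
INSERT IGNORE INTO normalized.entry
    (insDate, `date`, project, keyword, url, position, competition, serachEngine)
VALUES
    (NOW(), 1, 5, 42, 7, 3, 'low', 2),
    (NOW(), 1, 5, 43, 8, 1, 'low', 2);
    -- ... up to ~1000 value tuples per statement ...
```

Multi-row inserts amortize the round trip and let InnoDB apply the index changes in larger batches, which typically matters far more than the cost of the SELECT side.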
I don't see how I can migrate this data without it taking half a year! Any help is appreciated.
Specs: I am running InnoDB on an Amazon RDS server.
Edit: EXPLAIN SELECT of the first query:
id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra
1, SIMPLE, p, index, NULL, projectName, 42, NULL, 1346, "Using index"
1, SIMPLE, s, index, NULL, searchEngine, 42, NULL, 2336, "Using index; Using join buffer"
1, SIMPLE, k, index, NULL, keyword, 42, NULL, 128567, "Using index; Using join buffer"
1, SIMPLE, tmp, ref, "keyword_url_insDate,keyword,searchEngine,url,awrProject", keyword_url_insDate, 767, func, 115, "Using where"
1, SIMPLE, d, eq_ref, date, date, 3, intradb.tmp.insDate, 1, "Using where; Using index"
1, SIMPLE, z, ref, url, url, 767, bbointradb.tmp.url, 1, "Using index"
SHOW CREATE TABLE:
'rankingUrls201001', 'CREATE TABLE `rankingUrls201001` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `insDate` datetime NOT NULL,
  `keyword` varchar(255) COLLATE utf8_swedish_ci NOT NULL,
  `searchEngine` varchar(25) COLLATE utf8_swedish_ci NOT NULL,
  `url` varchar(255) COLLATE utf8_swedish_ci NOT NULL,
  `position` int(11) NOT NULL,
  `competition` varchar(20) COLLATE utf8_swedish_ci NOT NULL,
  `awrProject` varchar(200) COLLATE utf8_swedish_ci NOT NULL,
  `server` varchar(20) COLLATE utf8_swedish_ci NOT NULL,
  `rank` varchar(40) COLLATE utf8_swedish_ci NOT NULL,
  PRIMARY KEY (`id`),
  KEY `keyword_url_insDate` (`keyword`,`url`,`insDate`),
  KEY `keyword` (`keyword`),
  KEY `searchEngine` (`searchEngine`),
  KEY `url` (`url`),
  KEY `awrProject` (`awrProject`)
) ENGINE=InnoDB AUTO_INCREMENT=2266575 DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci'
Answer 0 (score: 0):
@Robin, to speed up the data-transfer process you should take steps to ensure that all processing happens in memory, and avoid writes to disk wherever possible. If you follow the steps below, the transfer processing time should come down:
First, configure your database instance to allocate as much temporary working space as possible.
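On RDS this is done through a DB parameter group rather than by editing `my.cnf`. A sketch of the settings involved (the sizing is an assumption to be checked against the instance's RAM; `{DBInstanceClassMemory*3/4}` is the formula syntax RDS uses for its defaults):

```sql
-- Inspect the current buffer pool and temp-table sizing (values in bytes):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- In the RDS parameter group, these would then be raised, e.g.:
--   innodb_buffer_pool_size = {DBInstanceClassMemory*3/4}
--   tmp_table_size and max_heap_table_size large enough that the join's
--   temporary tables stay in memory instead of spilling to disk.
```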
Second, split the operation into a multi-step process, so that each processing stage completes with a minimum of write-to-disk activity.
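One way to read "multi-step with minimal writes" (a sketch under assumptions, not necessarily the answerer's exact procedure; `entry_stage` is a hypothetical name) is to resolve the lookups into an index-free staging table first, then load the final table in bulk:

```sql
-- Step 1: resolve ids into a staging table with no secondary indexes,
-- so each write during the long scan is a cheap append.
CREATE TABLE normalized.entry_stage LIKE normalized.entry;
-- (drop any secondary indexes on entry_stage here)

INSERT INTO normalized.entry_stage
SELECT ... ;  -- the five-way join from the question, run in id-range chunks

-- Step 2: copy into the real table in one pass, letting InnoDB build
-- the index entries in larger sorted batches.
INSERT IGNORE INTO normalized.entry SELECT * FROM normalized.entry_stage;
```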
Finally, disable binary logging; this reduces write-to-disk processing and in many cases can cut query response time roughly in half.
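How this is done depends on the environment (a hedged sketch): on a self-managed server the loading session can simply skip the binlog, while on RDS that statement needs the SUPER privilege, which RDS does not grant, so binary logging is controlled at the instance level instead.

```sql
-- Self-managed MySQL: skip binlog writes for this session only.
SET sql_log_bin = 0;
-- On RDS this statement fails for lack of SUPER; there, binary logging
-- is governed by the instance configuration (it is tied to automated
-- backups/replication, so it can only be turned off where those are off).
```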
These steps will speed up the process, but given the volume of data involved, the transfer and processing will still take a considerable amount of time.