I've been writing scripts for a while, and the few I've written seem to run regularly without timing out. This one, however, does time out, and I'm wondering whether you could suggest ways to optimize it for speed.
Its job is to pull IDs from an existing database and use them in API lookups to fetch item data. The problem is that I need to look up 45,000 items, and I'm already managing the flow with curl multi handles.
I'm a bit stuck, and I'd like to know whether you have any ideas for making this script run fast enough to avoid the timeout.
Note: my database connection details are redacted here, but the connection itself works fine.
<?php
$s = microtime(true); // Start time, used to report total request duration at the end
//CONNECT TO THE DATABASE
$DB_NAME = 'database';
$DB_HOST = 'mysql.myhost.com';
$DB_USER = 'myusername';
$DB_PASS = 'mypass';
$con = new mysqli($DB_HOST, $DB_USER, $DB_PASS, $DB_NAME);
if (mysqli_connect_errno()) {
printf("Connect failed: %s\n", mysqli_connect_error());
exit();
}
//END OF DB CONNECT
//TP UTIL
function array_2d_to_1d($input_array) {
$output_array = array();
for ($i = 0; $i < count($input_array); $i++) {
for ($j = 0; $j < count($input_array[$i]); $j++) {
$output_array[] = $input_array[$i][$j];
}
}
return $output_array;
}
function tableExists($con, $table) {
$show = "SHOW TABLES LIKE '$table'";
$result = $con->query($show) or die($con->error.__LINE__);
return $result->num_rows == 1;
}
//END TP UTIL
//++++++++++GET ITEM IDS++++++++++++//
$table = "shinies_primitiveitems_table_NEW";
$urls = array(); // initialize so the curl loop below is safe even if the table is missing or empty
if(tableExists($con, $table)){
$query = "SELECT ItemID FROM $table"; // only the ItemID column is used below
$result = $con->query($query) or die($con->error.__LINE__);
if($result->num_rows > 0) {
while($row = $result->fetch_assoc()) {
$urls[] = "https://api.guildwars2.com/v2/items/".stripslashes($row['ItemID']);
} //end while loop
} //end if
}
//++++++++++END GET ITEM IDS++++++++++++//
//++++++++++MULTI CURL REQUESTS FOR API+++++++++++//
// Define the URLs
//$urls = $apiURLArray;
// Create get requests for each URL
$mh = curl_multi_init();
$ch = array();
foreach($urls as $i => $url)
{
//echo $url."<br />";
$ch[$i] = curl_init($url);
curl_setopt($ch[$i], CURLOPT_RETURNTRANSFER, 1);
curl_multi_add_handle($mh, $ch[$i]);
}
// Start performing the request
do {
$execReturnValue = curl_multi_exec($mh, $runningHandles);
} while ($execReturnValue == CURLM_CALL_MULTI_PERFORM);
// Loop and continue processing the request
while ($runningHandles && $execReturnValue == CURLM_OK) {
// Wait for activity on any of the sockets
$numberReady = curl_multi_select($mh);
if ($numberReady == -1) {
// curl_multi_select can return -1 on some systems even when work remains;
// back off briefly so this loop does not spin at 100% CPU
usleep(100000);
}
// Pull in any new data, or at least handle timeouts
do {
$execReturnValue = curl_multi_exec($mh, $runningHandles);
} while ($execReturnValue == CURLM_CALL_MULTI_PERFORM);
}
// Check for any errors
if ($execReturnValue != CURLM_OK) {
trigger_error("Curl multi read error $execReturnValue\n", E_USER_WARNING);
}
// Extract the content
$res = array();
foreach($urls as $i => $url)
{
// Check for errors
$curlError = curl_error($ch[$i]);
if($curlError == "") {
$res[$i] = curl_multi_getcontent($ch[$i]);
} else {
print "Curl error on handle $i: $curlError\n";
}
// Remove and close the handle
curl_multi_remove_handle($mh, $ch[$i]);
curl_close($ch[$i]);
}
// Clean up the curl_multi handle
curl_multi_close($mh);
//var_dump(json_decode($res, true));
//echo count($res)."<br />";
//Decode data
// Use foreach rather than a count-based for loop: $res can have gaps
// where a handle errored out and no content was stored for that index.
$dataArray = array();
foreach($res as $i => $json){
$dataArray[$i] = json_decode($json, true);
}
//echo count($dataArray)."<br />";
//var_dump($dataArray);
//$data = array_2d_to_1d($dataArray);
//echo count($data)."<br />";
/*
//Find attributes of each item
for($i=0;$i<count($data);$i++){
echo $data[$i]['name']." - ".$data[$i]['icon'];
}*/
//turn dataArray into a single dimensional data array
//$data = array_2d_to_1d($dataArray);
//print_r($data);
//++++++++++END REQUEST+++++++++++//
// Print the response data - DEBUG
echo "<p>Total request time: ".round(microtime(true) - $s, 4)." seconds.</p>";
?>
Answer 0 (score: 0):
Is the script timing out because of the maximum execution time limit? If so, perhaps you can raise the timeout to a larger value with http://php.net/manual/en/function.set-time-limit.php or something similar?
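A minimal sketch of that suggestion (assuming your host actually lets the script change the limit; some shared hosts lock it down in php.ini, and the 600-second value here is an arbitrary example):

```php
<?php
// Raise the maximum execution time for this long-running script.
// set_time_limit(0) removes the limit entirely; a finite value such as
// 600 seconds is safer if the host allows it.
set_time_limit(600);

// Equivalent ini-based form; both update the max_execution_time setting.
ini_set('max_execution_time', '600');
```

Note that set_time_limit() restarts the timeout counter from the point it is called, so it can also be called inside a long loop to extend the budget incrementally.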
If it's timing out because the curl requests themselves stop returning, perhaps the remote end is rate-limiting the requests, or you're hitting some other limit by trying to start 45,000 TCP requests at once?
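One way to avoid opening 45,000 connections at once is to process the URL list in fixed-size batches, one curl_multi handle per batch. This is only a sketch under stated assumptions: fetch_in_batches is a hypothetical helper (not part of the original script), and the batch size of 100 and the 30-second per-request timeout are arbitrary starting points to tune:

```php
<?php
// Hypothetical helper: fetch the given URLs with curl_multi, but only
// $batchSize at a time, so the number of simultaneous TCP connections
// stays bounded. Results are keyed by the original array index; URLs
// whose handle reported a curl error are simply skipped.
function fetch_in_batches(array $urls, $batchSize = 100) {
    $results = array();
    foreach (array_chunk($urls, $batchSize, true) as $batch) {
        $mh = curl_multi_init();
        $handles = array();
        foreach ($batch as $i => $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30); // per-request timeout (seconds)
            curl_multi_add_handle($mh, $ch);
            $handles[$i] = $ch;
        }
        // Drive all transfers in this batch to completion
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh, 1.0);
            }
        } while ($running && $status == CURLM_OK);
        // Collect content and release the handles
        foreach ($handles as $i => $ch) {
            if (curl_error($ch) === '') {
                $results[$i] = curl_multi_getcontent($ch);
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }
    return $results;
}
```

Separately, if the API offers a bulk endpoint (for example a `?ids=1,2,3` query parameter; check the API documentation for whether this exists and what its per-request cap is), fetching many items per request would cut the request count far more than any tuning of the multi-handle loop.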