I have a function that takes an array of URLs as input. I've verified the URLs are correct and I can iterate over them fine. I've also used curl_getinfo to confirm curl is requesting the right page. However, the curl output (the HTML) is identical for every page. Here is my code:
$urls = array();
$urls = getpages($mainpage);
print_r($urls);

foreach ($urls as $link) {
    echo $link . '<br><br><br>';
    $circdl = my_curl($link);
    echo $circdl . '<br><br><br>';
    $circdl = NULL;
}
The URL array prints like this:
Array ( [0] => http://www.site.com/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=1 [1] => http://www.site.com/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=2
$link also echoes out the same URL that curl_getinfo reports from curl. I've run another array of URLs through this same loop and they worked fine, so I suspect the problem here is the format of the URLs (the ampersands). I'm really stumped as to why these pages aren't downloading as expected.
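A minimal sketch of what the ampersand hunch points at: when a raw URL containing `&curren...` is echoed straight into an HTML page, browsers can decode the `&curren` prefix as the `¤` entity, so `&currentPageNumber` displays as `¤tPageNumber` even though the string in PHP is fine. Escaping the URL before echoing it makes the true value visible (the URL below is the question's placeholder URL):

```php
<?php
// A raw URL whose query string has a parameter starting with "curren":
$url = 'http://www.site.com/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=1';

// Escaping before echoing turns & into &amp;, so the browser shows
// "&currentPageNumber" instead of decoding "&curren" as the ¤ sign:
echo htmlspecialchars($url, ENT_QUOTES), "\n";
```

This only affects how the URL is *displayed*; the value passed to curl is unchanged, which is one way to rule the display out as the cause.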
Here is the my_curl function:
function my_curl($url)
{
    $timeout = 10;
    $error_report = TRUE;
    $curl = curl_init();
    $cookiepath = drupal_get_path('module', 'mymodule') . '/cookies.txt';

    // HEADERS AND OPTIONS APPEAR TO BE A FIREFOX BROWSER REFERRED BY GOOGLE
    $header = array();
    $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
    $header[] = "Cache-Control: max-age=0";
    $header[] = "Connection: keep-alive";
    $header[] = "Keep-Alive: 300";
    $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
    $header[] = "Accept-Language: en-us,en;q=0.5";
    $header[] = "Pragma: "; // BROWSERS USUALLY LEAVE BLANK

    // SET THE CURL OPTIONS - SEE http://php.net/manual/en/function.curl-setopt.php
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6');
    curl_setopt($curl, CURLOPT_HTTPHEADER, $header);
    curl_setopt($curl, CURLOPT_REFERER, 'http://www.google.com');
    curl_setopt($curl, CURLOPT_ENCODING, 'gzip,deflate');
    curl_setopt($curl, CURLOPT_AUTOREFERER, TRUE);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($curl, CURLOPT_COOKIEFILE, $cookiepath);
    curl_setopt($curl, CURLOPT_COOKIEJAR, $cookiepath);
    curl_setopt($curl, CURLOPT_TIMEOUT, $timeout);

    // RUN THE CURL REQUEST AND GET THE RESULTS
    $htm = curl_exec($curl);

    // Check the page request
    //$info = curl_getinfo($curl);
    //echo 'Took ' . $info['total_time'] . ' seconds to send a request to ' . $info['url'];

    // ON FAILURE HANDLE ERROR MESSAGE
    if ($htm === FALSE) {
        if ($error_report) {
            $err = curl_errno($curl);
            $inf = curl_getinfo($curl);
            echo "CURL FAIL: $url TIMEOUT=$timeout, CURL_ERRNO=$err";
            var_dump($inf);
        }
        curl_close($curl);
        return FALSE;
    }

    // ON SUCCESS RETURN XML / HTML STRING
    curl_close($curl);
    return $htm;
}
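Not part of the original function, but one way to sidestep hand-managed ampersands entirely is to build each query string with http_build_query before handing the URL to my_curl. The parameter names below are taken from the question's URLs:

```php
<?php
// Assemble the query from an array; PHP inserts and
// percent-encodes the separators itself:
$query = http_build_query(array(
    'promotionId'       => 81498,
    'sneakpeek'         => '',
    'currentPageNumber' => 2,
));
$url = 'http://www.site.com/savings/viewcircular?' . $query;
echo $url, "\n";
// http://www.site.com/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=2
```

Since the ampersands never exist as hand-typed text, there is nothing for an entity decoder to mangle along the way.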
Interestingly, if I run this:
echo my_curl('http://www.site.com/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=2');
the output is correct!! ?? :(
Thanks for your help!
Answer 0 (score: 0)
I found that the problem was the encoding of the URLs being passed to my function. I was mistakenly stripping the encoding and appending a "human-readable" ending to the URL, which kept the host from identifying the page correctly. The fix was to ignore my better judgment and leave the encoding alone. Now when the array is passed, the pages load correctly. Thanks to everyone who took a look at this; it had me stumped!
Here is the relevant snippet of my code:
function getpages($url) {
    global $host;
    $circdl = my_curl($url);
    $circqp = htmlqp($circdl, 'body');

    // Extract last page number
    $lastpagenumber = $circqp->branch()->find('li[class="last-page"]')->text();
    $lastpagenumberurl = $circqp->branch()->find('li[class="last-page"]')->children('a')->attr('href');

    // Extract page link root
    $pagelinkroot = substr_replace($lastpagenumberurl, "", -2);
    $currentpage = "=";
    $lpn = intval($lastpagenumber);

    // Move through the remaining pages
    $pagelinks = array();
    for ($i = 1; $i <= $lpn; ++$i) {
        $pagelinks[] = join(array($host, $pagelinkroot, $currentpage, $i));
    }
    return $pagelinks;
}
substr_replace was what stripped the encoding. I changed its offset from 20 to 2, so it only strips the page-number ending, which is then re-appended to the link inside the loop.
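To illustrate the offset change described above in isolation (the href value here is a made-up example matching the shape of the question's URLs): stripping 2 characters removes only the trailing page number, while stripping 20 also eats the parameter name, which is what was destroying the encoding.

```php
<?php
// Hypothetical href as scraped from the "last-page" link:
$lastpagenumberurl = '/savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber=9';

// Offset -2 strips only "=9", leaving the query encoding intact:
echo substr_replace($lastpagenumberurl, '', -2), "\n";
// -> /savings/viewcircular?promotionId=81498&sneakpeek=&currentPageNumber

// Offset -20 would also remove "&currentPageNumber", mangling the URL:
echo substr_replace($lastpagenumberurl, '', -20), "\n";
// -> /savings/viewcircular?promotionId=81498&sneakpeek=
```

With the -2 offset, re-appending "=" . $i in the loop reconstructs a valid URL for every page.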