I want to build a simple crawler in PHP that lets me fetch the links on a page, echo their URLs, and then crawl to other pages so I can do the same thing across a given domain. Is cURL necessary here? Also, how do I specify the crawler's depth?
So far, I have this:
$dom = new DOMDocument;
// parse the page source (assumes $html already holds the fetched markup);
// @ suppresses the warnings that malformed real-world HTML triggers
@$dom->loadHTML($html);
// walk every <a> element and print its serialized form
foreach ($dom->getElementsByTagName('a') as $node) {
    echo $dom->saveXML($node), PHP_EOL;
}
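To echo just the URLs rather than the whole serialized anchor elements, a minimal variant (assuming $html already holds the fetched page source, as above) could read each node's href attribute instead:

foreach ($dom->getElementsByTagName('a') as $node) {
    echo $node->getAttribute('href'), PHP_EOL; // print only the URL
}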
Answer (score: 2)
Check out Snoopy, a simple wrapper around cURL. Below is some example code:
/*
You need the snoopy.class.php from
http://snoopy.sourceforge.net/
*/
include("snoopy.class.php");
$snoopy = new Snoopy;
// need a proxy?:
//$snoopy->proxy_host = "my.proxy.host";
//$snoopy->proxy_port = "8080";
// set browser and referer:
$snoopy->agent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)";
$snoopy->referer = "http://www.jonasjohn.de/";
// set some cookies:
$snoopy->cookies["SessionID"] = '238472834723489';
$snoopy->cookies["favoriteColor"] = "blue";
// set a raw header:
$snoopy->rawheaders["Pragma"] = "no-cache";
// set some internal variables:
$snoopy->maxredirs = 2;
$snoopy->offsiteok = false;
$snoopy->expandlinks = false;
// set username and password (optional)
//$snoopy->user = "joe";
//$snoopy->pass = "bloe";
// fetch the text of the website www.google.com:
if ($snoopy->fetchtext("http://www.google.com")) {
    // other methods: fetch, fetchform, fetchlinks, submittext and submitlinks
    // response code:
    print "response code: " . $snoopy->response_code . "<br/>\n";
    // print the headers (foreach replaces the original each() construct,
    // which was deprecated in PHP 7.2 and removed in PHP 8):
    print "<b>Headers:</b><br/>";
    foreach ($snoopy->headers as $key => $val) {
        print $key . ": " . $val . "<br/>\n";
    }
    print "<br/>\n";
    // print the text of the website:
    print "<pre>" . htmlspecialchars($snoopy->results) . "</pre>\n";
}
else {
    print "Snoopy: error while fetching document: " . $snoopy->error . "\n";
}
You will want to use "fetchlinks" to grab the links.
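To tie this back to the depth question: below is a hedged sketch of a depth-limited crawl built on fetchlinks(). The crawl() function, the $visited set, the $maxDepth parameter, and the example start URL are illustrative names invented here, not part of Snoopy's API; Snoopy only fetches and resolves the links, while the recursion, the depth cap, and the same-host check are plain PHP:

include("snoopy.class.php");

// Illustrative helper, not part of Snoopy: recursively follow links
// up to $maxDepth levels, skipping URLs already seen.
function crawl($url, $depth, $maxDepth, &$visited) {
    if ($depth > $maxDepth || isset($visited[$url])) {
        return;
    }
    $visited[$url] = true;

    $snoopy = new Snoopy;
    if (!$snoopy->fetchlinks($url)) {
        print "Snoopy: error while fetching " . $url . ": " . $snoopy->error . "\n";
        return;
    }

    // fetchlinks() leaves the (fully qualified) links in $snoopy->results
    foreach ($snoopy->results as $link) {
        echo $link, PHP_EOL;
        // stay on the starting host so the crawl does not wander off-domain
        if (parse_url($link, PHP_URL_HOST) === parse_url($url, PHP_URL_HOST)) {
            crawl($link, $depth + 1, $maxDepth, $visited);
        }
    }
}

$visited = array();
crawl("http://www.example.com/", 0, 2, $visited); // crawl at most 2 levels deep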