How can I apply robots.txt rules to my PHP DOMDocument web scraper and use RollingCurlX to crawl multiple URLs at once?

Asked: 2019-11-30 11:36:44

Tags: php web-scraping web-crawler domdocument

My basic web crawler uses DOMDocument and file_get_contents to crawl URLs. The problem is that it does not respect robots.txt, so it still crawls links and URLs that a site's Disallow rules say should be skipped. On top of that, the program is very slow and cannot download URLs fast enough for the processing I want to do afterwards.

I would like your help with three things:

1. First, add a PHP condition that respects the robots.txt policy, so that URLs whose directories or paths are disallowed by a site's robots.txt file are never downloaded.

2. Second, modify the code I already have working with PHP DOMDocument (DOMElement) so that, with the library https://github.com/marcushat/RollingCurlX, it can download 10,000 URLs at a time, and quickly.

3. Process the 10,000 downloaded URLs by extracting the title and description, just as I already do in my code.

Here is the PHP code of my web crawler:

// This is our starting point. Change this to whatever URL you want.
$start = "";
// Our 2 global arrays containing our links to be crawled.
$already_crawled = array();
$crawling = array();
function get_details($url) {
    // The array that we pass to stream_context_create() to modify our User Agent.
    $options = array('http'=>array('method'=>"GET", 'header'=>"User-Agent: howBot/0.1\r\n"));
    // Create the stream context.
    $context = stream_context_create($options);
    // Create a new instance of PHP's DOMDocument class.
    $doc = new DOMDocument();
    // Use file_get_contents() to download the page, pass the output of file_get_contents()
    // to PHP's DOMDocument class.
    @$doc->loadHTML(@file_get_contents($url, false, $context));
    // Create an array of all of the title tags.
    $title = $doc->getElementsByTagName("title");
    // There should only be one <title> on each page, so our array should have only 1 element.
    $title = $title->item(0)->nodeValue;
    // Give $description and $keywords no value initially. We do this to prevent errors.
    $description = "";
    $keywords = "";
    // Create an array of all of the pages <meta> tags. There will probably be lots of these.
    $metas = $doc->getElementsByTagName("meta");
    // Loop through all of the <meta> tags we find.
    for ($i = 0; $i < $metas->length; $i++) {
        $meta = $metas->item($i);
        // Get the description and the keywords.
        if (strtolower($meta->getAttribute("name")) == "description")
            $description = $meta->getAttribute("content");
        if (strtolower($meta->getAttribute("name")) == "keywords")
            $keywords = $meta->getAttribute("content");
    }
    // Return our JSON string containing the title, description, keywords and URL.
    return '{ "Title": "'.str_replace("\n", "", $title).'", "Description": "'.str_replace("\n", "", $description).'", "Keywords": "'.str_replace("\n", "", $keywords).'", "URL": "'.$url.'"},';
}
function follow_links($url) {
    // Give our function access to our crawl arrays.
    global $already_crawled;
    global $crawling;
    // The array that we pass to stream_context_create() to modify our User Agent.
    $options = array('http'=>array('method'=>"GET", 'header'=>"User-Agent: howBot/0.1\r\n"));
    // Create the stream context.
    $context = stream_context_create($options);
    // Create a new instance of PHP's DOMDocument class.
    $doc = new DOMDocument();
    // Use file_get_contents() to download the page, pass the output of file_get_contents()
    // to PHP's DOMDocument class.
    @$doc->loadHTML(@file_get_contents($url, false, $context));
    // Create an array of all of the links we find on the page.
    $linklist = $doc->getElementsByTagName("a");
    // Loop through all of the links we find.
    foreach ($linklist as $link) {
        $l =  $link->getAttribute("href");
        // Process all of the links we find. This is covered in part 2 and part 3 of the video series.
        if (substr($l, 0, 1) == "/" && substr($l, 0, 2) != "//") {
            $l = parse_url($url)["scheme"]."://".parse_url($url)["host"].$l;
        } else if (substr($l, 0, 2) == "//") {
            $l = parse_url($url)["scheme"].":".$l;
        } else if (substr($l, 0, 2) == "./") {
            $l = parse_url($url)["scheme"]."://".parse_url($url)["host"].dirname(parse_url($url)["path"]).substr($l, 1);
        } else if (substr($l, 0, 1) == "#") {
            $l = parse_url($url)["scheme"]."://".parse_url($url)["host"].parse_url($url)["path"].$l;
        } else if (substr($l, 0, 3) == "../") {
            $l = parse_url($url)["scheme"]."://".parse_url($url)["host"]."/".$l;
        } else if (substr($l, 0, 11) == "javascript:") {
            continue;
        } else if (substr($l, 0, 5) != "https" && substr($l, 0, 4) != "http") {
            $l = parse_url($url)["scheme"]."://".parse_url($url)["host"]."/".$l;
        }
        // If the link isn't already in our crawl array add it, otherwise ignore it.
        if (!in_array($l, $already_crawled)) {
                $already_crawled[] = $l;
                $crawling[] = $l;
                // Output the page title, descriptions, keywords and URL. This output is
                // piped off to an external file using the command line.
                echo get_details($l)."\n";
        }
    }
    // Remove an item from the array after we have crawled it.
    // This prevents infinitely crawling the same page.
    array_shift($crawling);
    // Follow each link in the crawling array.
    foreach ($crawling as $site) {
        follow_links($site);
    }
}
// Begin the crawling process by crawling the starting link first.
follow_links($start);
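
For point 1 (respecting robots.txt), the direction I have in mind is to download and parse each host's robots.txt once, keep its Disallow rules, and skip any URL whose path matches one of them before it is queued. Below is only a rough, untested sketch of that idea: the function names (get_robots_rules, is_allowed) are mine, it only honours plain Disallow lines under "User-agent: *", and it ignores Allow, wildcards and Crawl-delay.

// Cache of parsed robots.txt rules, keyed by scheme://host, so each host is fetched only once.
$robots_rules = array();

// Fetch and parse robots.txt for the URL's host.
// NOTE: untested sketch -- only collects Disallow lines from the "User-agent: *" group.
function get_robots_rules($url) {
    global $robots_rules;
    $parts = parse_url($url);
    $base = $parts["scheme"]."://".$parts["host"];
    if (isset($robots_rules[$base])) return $robots_rules[$base];
    $rules = array();
    $txt = @file_get_contents($base."/robots.txt");
    if ($txt !== false) {
        $applies = false;
        foreach (preg_split('/\r\n|\r|\n/', $txt) as $line) {
            $line = trim(preg_replace('/#.*/', '', $line)); // strip comments and whitespace
            if ($line === "") continue;
            if (stripos($line, "User-agent:") === 0) {
                $applies = (trim(substr($line, 11)) === "*"); // simplistic: only the "*" group
            } else if ($applies && stripos($line, "Disallow:") === 0) {
                $path = trim(substr($line, 9));
                if ($path !== "") $rules[] = $path;
            }
        }
    }
    $robots_rules[$base] = $rules;
    return $rules;
}

// TRUE if robots.txt does not disallow the URL's path for our crawler.
function is_allowed($url) {
    $path = parse_url($url, PHP_URL_PATH);
    if ($path === null || $path === false || $path === "") $path = "/";
    foreach (get_robots_rules($url) as $rule) {
        if (strpos($path, $rule) === 0) return false; // path starts with a disallowed prefix
    }
    return true;
}

In follow_links() the queueing test would then become something like if (!in_array($l, $already_crawled) && is_allowed($l)) { ... }, so disallowed URLs are never passed to get_details().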

To show that I have already put in some effort on the RollingCurlX side as well, here is what I have done so far:

<?php

require_once 'rollingcurlx.class.php';

function get_details($url) {

    $post_data = null;
    //$user_data = null;
    $options = array(CURLOPT_SSL_VERIFYPEER => FALSE, CURLOPT_SSL_VERIFYHOST => FALSE);
  // $headers = array('http'=>array('method'=>"GET", 'headers'=>"User-Agent: chegSpider/0.1\n"));
    $RollingCurlX = new RollingCurlX(10000);
    $RollingCurlX->setOptions($options);
    $RollingCurlX->setTimeout(86400000); // 86,400,000 ms => 1 day
    $RollingCurlX->setHeaders(array('http'=>array('method'=>"GET", 'headers'=>"User-Agent: chegSpider/0.1\n")));

  $RollingCurlX->addRequest($url, $post_data);
/*
  // The array that we pass to stream_context_create() to modify our User Agent.
  $options = array('http'=>array('method'=>"GET", 'headers'=>"User-Agent: chegSpider/0.1\n"));
  // Create the stream context.
  $context = stream_context_create($options);
*/
  // Create a new instance of PHP's DOMDocument class.
  $doc = new DOMDocument();
  // @$doc->loadHTML(@file_get_contents($url, false, $context));
  @$doc->loadHTML(@$RollingCurlX->execute());
  $pageDownloadedHtml = @$doc->saveHTML();
  // fread($pageDownloadedHtml);

  // Get the lang attribute of the <html> tag (getAttribute() has to be called on an element, not the node list).
      $langPage = $doc->getElementsByTagName("html");
      $lang = $langPage->item(0)->getAttribute("lang");

      // Create an array of all of the title tags.
      $title = $doc->getElementsByTagName("title");
      // There should only be one <title> on each page, so our array should have only 1 element.
      $title = $title->item(0)->nodeValue;
      // Give $description and $keywords no value initially. We do this to prevent errors.
      $description = "";
      $keywords = "";
      // Create an array of all of the pages <meta> tags. There will probably be lots of these.
      $metas = $doc->getElementsByTagName("meta");
      // Loop through all of the <meta> tags we find.
      for ($i = 0; $i < $metas->length; $i++) {
        $meta = $metas->item($i);
        // Get the keywords.
        if (strtolower($meta->getAttribute("name")) == "keywords")
          $keywords = $meta->getAttribute("content");

      }

}
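
If I read the RollingCurlX README correctly, execute() does not return the page HTML; each completed request is handed to a per-request callback instead, so the DOMDocument work has to happen inside that callback rather than around execute(). The sketch below is the untested direction I am aiming for: the callback signature ($response, $url, $request_info, $user_data, $time) is the one shown in the README, while process_page, fetch_all, the chegSpider user agent and the concurrency of 100 are just my own choices.

<?php
require_once 'rollingcurlx.class.php';

// Called by RollingCurlX once per completed request, with the raw response body.
function process_page($response, $url, $request_info, $user_data, $time) {
    if ($response === false || $response === null || $response === "") return;

    $doc = new DOMDocument();
    @$doc->loadHTML($response);

    // lang attribute of the <html> tag.
    $lang = "";
    $html = $doc->getElementsByTagName("html");
    if ($html->length > 0) $lang = $html->item(0)->getAttribute("lang");

    // <title> text.
    $title = "";
    $titles = $doc->getElementsByTagName("title");
    if ($titles->length > 0) $title = $titles->item(0)->nodeValue;

    // <meta name="keywords"> content.
    $keywords = "";
    foreach ($doc->getElementsByTagName("meta") as $meta) {
        if (strtolower($meta->getAttribute("name")) == "keywords")
            $keywords = $meta->getAttribute("content");
    }

    // One JSON object per page, same idea as the echo in my original get_details().
    echo json_encode(array("URL" => $url, "Lang" => $lang, "Title" => $title, "Keywords" => $keywords)).",\n";
}

// $urls would be the list of robots.txt-allowed URLs to download for the day.
function fetch_all(array $urls) {
    $rcx = new RollingCurlX(100); // 100 simultaneous requests; 10,000 at once seems like too many
    $rcx->setTimeout(60000);      // 60,000 ms per request, instead of one timeout for the whole day
    $rcx->setOptions(array(CURLOPT_SSL_VERIFYPEER => false,
                           CURLOPT_SSL_VERIFYHOST => false,
                           CURLOPT_FOLLOWLOCATION => true));
    $rcx->setHeaders(array('User-Agent: chegSpider/0.1')); // plain cURL header strings, not a stream context array
    foreach ($urls as $url) {
        $rcx->addRequest($url, null, 'process_page');
    }
    $rcx->execute(); // responses are handled by process_page() as they finish
}

Does this look like the right way to combine RollingCurlX with DOMDocument, or am I misunderstanding how the callback is meant to be used?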

Please help me improve this code, which I know is still very rough, so that it:

  1. Downloads 10,000 URLs once a day through the library https://github.com/marcushat/RollingCurlX.

  2. Combines that with DOMDocument to retrieve the language, title and keywords of each downloaded URL, which is what I am trying to do around lines 30 to 49 of my attempt (a rough outline of how I picture the two parts fitting together follows this list).
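
Putting the two parts together, the daily run I picture would read the day's URL list, drop everything robots.txt disallows, and hand the rest to RollingCurlX in batches. Again this is only a hypothetical outline reusing the is_allowed() and fetch_all() names from my sketches above, and urls_to_crawl.txt is just a placeholder filename:

// Hypothetical daily driver: read the URL list, drop disallowed URLs, fetch the rest.
$urls = file('urls_to_crawl.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$allowed = array_values(array_filter($urls, 'is_allowed'));

// Process in slices so memory stays bounded even with 10,000 URLs per day.
foreach (array_chunk($allowed, 1000) as $batch) {
    fetch_all($batch);
}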

Thank you.

0 Answers:

There are no answers yet.