Search a remote web page for a specific string

Posted: 2011-05-02 19:39:43

Tags: php

I want to create a PHP script that will go to another website (given a URL) and check the page source of that page for a certain string of data.

I actually have a way to do this right now, but I'm looking for an alternative.

Right now I'm using the file_get_contents PHP function to read the page source of the URL into a variable:

$link = "www.example.com";
$linkcontents = file_get_contents($link);

Then I use the strpos PHP function to search the page for the string I'm looking for:

$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) == false) {
    echo "String not found";
} else {
    echo "String found";
}

I've heard that cURL is good for handling anything to do with URLs; I'm just not sure how I would use it to do what the file_get_contents and strpos combination I mentioned above does.

Or, if there's another way to do it, I'm all ears :-)
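
For reference, a minimal, untested sketch of what that cURL equivalent might look like (all the curl_* calls are the standard PHP cURL API; the URL and needle below are placeholders):

$link = "http://www.example.com";
$needle = "<div>find me</div>";

$ch = curl_init($link);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the page source as a string instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
$linkcontents = curl_exec($ch);
curl_close($ch);

if ($linkcontents !== false && strpos($linkcontents, $needle) !== false) {
    echo "String found";
} else {
    echo "String not found";
}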

3 Answers:

Answer 0 (score: 1):

We build a cURL function like this:

function Visit($irc_server) {
    // Open the connection
    $user_agent = isset($_SERVER['HTTP_USER_AGENT'])
        ? $_SERVER['HTTP_USER_AGENT']
        : 'Mozilla/5.0 (compatible; PHP script)';  // fallback so CLI runs don't raise a notice
    $port = '80';
    $ch = curl_init();    // initialize curl handle
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($ch, CURLOPT_URL, $irc_server);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($ch, CURLOPT_PORT, $port);  // note: forcing port 80 will break https URLs

    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    if ($curl_errno > 0) {
        $return = "cURL Error ($curl_errno): $curl_error\n";
    } else {
        $return = $data;
    }
    curl_close($ch);
    /*
    if ($httpcode >= 200 && $httpcode < 300) {
        $return = 'OK';
    } else {
        $return = 'Nok';
    }
    */

    return $return;
}
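
On its own, Visit() can stand in for file_get_contents in the question's approach; a minimal sketch under that assumption (the URL and needle are placeholders):

$html = Visit("http://www.example.com");
if (strpos($html, "<div>find me</div>") !== false) {
    echo "String found";
} else {
    echo "String not found";
}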

Another function to process our URL:

function tenta($url) {
    // Now, create an instance of your class, define the behaviour
    // of the crawler (see the class reference for more options and details)
    // and start the crawling process.
    $crawler = new MyCrawler();

    // URL to crawl
    $crawler->setURL($url);

    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");

    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

    // Store and send cookie data like a browser does
    $crawler->enableCookieHandling(true);

    // Set the traffic limit to 1 MB (in bytes;
    // for testing we don't want to "suck" in the whole site)
    $crawler->setTrafficLimit(1000 * 1024);

    // That's enough, now here we go
    $crawler->go();

    // At the end, after the process is finished, we could print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();

    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";
    /*
    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb;
    */
}

We build our class (PHPCrawler must be included and this class defined before tenta() is called):

// It may take a while to crawl a site ...
set_time_limit(110000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");

// Extend the class and override the handleDocumentInfo() method
class MyCrawler extends PHPCrawler
{
  function handleDocumentInfo($DocInfo)
  {
    global $find;

    // Just detect linebreak for output ("\n" in CLI mode, otherwise "<br>").
    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    // Print the URL and the HTTP status code
    echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;
    //echo $img_url = '<img src="'.$DocInfo->url.'.jpg" width = "150" height = "150" />'.$lb;

    // We are looking for our keywords on this domain.
    // Fetch the page once instead of re-fetching it for every keyword
    // (the already-downloaded source is also available in $DocInfo->source).
    $page = Visit($DocInfo->url);
    foreach ($find as $matche) {
        $matchb = implode(',', $matche);
        //$matchb = $matche['word'];
        // preg_quote() escapes any regex metacharacters in the keyword
        if (preg_match("/(".preg_quote($matchb, '/').")/i", $page)) {
            echo "<a href=".$DocInfo->url." target=_blank>".$DocInfo->url."</a> <b style='color:red;'>".$matche['word']."</b>".$lb;
        }
    }
    // Print the referring URL
    echo "Referer-page: ".$DocInfo->referer_url.$lb;

    // Print whether the content of the document was received or not
    if ($DocInfo->received == true)
      echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
    else
      echo "Content not received".$lb;

    // Now you could do something with the content of the actual
    // received page or file ($DocInfo->source); we skip it in this example

    echo $lb;

    flush();
  }
}

Our variables in an array; these are the URLs we will crawl:

$url = array(
  array("id"=>7, "name"=>"soltechit","url" => "soltechit.co.uk"),
  array("id"=>5, "name"=>"CNN","url" => "cnn.com", "description" => "A social utility that connects people, to keep up with friends, upload photos, share links")
);

And the strings we are looking for:

$find = array(
  array("word" => "routers"),
  array("word" => "Moose"),
  array("word" => "worm"),
  array("word" => "kenya"),
  array("word" => "alshabaab"),
  array("word" => "ISIS"),
  array("word" => "security"),
  array("word" => "windows 10 release"),
  array("word" => "hacked")
);

And we call it:

foreach ($url as $site) {
    echo '<h2>'.$site['name'].'</h2>';
    if (isset($site['description'])) {   // not every entry has a description
        echo $site['description'].'<br>';
    }
    tenta($site['url']);                 // tenta() prints its output while crawling
    echo '<br>';
}

Answer 1 (score: 0):

If file_get_contents works for the task at hand, why change anything? I say keep using it.

Note that you need to pass it a URL starting with "http://", otherwise it will try to open a local file named "www.example.com".

Also, it's better to do the strpos check with === false, since otherwise a match at position 0 would not be recognized (because 0 == false, but not 0 === false).
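
Putting those two fixes together, the snippet from the question becomes:

$link = "http://www.example.com";        // scheme included, so no local file lookup
$linkcontents = file_get_contents($link);

$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) === false) {  // strict comparison catches a match at position 0
    echo "String not found";
} else {
    echo "String found";
}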

Answer 2 (score: -2):

Better is here, I think this will help:
