Retrieving URLs from an XML file and collecting data from each URL into my database - PHP / cURL / XML

Asked: 2014-06-14 15:48:30

Tags: php xml curl

The XML file contains roughly 50,000 different URLs from which I am trying to collect data and then insert into or update my database.

I am currently using the code below, but it times out because of the sheer amount of data being processed. How can I improve its performance?

URLs.xml (up to 50,000 URLs)

    <?xml version="1.0" encoding="utf-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
    <url>
        <loc>http://url.com/122122-rob-jones?</loc>
        <lastmod>2014-05-05T07:12:41+08:00</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.9</priority>
    </url>
    </urlset>

index.php

    <?php
    include 'config.php';
    include 'custom.class.php';
    require_once('SimpleLargeXMLParser.class.php');

    $custom = new custom();

    $xml = dirname(__FILE__)."/URLs.xml";

    // create a new object
    $parser = new SimpleLargeXMLParser();
    // load the XML
    $parser->loadXML($xml);

    $parser->registerNamespace("urlset", "http://www.sitemaps.org/schemas/sitemap/0.9");
    $array = $parser->parseXML("//urlset:url/urlset:loc");

    for ($i = 0, $n = count($array); $i < $n; $i++) {
        $FirstURL = $array[$i];
        $URL = substr($FirstURL, 0, strpos($FirstURL, '?')) . "/";
        $custom->infoc($URL);
    }
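If the script is dying on PHP's execution time limit rather than on the requests themselves, one stop-gap (a sketch only, assuming the script is run from the command line and that `$array` and `$custom` are set up as above) is to lift the limit and work through the list in fixed-size chunks, so progress is visible and a crash loses at most one chunk:

```php
<?php
// Sketch: remove the time limit and process the URL list in chunks.
// The chunk size of 500 is an assumption - tune it to your environment.
set_time_limit(0); // no execution time limit; safest from the CLI

$chunkSize = 500;
$chunks = array_chunk($array, $chunkSize);

foreach ($chunks as $i => $chunk) {
    foreach ($chunk as $firstURL) {
        $url = substr($firstURL, 0, strpos($firstURL, '?')) . "/";
        $custom->infoc($url);
    }
    // progress marker after every chunk
    echo "chunk $i done (" . count($chunk) . " urls)\n";
}
```

This does not make the fetching faster, but it stops the web-server timeout from killing the run partway through.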

custom.class.php (relevant parts)

    <?php
    public function load($url, $postData = '')
    {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, "cookie.txt");
        curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
        curl_setopt($ch, CURLOPT_AUTOREFERER, true);
        if ($postData != '') {
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);
        }
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("X-Requested-With: XMLHttpRequest"));
        $result = curl_exec($ch);
        curl_close($ch);
        return $result;
    }

    public function infoc($url)
    {
        $get_tag = $this->load($url);

        // Player ID
        $playeridTAG = '/<input type="text" id="player-(.+?)" name="playerid" value="(.+?)" \/>/';
        preg_match($playeridTAG, $get_tag, $playerID);
        // End Player ID

        // Full Name
        preg_match("/(.+?)-(.+?)\//", $url, $title);
        $fullName = ucwords(preg_replace("/-/", " ", $title[2]));
        // End Full Name

        // Total
        $totalTAG = '/<li>
                    <span>(.+?)<\/span><span class="none"><\/span>              <label>Total<\/label>
                <\/li>/';
        preg_match($totalTAG, $get_tag, $total);
        // End Total

        // note: $db must be in scope here (e.g. a global from config.php)
        $query = $db->query('SELECT * FROM playerblank WHERE playerID = '.$playerID[1].'');
        if ($query->num_rows > 0) {
            $db->query('UPDATE playerblank SET name = "'.$fullName.'", total = "'.$total[1].'" WHERE playerID = '.$playerID[1].'') or die(mysqli_error($db));
            echo "UPDATED ".$playerID[1]."";
        } else {
            $db->query('INSERT INTO playerblank SET playerID = '.$playerID[1].', name = "'.$fullName.'", total = "'.$total[1].'"') or die(mysqli_error($db));
            echo "Inserted ".$playerID[1]."";
        }
    }
    ?>
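One incremental improvement worth noting before reaching for parallelism: `load()` above creates and destroys a cURL handle for every URL, which forces a fresh TCP (and SSL) connection each time. A minimal sketch (a hypothetical `Loader` class, not part of the code above) that reuses a single handle so keep-alive connections to the same host can be recycled:

```php
<?php
// Sketch: one long-lived cURL handle shared across all requests.
// Per-request options like CURLOPT_URL are set on each call; the
// connection cache inside the handle lets keep-alive do its work.
class Loader
{
    private $ch;

    public function __construct()
    {
        $this->ch = curl_init();
        curl_setopt($this->ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($this->ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($this->ch, CURLOPT_TIMEOUT, 60);
    }

    public function load($url)
    {
        curl_setopt($this->ch, CURLOPT_URL, $url);
        return curl_exec($this->ch);
    }

    public function __destruct()
    {
        curl_close($this->ch); // release the handle once, at the end
    }
}
```

This alone will not fix a 50,000-URL run, but it removes per-request connection overhead and combines well with the parallel approaches in the answers below.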

Collecting each URL (loc) from the XML file is no problem; what I am struggling with is fetching the data with cURL for every one of those URLs without having to wait a very long time.

3 Answers:

Answer 0 (score: 2)

Try using curl_multi. There is a good example in the PHP documentation:

    // create both cURL resources
    $ch1 = curl_init();
    $ch2 = curl_init();

    // set URL and other appropriate options
    curl_setopt($ch1, CURLOPT_URL, "http://lxr.php.net/");
    curl_setopt($ch1, CURLOPT_HEADER, 0);
    curl_setopt($ch2, CURLOPT_URL, "http://www.php.net/");
    curl_setopt($ch2, CURLOPT_HEADER, 0);

    // create the multiple cURL handle
    $mh = curl_multi_init();

    // add the two handles
    curl_multi_add_handle($mh, $ch1);
    curl_multi_add_handle($mh, $ch2);

    $active = null;
    // execute the handles
    do {
        $mrc = curl_multi_exec($mh, $active);
    } while ($mrc == CURLM_CALL_MULTI_PERFORM);

    while ($active && $mrc == CURLM_OK) {
        if (curl_multi_select($mh) != -1) {
            do {
                $mrc = curl_multi_exec($mh, $active);
            } while ($mrc == CURLM_CALL_MULTI_PERFORM);
        }
    }

    // close the handles
    curl_multi_remove_handle($mh, $ch1);
    curl_multi_remove_handle($mh, $ch2);
    curl_multi_close($mh);
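The documentation example uses two fixed handles; for ~50,000 URLs you would instead keep a small "window" of handles open and replace each one as it finishes. A sketch of that idea (the window size of 10, `$urls`, and the parse/database step are placeholders, not part of the original answer):

```php
<?php
// Sketch: rolling-window curl_multi over a large URL list.
function make_handle($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 60);
    return $ch;
}

$urls = array(/* the ~50,000 <loc> values */);
$window = 10;
$mh = curl_multi_init();
$open = 0;

// prime the window with the first handles
while ($open < $window && $urls) {
    curl_multi_add_handle($mh, make_handle(array_shift($urls)));
    $open++;
}

do {
    curl_multi_exec($mh, $running);
    if (curl_multi_select($mh) == -1) {
        usleep(100000); // nothing ready yet; avoid a busy loop
    }

    // harvest finished transfers and top the window back up
    while ($done = curl_multi_info_read($mh)) {
        $ch = $done['handle'];
        $html = curl_multi_getcontent($ch);
        // ... parse $html and insert/update the database here ...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
        $open--;

        if ($urls) {
            curl_multi_add_handle($mh, make_handle(array_shift($urls)));
            $open++;
        }
    }
} while ($running || $open > 0);

curl_multi_close($mh);
```

Keeping the window small (10-20) avoids hammering the target server while still overlapping network wait time across requests.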

Answer 1 (score: 0)

Try working with an offline copy of the XML file: remove the URLs that have already been updated or inserted, then restart the script, repeating until the offline file has no URLs left. Fetch a fresh copy of the XML file whenever you need one.
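One way this suggestion could be implemented (the file name and batch size are assumptions): keep the pending URLs in a plain text file, process one batch per run, and rewrite the file with whatever remains, so each restart picks up where the last run stopped:

```php
<?php
// Sketch: resumable batch processing via a plain-text "remaining" file.
// urls_remaining.txt holds one URL per line; rerun until it is empty.
$pendingFile = 'urls_remaining.txt';
$batchSize = 200;

$urls = file($pendingFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$batch = array_slice($urls, 0, $batchSize);
$rest  = array_slice($urls, $batchSize);

foreach ($batch as $url) {
    // ... fetch $url and insert/update the database here ...
}

// persist only the unprocessed URLs for the next run
file_put_contents($pendingFile, implode("\n", $rest));
```

A cron job can then rerun the script every few minutes until the file is empty, sidestepping any single-run timeout.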

Answer 2 (score: 0)

The problem is in your "load" function: it blocks execution until a single URL is done, while you could easily load several URLs at the same time. Here is an explanation of the idea. The best way to improve performance is to load several (10-20) URLs in parallel and add new ones for loading "on the fly" as soon as one of the previous ones finishes. ParallelCurl will do this, for example:

    require_once('parallelcurl.php');

    // $max_requests = 10 or more, try to pick the best value manually
    $parallel_curl = new ParallelCurl($max_requests, $curl_options);

    // $array - the 50,000 URLs
    $in_urls = array_splice($array, 0, $max_requests);
    foreach ($in_urls as $url) {
        $parallel_curl->startRequest($url, 'on_request_done');
    }

    function on_request_done($content, $url, $ch, $search) {
        global $array, $parallel_curl;

        // here you can parse $content and save the data to the DB

        // and queue the next URL for loading
        $next_url = array_shift($array);
        if ($next_url) {
            $parallel_curl->startRequest($next_url, 'on_request_done');
        }
    }

    // This should be called when you need to wait for the requests to finish.
    $parallel_curl->finishAllRequests();