If one of the URLs is a 404, curl_multi_exec stops; how do I change that?

Time: 2010-04-27 21:24:58

Tags: php curl curl-multi

Currently, my cURL multi exec stops if one of the URLs it connects to doesn't work, so a couple of questions:

1: Why does it stop? That doesn't make sense to me.

2: How can I make it keep going?

Edit: Here is my code:

    $SQL = mysql_query("SELECT url FROM shells") ;
    $mh = curl_multi_init();
    $handles = array();
    while($resultSet = mysql_fetch_array($SQL)){       
            //load the urls and send GET data                     
            $ch = curl_init($resultSet['url'] . $fullcurl); 
            // Give each request five seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);           
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
    }

    // Create a status variable so we know when exec is done.
    $running = null;
    //execute the handles
    do {
      // Call exec.  This call is non-blocking, meaning it works in the background.
      curl_multi_exec($mh,$running);
      // Sleep while it's executing.  You could do other work here, if you have any.
      sleep(2);
    // Keep going until it's done.
    } while ($running > 0);

    // For loop to remove (close) the regular handles.
    foreach($handles as $ch)
    {
      // Remove the current array handle.
      curl_multi_remove_handle($mh, $ch);
    } 
    // Close the multi handle
    curl_multi_close($mh);

2 Answers:

Answer 0 (Score: 7):

Here you go:

$urls = array
(
    0 => 'http://bing.com',
    1 => 'http://yahoo.com/thisfiledoesntexistsoitwill404.php', // 404
    2 => 'http://google.com',
);

$mh = curl_multi_init();
$handles = array();

foreach ($urls as $url)
{
    $handles[$url] = curl_init($url);

    curl_setopt($handles[$url], CURLOPT_TIMEOUT, 3);
    curl_setopt($handles[$url], CURLOPT_AUTOREFERER, true);
    // Treat HTTP codes >= 400 (such as the 404) as a cURL error for that handle only
    curl_setopt($handles[$url], CURLOPT_FAILONERROR, true);
    curl_setopt($handles[$url], CURLOPT_FOLLOWLOCATION, true);
    // Return the body from curl_multi_getcontent() instead of printing it
    curl_setopt($handles[$url], CURLOPT_RETURNTRANSFER, true);
    curl_setopt($handles[$url], CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($handles[$url], CURLOPT_SSL_VERIFYPEER, false);

    curl_multi_add_handle($mh, $handles[$url]);
}

$running = null;

// Poll until every transfer has finished; usleep keeps the loop from busy-spinning
do {
    curl_multi_exec($mh, $running);
    usleep(200000);
} while ($running > 0);

foreach ($handles as $key => $value)
{
    $handles[$key] = false;

    // Only keep the body for handles that finished without error; the 404 entry stays false
    if (curl_errno($value) === 0)
    {
        $handles[$key] = curl_multi_getcontent($value);
    }

    curl_multi_remove_handle($mh, $value);
    curl_close($value);
}

curl_multi_close($mh);

echo '<pre>';
print_r(array_map('htmlentities', $handles));
echo '</pre>';

Returns:

Array
(
    [http://bing.com] => <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html...
    [http://yahoo.com/thisfiledoesntexistsoitwill404.php] => 
    [http://google.com] => <!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=ISO-8859-1"><title>Google</title>...
)

As you can see, all the URLs are fetched, even Google.com, which comes after the 404 Yahoo page.
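
Note that it is CURLOPT_FAILONERROR that makes the 404 entry come back empty: cURL discards the body and sets a per-handle error, which the curl_errno() check then filters out. If you would rather keep or log the 404 responses instead, one variation (a minimal sketch, not tested against the code above) is to leave CURLOPT_FAILONERROR off and check each handle's status code with curl_getinfo() in the cleanup loop:

    foreach ($handles as $key => $ch)
    {
        // Ask cURL which HTTP status this handle ended with
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);

        if (curl_errno($ch) !== 0 || $status >= 400)
        {
            // Transport error or HTTP error: record the failure; the other transfers are unaffected
            $handles[$key] = false;
        }
        else
        {
            $handles[$key] = curl_multi_getcontent($ch);
        }

        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }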

Answer 1 (Score: 0):

I don't have a platform to test this, but most of the examples I've seen compare against the constant returned by curl_multi_exec instead of checking the $running variable.

//execute the handles
do {
  // Call exec.  This call is non-blocking, meaning it works in the background.
  $mrc = curl_multi_exec($mh,$running);
  // Sleep while it's executing.  You could do other work here, if you have any.
  sleep(2);
// Keep going until it's done.
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

I hope this works for you.
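
For reference, the pattern commonly shown in the PHP manual's curl_multi_exec examples combines both checks: it loops while the return value is CURLM_CALL_MULTI_PERFORM and uses curl_multi_select() to wait for socket activity instead of sleeping. A sketch of that loop, assuming $mh is the multi handle built in the question:

    $running = null;

    // Run curl_multi_exec until it no longer asks to be called again immediately
    do {
        $mrc = curl_multi_exec($mh, $running);
    } while ($mrc == CURLM_CALL_MULTI_PERFORM);

    // While transfers are still active, wait for activity on any socket, then do more work
    while ($running && $mrc == CURLM_OK) {
        if (curl_multi_select($mh) != -1) {
            do {
                $mrc = curl_multi_exec($mh, $running);
            } while ($mrc == CURLM_CALL_MULTI_PERFORM);
        }
    }

Either way, a failing handle only affects that handle's result; the multi handle itself keeps processing the remaining URLs.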