No difference between curl_easy and curl_multi

Time: 2014-02-02 18:17:30

Tags: c++ libcurl

I am using libcurl to perform HTTP requests from my C++ program to my PHP script. The first, easy_ version below works fine, but it is slow (about 12 requests per second against localhost).
Nothing strange there - I got similar results with ab -n 1000 -c 1.
On the other hand, ab -n 1000 -c 100 performs much better, at about 600 requests per second.
The problem is that the libcurl multi interface does not seem to be concurrent at all. I used only slightly modified example code, and the result is again about 12 req/s.

Do I understand curl_multi correctly? How can I achieve results similar to ab's?
PS. I know the two pieces of code differ a bit, but almost all of the time is spent in curl's work anyway.

easy_ way:

CURL *curl;
CURLcode response;              // libcurl return code

curl = curl_easy_init();

if(curl)
{
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/process.php");

    while(true)
    {
        if(!requestsQueue.empty())              
        {
            mtx.lock();
            string data = requestsQueue.front();                                            
            requestsQueue.pop();
            mtx.unlock();

            const char *post = data.c_str();                                    // get a C string pointer for libcurl

            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post);                   

            do
            {
                response = curl_easy_perform(curl);
            } while(response != CURLE_OK);

        }
        else
        {
            // there are no requests to perform, so wait for them
            cout << "Sleeping...\n";
            sleep(2);
            continue;
        }
    }

    //curl_easy_cleanup(curl);          
}
else
{
    cout << "CURL init failed!\n";
}

multi_ way:

CURLM *multi_handle;
int still_running; /* keep number of running handles */

/* init a multi stack */
multi_handle = curl_multi_init();

/* add the individual transfers */
for(int i=1;i<=300;i++)
{
    CURL *handle;
    handle = curl_easy_init();
    curl_easy_setopt(handle, CURLOPT_URL, "http://localhost/process.php");
    curl_multi_add_handle(multi_handle, handle);
}

/* we start some action by calling perform right away */
curl_multi_perform(multi_handle, &still_running);

do {
    struct timeval timeout;
    int rc; /* select() return code */

    fd_set fdread;
    fd_set fdwrite;
    fd_set fdexcep;
    int maxfd = -1;

    long curl_timeo = -1;

    FD_ZERO(&fdread);
    FD_ZERO(&fdwrite);
    FD_ZERO(&fdexcep);

    /* set a suitable timeout to play around with */
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;

    curl_multi_timeout(multi_handle, &curl_timeo);
    if(curl_timeo >= 0) {
        timeout.tv_sec = curl_timeo / 1000;
        if(timeout.tv_sec > 1)
            timeout.tv_sec = 1;
        else
            timeout.tv_usec = (curl_timeo % 1000) * 1000;
    }

    /* get file descriptors from the transfers */
    curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);

    /* In a real-world program you OF COURSE check the return code of the
       function calls.  On success, the value of maxfd is guaranteed to be
       greater or equal than -1.  We call select(maxfd + 1, ...), specially in
       case of (maxfd == -1), we call select(0, ...), which is basically equal
       to sleep. */

    rc = select(maxfd+1, &fdread, &fdwrite, &fdexcep, &timeout);

    switch(rc) {
    case -1:
        /* select error */
        break;
    case 0:
    default:
        /* timeout or readable/writable sockets */
        curl_multi_perform(multi_handle, &still_running);
        break;
    }
} while(still_running);

curl_multi_cleanup(multi_handle);

/* note: each easy handle added above would still need
   curl_multi_remove_handle() and curl_easy_cleanup() */

return 0;

1 answer:

Answer 0 (score: 1):

curl_multi can indeed drive any number of transfers in parallel, but it does all of that work with the same single thread. The side effect is that if anything anywhere takes a long time, that operation blocks all other transfers.
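
To make the single-thread point concrete, here is a minimal driver sketch; it uses curl_multi_wait(), which is not in the question's code, and the drive_all name is only for illustration:

#include <curl/curl.h>

/* Minimal single-threaded multi driver, sketched with curl_multi_wait()
   instead of the select() loop from the question. One thread drives every
   transfer added to `multi`, so any call that blocks inside libcurl -
   such as a synchronous name lookup - stalls all of them at once. */
static void drive_all(CURLM *multi)
{
    int still_running = 0;

    do {
        curl_multi_perform(multi, &still_running);
        /* wait up to 1000 ms for activity on any transfer's socket */
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
    } while(still_running);
}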

One example of such a blocking operation, and one that sometimes causes exactly what you describe, is the name resolver phase. On a typical *nix system, libcurl's default resolver choice is the standard blocking one.
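
One way to check whether the resolver phase really is where the time goes is to read libcurl's per-phase timings for each finished transfer. A sketch, assuming the multi handle from the question's loop is passed in (the drain_and_report name is only illustrative):

#include <curl/curl.h>
#include <cstdio>

/* Drain finished transfers from a multi handle and print where each one
   spent its time. A name-lookup time close to the total time points at
   the blocking resolver. Sketch only; error handling omitted. */
static void drain_and_report(CURLM *multi)
{
    CURLMsg *msg;
    int msgs_left = 0;

    while((msg = curl_multi_info_read(multi, &msgs_left))) {
        if(msg->msg != CURLMSG_DONE)
            continue;

        double namelookup = 0.0, total = 0.0;
        curl_easy_getinfo(msg->easy_handle, CURLINFO_NAMELOOKUP_TIME, &namelookup);
        curl_easy_getinfo(msg->easy_handle, CURLINFO_TOTAL_TIME, &total);
        printf("name lookup %.3f s of %.3f s total\n", namelookup, total);

        curl_multi_remove_handle(multi, msg->easy_handle);
        curl_easy_cleanup(msg->easy_handle);
    }
}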

You can, however, build libcurl to use the c-ares or the threaded resolver backend instead, which avoids this blocking behaviour and allows much better concurrency.
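
Whether a given libcurl build already has an asynchronous resolver can be checked at runtime with curl_version_info(); a rebuild with ./configure --enable-ares or --enable-threaded-resolver sets the CURL_VERSION_ASYNCHDNS feature bit. A small sketch:

#include <curl/curl.h>
#include <cstdio>

int main()
{
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

    /* CURL_VERSION_ASYNCHDNS is set for both the threaded resolver and the
       c-ares backend; info->ares is non-NULL only when built with c-ares */
    printf("libcurl %s, async DNS: %s, c-ares: %s\n",
           info->version,
           (info->features & CURL_VERSION_ASYNCHDNS) ? "yes" : "no",
           info->ares ? info->ares : "not built in");
    return 0;
}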