I have an interesting problem and I am not sure what the root cause is. I have one server with two virtual hosts, A and B, listening on ports 80 and 81 respectively. I wrote a simple PHP script on A like this:
<?php
echo "from A server\n";
And another simple PHP script on B:
<?php
echo "B server:\n";
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php");
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
// close curl resource to free up system resources
curl_close($ch);
echo $output;
When I issue concurrent requests with ab, I get the following result:
ab -n 10 -c 5 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient).....done
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 5
Time taken for tests: 2.680 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 1720 bytes
HTML transferred: 260 bytes
Requests per second: 3.73 [#/sec] (mean)
Time per request: 1340.197 [ms] (mean)
Time per request: 268.039 [ms] (mean, across all concurrent requests)
Transfer rate: 0.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 2 1339 1408.8 2676 2676
Waiting: 2 1339 1408.6 2676 2676
Total: 3 1340 1408.8 2676 2677
Percentage of the requests served within a certain time (ms)
50% 2676
66% 2676
75% 2676
80% 2676
90% 2677
95% 2677
98% 2677
99% 2677
100% 2677 (longest request)
However, issuing 1000 requests at concurrency level 1 is very fast:
$ ab -n 1000 -c 1 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 1
Time taken for tests: 1.659 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 172000 bytes
HTML transferred: 26000 bytes
Requests per second: 602.86 [#/sec] (mean)
Time per request: 1.659 [ms] (mean)
Time per request: 1.659 [ms] (mean, across all concurrent requests)
Transfer rate: 101.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 1 1 10.3 1 201
Waiting: 1 1 10.3 1 201
Total: 1 2 10.3 1 201
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 2
100% 201 (longest request)
Can anyone explain why this happens? I really want to know the root cause. Is this a curl issue? It doesn't look like a network bottleneck or an open-files limit, since the concurrency is only 5. By the way, I also tried the same thing with Guzzle and the result is the same. I run ab on my laptop and the server is on the same local network. And it certainly has nothing to do with network bandwidth, because the request between hosts A and B is made over localhost.
I modified the code to make testing more flexible:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();
        // set url
        curl_setopt($ch, CURLOPT_URL, $url);
        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        // $output contains the output string
        $output = curl_exec($ch);
        curl_close($ch);
        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}
echo "app server:\n";
I tried file_get_contents, but there is no noticeable difference after switching to it. All of the methods work fine at concurrency level 1, but they all start to degrade as soon as the concurrency goes up.
I think I found something related to this problem, so I just posted another question, concurrent curl could not resolve host. That might be the root cause, but I haven't gotten any answer yet.
After struggling with this for so long, I believe it is definitely related to name resolution. Here is the PHP script that works even at concurrency level 500:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();
        // set url
        curl_setopt($ch, CURLOPT_URL, $url);
        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_PROXY, 'localhost');
        // $output contains the output string
        $output = curl_exec($ch);
        curl_close($ch);
        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url, ['proxy' => 'localhost']);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}
echo "app server:\n";
What really matters is curl_setopt($ch, CURLOPT_PROXY, 'localhost'); and $response = $client->request('GET', $url, ['proxy' => 'localhost']);. They tell curl to use localhost as a proxy.
Here is the result of the ab test:
ab -n 1000 -c 500 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 182 bytes
Concurrency Level: 500
Time taken for tests: 0.251 seconds
Complete requests: 1000
Failed requests: 184
(Connect: 0, Receive: 0, Length: 184, Exceptions: 0)
Non-2xx responses: 816
Total transferred: 308960 bytes
HTML transferred: 150720 bytes
Requests per second: 3985.59 [#/sec] (mean)
Time per request: 125.452 [ms] (mean)
Time per request: 0.251 [ms] (mean, across all concurrent requests)
Transfer rate: 1202.53 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 4.9 5 14
Processing: 9 38 42.8 22 212
Waiting: 8 38 42.9 22 212
Total: 11 44 44.4 31 214
Percentage of the requests served within a certain time (ms)
50% 31
66% 37
75% 37
80% 38
90% 122
95% 135
98% 207
99% 211
100% 214 (longest request)
But why does name resolution fail at concurrency level 5 when localhost is not used as a proxy?
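One way to check whether the stall really comes from resolving the name localhost (this snippet is my own sketch, not part of the original tests) is to pin the lookup with CURLOPT_RESOLVE so curl never touches the resolver:

<?php
// Minimal sketch: the same request as in b.php, but the lookup for
// localhost:81 is pinned to 127.0.0.1 via CURLOPT_RESOLVE, so curl
// skips name resolution entirely.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_RESOLVE, ['localhost:81:127.0.0.1']);
$output = curl_exec($ch);
curl_close($ch);
echo $output;

If the ab numbers stay flat at concurrency 5 with this version, the resolver is involved; if they still degrade, the bottleneck is somewhere else.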
The virtual host configuration is very simple and clean; almost everything is the default. I don't use iptables on this server, and I haven't configured anything special.
server {
    listen 81 default_server;
    listen [::]:81 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
Found something interesting! The first ab test takes about 3 seconds, but if you run another ab test right after it, the second one is very fast.
Without using localhost as a proxy:
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only.
Using localhost as a proxy:
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
I think this still means the problem is name resolution. But why?
Hypothesis: nginx is not listening on localhost:81. I tried adding listen 127.0.0.1:81; to the nginx config, but it made no difference.
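A quick way to test that hypothesis (again my own sketch, not a test from the original post) is to hit the backend by IP, which involves no hostname lookup at all:

<?php
// Sketch: the same request as case 1 above, but against the loopback IP.
// If this stays fast at concurrency 5, the port 81 listener is fine and
// the suspect is whatever happens during the localhost lookup.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://127.0.0.1:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
echo curl_exec($ch);
curl_close($ch);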
Update: I found that I made some mistakes when using curl with a proxy; it doesn't actually work! I'll update with more details later.
Solved. It has nothing to do with the proxy or anything else. The root cause is pm.start_servers in php-fpm's www.conf.
Answer (score: 1)
OK, after so many days of trying to solve this problem, I finally found the cause, and it is not name resolution. I can't believe it took so many days to track down the root cause: the value of pm.start_servers in php-fpm's www.conf. Initially I set pm.start_servers to 3, which is why the ab test against localhost always got worse beyond concurrency level 3. php-cli, on the other hand, has no such limit on the number of PHP processes, which is why the php-cli tests always performed well. After increasing pm.start_servers to 5, the ab test result became as fast as php-cli. If this is the reason your php-fpm is slow, you should also consider changing pm.min_spare_servers, pm.max_spare_servers, pm.max_children, and anything related.
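A plausible explanation for why such a small pool stalls here (my inference, not stated in the answer) is that both vhosts share the same php-fpm pool through unix:/run/php/php7.0-fpm.sock, so every b.php request holds one worker while it waits for a.php, which needs another worker from the same pool; with only 3 workers started and 5 concurrent clients, the pool briefly starves until php-fpm spawns more children. For reference, a minimal sketch of the relevant pool settings in www.conf; the path and the exact numbers are assumptions to illustrate the knobs, not values from the original setup:

; /etc/php/7.0/fpm/pool.d/www.conf  (typical Ubuntu path; adjust for your setup)
pm = dynamic
; workers created at startup: the value that was too low (3) in this question
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
; hard upper bound on concurrent php-fpm workers
pm.max_children = 20

After editing the pool file, php-fpm has to be reloaded (for example with systemctl reload php7.0-fpm on this distribution) before re-running the ab test.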