So, I have a simple Flask API application running on gunicorn with tornado workers. The gunicorn command line is:
gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b 0.0.0.0:5005 --pid /tmp/gunicorn_api.pid api:APP
When I run Apache Benchmark from another server directly against gunicorn, here are the relevant results:
ab -n 1000 -c 1000 'http://****:5005/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second: 2823.71 [#/sec] (mean)
Time per request: 354.144 [ms] (mean)
Time per request: 0.354 [ms] (mean, across all concurrent requests)
Transfer rate: 2669.29 [Kbytes/sec] received
So we're getting close to 3k reqs/sec.
Now I need SSL, so I'm running nginx as a reverse proxy. Here is the same benchmark against nginx on the same server:
ab -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second: 355.16 [#/sec] (mean)
Time per request: 2815.621 [ms] (mean)
Time per request: 2.816 [ms] (mean, across all concurrent requests)
Transfer rate: 352.73 [Kbytes/sec] received
That's an 87.4% drop in performance, but for the life of me I can't figure out what's wrong with my nginx setup. Here it is:
upstream sdn_api {
    server 127.0.0.1:5005;
    keepalive 100;
}

server {
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/ssl/certs/api.sdninja.com.crt;
    ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!kEDH:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
    ssl_session_cache shared:SSL:10m;
    server_name api.*****.com;
    access_log /var/log/nginx/sdn_api.log;

    location / {
        proxy_pass http://sdn_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 100M;
        client_body_buffer_size 1m;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 256 16k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 300;
    }
}
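A side note on the config above: the keepalive 100; directive in the upstream block only takes effect if nginx talks HTTP/1.1 to the backend and clears the Connection header; by default proxy_pass uses HTTP/1.0 with "Connection: close", so every proxied request opens a fresh TCP connection to gunicorn. A sketch of the two directives that would need to be added inside the location block (the rest of the proxy settings stay as shown above):

```nginx
location / {
    proxy_pass http://sdn_api;
    # Required for the upstream "keepalive" connection pool to be used;
    # without these, nginx proxies with HTTP/1.0 and closes each connection.
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    # ... remaining proxy_* settings unchanged ...
}
```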
My nginx.conf:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip off;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Does anyone know why this configuration is running so slowly? Thanks!
Answer 0 (score: 2):
Most of the HTTPS overhead is in the handshake. Pass -k to ab to enable keep-alive (persistent) connections. You'll find the benchmark is significantly faster then.
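Concretely, that means rerunning the asker's HTTPS benchmark with only the -k flag added, so each of the 1000 concurrent connections performs one TLS handshake and is then reused for subsequent requests:

```
ab -n 1000 -c 1000 -k 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
```

Note that -k also exercises the nginx keepalive_timeout setting already present in the config, so no server-side change is needed for this comparison.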