I have set up an nginx server that forwards requests via proxy_pass (configured in /etc/nginx/default.d/my.conf) to an AWS ELB, which sits in front of a Tomcat application running on EC2 instances. Under JMeter load tests at 15k-20k throughput everything works fine, but when I raise the throughput further, nginx starts logging the following upstream error:
2019/06/06 15:54:32 [error] 31483#0: *924684 connect() failed (110: Connection timed out) while connecting to upstream, client: 172.29.94.68, server: _, request: "GET /example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl HTTP/1.1", upstream: "https://XX.XX.XX.XX:443/example1/model/v2/4SBWFWMMFWKWDWFS178S94DFDE/imageurl", host: "myapp.example.com"
I have already tried many changes in the nginx.conf file; the full config is shown below. Restarting or reloading the nginx service makes the errors go away for a while, but that is not a permanent solution.
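For context, the proxy block in my.conf has roughly the following shape. This is an illustrative sketch only: the ELB hostname, location path, and timeout values below are placeholders, not the exact contents of the file.

    # /etc/nginx/default.d/my.conf -- illustrative sketch, not the exact file.
    # The ELB hostname and timeout values are placeholders.
    location / {
        proxy_pass https://my-elb-123456.us-east-1.elb.amazonaws.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fail faster than the kernel's TCP timeout when the upstream is unreachable
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
    }

My full nginx.conf: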
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 8;
error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

worker_rlimit_nofile 30000;

events {
    worker_connections 50000;
    use epoll;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Default is HTTP/1.0; keepalive is only enabled in HTTP/1.1.
    proxy_http_version 1.1;

    # Clear the Connection header the client may send;
    # a "close" value would disable upstream keepalive.
    proxy_set_header Connection "";

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
The nginx service needs to sustain this high throughput continuously. How can I fix these upstream connect timeouts permanently, rather than restarting nginx each time?
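One pattern I am considering, though I have not verified it fixes this: nginx resolves a hostname in proxy_pass only once, at startup or reload, and ELB IP addresses rotate over time, so the cached IPs can go stale. That would match the symptom that restarting nginx clears the errors. Using a variable in proxy_pass together with a resolver directive forces nginx to re-resolve the name at request time. A minimal sketch, assuming the ELB DNS name (a placeholder below) and the Amazon-provided VPC resolver at 169.254.169.253:

    # Illustrative only -- the ELB hostname is a placeholder.
    # With a variable in proxy_pass, nginx resolves the name at request time
    # instead of caching the IPs from startup.
    resolver 169.254.169.253 valid=10s;   # Amazon-provided VPC DNS

    location / {
        set $elb_upstream my-elb-123456.us-east-1.elb.amazonaws.com;
        proxy_pass https://$elb_upstream;
        proxy_set_header Host $host;
    }

Two caveats: with a variable, proxy_pass forwards the request URI unchanged unless an explicit URI is given; and while the http block above already sets proxy_http_version 1.1 and clears the Connection header (the prerequisites for upstream keepalive), the keepalive directive itself only applies inside an upstream {} block, which this pattern does not use.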