I am benchmarking the following nginx / node.js topologies:
For both benchmarks, wrk is used with the following configuration:
wrk -t12 -c20 -d20s --timeout 2s
All node.js instances are identical. On each HTTP GET request, they iterate over a given number n and increment a variable on each loop.
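For reference, a minimal sketch of what such a test server might look like (purely illustrative; the port handling, the query parameter name and the handler are assumptions, since the question does not include the actual code):

    // Hypothetical sketch of the benchmarked node.js server: it busy-loops
    // n times per GET request and increments a counter on each iteration.
    const http = require('http');
    const url = require('url');

    // Port is assumed to be passed on the command line, e.g. `node server.js 3001`
    const port = process.argv[2] ? Number(process.argv[2]) : 3001;

    http.createServer((req, res) => {
      // n is assumed to come in as a query parameter, e.g. /?n=1000000
      const n = Number(url.parse(req.url, true).query.n) || 1000000;
      let counter = 0;
      for (let i = 0; i < n; i++) {
        counter++; // increment a variable on each loop
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(String(counter));
    }).listen(port);

Each instance would then be started on its own port, e.g. node server.js 3001 and node server.js 3002.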
When I execute the test cases, I get some surprising results, listed below. I do not understand why the dual-node.js setup (topology 2) performs worse with 1 million iterations; it is even worse than the same 1-million-iteration run on topology 1:
1037 req/s (single) vs. 813 req/s (LB)
Of course I expected a little overhead, because in the single setup there is no nginx in front of the node.js instance, but the test results still look strange.
The calls with 10 and 5 million iterations seem fine, since the throughput increases as expected.
Is there a reasonable explanation for this behavior?
The tests are executed on a single machine; each node.js instance listens on a different port.
Nginx uses the standard configuration, with only:
Scenario 1 (single node.js server):

    n [millions]    req/s    avg/max [ms]     requests
    10              134      87.81/166.28     2633
    5               271      44.12/88.48      5413
    1               1037     11.48/24.99      20049

Scenario 2 (nginx as load balancer in front of 2 node.js servers):

    n [millions]    req/s    avg/max [ms]     requests
    10              220      51.95/124.87     4512
    5               431      27.79/152.93     8376
    1               813      6.85/35.64       16156    --> ???
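For context, topology 2 roughly corresponds to an nginx setup like the following hypothetical sketch (the ports 3001/3002, the listen port and the upstream name are assumptions; the exact directives used for the test are not shown above):

    # Hypothetical reconstruction of topology 2: nginx round-robin
    # load balancing across two local node.js instances.
    upstream node_backend {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://node_backend;
        }
    }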
Answer (score 0):
I have been digging into this, and it is probably related to the default NGINX configuration not being efficient enough for this workload.
Using HTTP/1.1 spares the overhead of establishing a connection between nginx and node.js with every proxied request and has a significant impact on response latency.
So this could be one of the reasons, if you are proxying with HTTP/1.0 (the nginx default for proxied connections).
Another interesting feature is keepalive: it sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process.
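As a sketch, assuming an upstream block like the one shown earlier, these two settings could be applied like this (the keepalive value of 64 is just an example):

    upstream node_backend {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        # keep up to 64 idle connections to the upstreams open per worker process
        keepalive 64;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://node_backend;
            # proxy with HTTP/1.1 instead of the default HTTP/1.0 ...
            proxy_http_version 1.1;
            # ... and clear the Connection header so keepalive connections are actually reused
            proxy_set_header Connection "";
        }
    }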
Sources:
http://www8.org/w8-papers/5c-protocols/key/key.html#SECTION00050000000000000000
https://engineering.gosquared.com/optimising-nginx-node-js-and-networking-for-heavy-workloads