Tornado, Nginx, Apache ab - apr_socket_recv: Connection reset by peer (104)

Time: 2012-05-16 21:47:47

Tags: python nginx amazon-ec2 tornado load-testing

I am running nginx and Tornado on a c1.medium instance.

When I run ab, my output is below. Nginx does not hold up, and tweaking the nginx config file has not helped. If I bypass nginx and hit a single Tornado port directly, for example

  http://127.0.0.1:8050/pixel?tt=ff

then it is fast (see the very bottom). This must be an nginx problem, so how do I fix it? Below is the nginx conf file.

root@ip-10-130-167-230:/etc/service# ab -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 9100 requests completed

This should be blazing fast, but it is not.

I have set the following parameters:

ulimit is at 100000

# General gigabit tuning:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
# this gives the kernel more memory for tcp
# which you need with many (100k+) open socket connections
net.ipv4.tcp_mem = 50576   64768   98152
net.core.netdev_max_backlog = 2500
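
For reference, a minimal sketch of how these values would typically be applied, assuming the sysctl lines live in /etc/sysctl.conf and the file-descriptor limit is raised in the shell that starts the servers:

# reload kernel parameters from /etc/sysctl.conf
sysctl -p
# raise the open-file limit for the current shell session
ulimit -n 100000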

Here is my nginx conf:

user www-data;
worker_processes 1;  # 2*number of cpus
pid /var/run/nginx.pid;
worker_rlimit_nofile 32768;
events {
         worker_connections  30000;
         multi_accept on;
         use epoll;
}

http {
        upstream frontends {
          server 127.0.0.1:8050;
          server 127.0.0.1:8051;
        }
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Only retry if there was a communication error, not a timeout
        # on the Tornado server (to avoid propagating "queries of death"
        # to all frontends)
        proxy_next_upstream error;

        server {
        listen   80;
        server_name 127.0.0.1;
                ##For tornado
                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
        }
}

If I run ab bypassing nginx:

ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff



root@ip-10-130-167-230:/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer# ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        TornadoServer/2.2.1
Server Hostname:        127.0.0.1
Server Port:            8050

Document Path:          /pixel?tt=ff
Document Length:        42 bytes

Concurrency Level:      1000
Time taken for tests:   52.436 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      31200000 bytes
HTML transferred:       4200000 bytes
Requests per second:    1907.08 [#/sec] (mean)
Time per request:       524.363 [ms] (mean)
Time per request:       0.524 [ms] (mean, across all concurrent requests)
Transfer rate:          581.06 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  411 1821.7      0   21104
Processing:    23   78 121.2     65    5368
Waiting:       22   78 121.2     65    5368
Total:         53  489 1845.0     65   23230

Percentage of the requests served within a certain time (ms)
  50%     65
  66%     69
  75%     78
  80%     86
  90%    137
  95%   3078
  98%   3327
  99%   9094
 100%  23230 (longest request)


The nginx error log shows:

2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"

Output when running ab with the -v 10 option:
GIF89a
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Wed, 16 May 2012 21:56:50 GMT
Content-Type: image/gif
Content-Length: 42
Connection: close
Etag: "d5fceb6532643d0d84ffe09c40c481ecdf59e15a"
Server: TornadoServer/2.2.1
Set-Cookie: rtbhui=867bccde-2bc0-4518-b422-8673e07e19f6; Domain=rtb.rtbhui.com; expires=Fri, 16 May 2014 21:56:50 GMT; Path=/

2 Answers:

Answer 0 (score: 2):

I ran into the same problem when running ApacheBench against a Sinatra application on WEBrick. I found the answer here.

It is actually a problem with Apache itself.

The bug has been removed in later versions of Apache. Try downloading it here.
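
A quick way to confirm which ab build is currently installed before upgrading (a small check added here, not part of the original answer):

# print the ApacheBench version string
ab -V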

Answer 1 (score: 0):

I had the same problem, and when I searched the logs for information I found these lines:

Oct 15 10:41:30 bal1 kernel: [1031008.706185] nf_conntrack: table full, dropping packet.
Oct 15 10:41:31 bal1 kernel: [1031009.757069] nf_conntrack: table full, dropping packet.
Oct 15 10:41:32 bal1 kernel: [1031009.939489] nf_conntrack: table full, dropping packet.
Oct 15 10:41:32 bal1 kernel: [1031010.255115] nf_conntrack: table full, dropping packet.

In my particular case the conntrack module was in use by iptables, because the same server also runs a firewall.

One solution is to unload the conntrack module; another, much simpler, is to add these two lines to the firewall policy:

iptables -t raw -I PREROUTING -p tcp  -j NOTRACK
iptables -t raw -I OUTPUT -p tcp  -j NOTRACK
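
As a usage note (my own suggestion, not part of the original answer): instead of disabling tracking entirely, the conntrack table can also be enlarged so it stops overflowing. A minimal sketch, assuming the nf_conntrack module is loaded:

# check the current limit, then raise it (262144 is an illustrative value)
sysctl net.netfilter.nf_conntrack_max
sysctl -w net.netfilter.nf_conntrack_max=262144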