I'm stress-testing my hand-built file server with siege. It works well with small files (under 1KB), but when testing with a 1MB file it does not work as expected.
Here is the result of a test with a small file:
neevek@~$ siege -c 1000 -r 10 -b http://127.0.0.1:9090/1KB.txt
** SIEGE 2.71
** Preparing 1000 concurrent users for battle.
The server is now under siege.. done.
Transactions: 10000 hits
Availability: 100.00 %
Elapsed time: 9.17 secs
Data transferred: 3.93 MB
Response time: 0.01 secs
Transaction rate: 1090.51 trans/sec
Throughput: 0.43 MB/sec
Concurrency: 7.29
Successful transactions: 10000
Failed transactions: 0
Longest transaction: 1.17
Shortest transaction: 0.00
Here is the result of a test with the 1MB file:
neevek@~$ siege -c 1000 -r 10 -b http://127.0.0.1:9090/1MB.txt
** SIEGE 2.71
** Preparing 1000 concurrent users for battle.
The server is now under siege...[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
[error] socket: unable to connect sock.c:222: Connection reset by peer
[error] socket: unable to connect sock.c:222: Connection reset by peer
[error] socket: unable to connect sock.c:222: Connection reset by peer
[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
[error] socket: unable to connect sock.c:222: Connection reset by peer
[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
[error] socket: read error Connection reset by peer sock.c:460: Connection reset by peer
When siege terminates with the errors above, my file server keeps spinning with a fixed number of WRITABLE SelectionKeys, i.e. Selector.select() keeps returning the same number, e.g. 50.
From the test above, it looks like my file server cannot accept more than about 50 concurrent connections: when running the test with the small file, I noticed that the server selects 1 or 2 SelectionKeys per call, but with the large file it selects up to 50 at a time.
I tried increasing the backlog passed to Socket.bind(), to no avail.
What could be the cause of the problem?
Edit
More info:
When testing with the 1MB file, I noticed that siege terminates with a Broken pipe error, and that the file server accepts only 198 connections, even though I specified 1000 concurrent connections x 10 rounds (1000 * 10 = 10000) to flood the server.
Edit 2
I have reproduced the same problem with the following code (a single class). In this code I only accept connections; I never read or write. The siege client terminates with Connection reset or Broken pipe errors before the connections time out. I also noticed that the Selector can only ever select fewer than 1000 keys. You can try the code below to witness the problem.
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class TestNIO implements Runnable {
    ServerSocketChannel mServerSocketChannel;
    Selector mSelector;

    public static void main(String[] args) throws Exception {
        new TestNIO().start();
    }

    public TestNIO() throws Exception {
        mSelector = Selector.open();
    }

    public void start() throws Exception {
        mServerSocketChannel = ServerSocketChannel.open();
        mServerSocketChannel.configureBlocking(false);
        mServerSocketChannel.socket().bind(new InetSocketAddress(9090));
        // note: SO_TIMEOUT has no effect on a channel in non-blocking mode
        mServerSocketChannel.socket().setSoTimeout(150000);
        mServerSocketChannel.register(mSelector, SelectionKey.OP_ACCEPT);

        int port = mServerSocketChannel.socket().getLocalPort();
        String serverName = "http://" + InetAddress.getLocalHost().getHostName() + ":" + port;
        System.out.println("Server start listening on " + serverName);

        new Thread(this).start();
    }

    @Override
    public void run() {
        try {
            Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
            while (true) {
                int num = mSelector.select();
                System.out.println("SELECT = " + num + "/" + mSelector.keys().size());
                if (num > 0) {
                    Iterator<SelectionKey> keys = mSelector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        final SelectionKey key = keys.next();
                        if (key.isValid() && key.isAcceptable()) {
                            accept(key);
                        }
                    }
                    // clear the selected keys
                    mSelector.selectedKeys().clear();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void accept(SelectionKey key) throws IOException {
        SocketChannel socketChannel = mServerSocketChannel.accept();
        if (socketChannel == null) {
            return; // in non-blocking mode accept() may return null
        }
        socketChannel.configureBlocking(false);
        socketChannel.socket().setKeepAlive(true);
        // since we are connected, we are ready to READ
        socketChannel.register(mSelector, SelectionKey.OP_READ);
    }
}
Answer 0 (score: 1)
It is actually related to the default backlog value set for the ServerSocketChannel. You can fix the problem by passing the backlog value as the second parameter to the bind method:
mServerSocketChannel.socket().bind(new InetSocketAddress(9090), backlogValue)
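For illustration, here is a minimal, self-contained sketch of binding with an explicit backlog. The class name BacklogDemo, the use of port 0 (an ephemeral port), and the backlog of 1000 are assumptions for the demo, not part of the original answer:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        // Port 0 lets the OS pick an ephemeral port for this demo.
        // The second argument is the requested accept-queue backlog;
        // the OS may silently cap it (on Linux, by net.core.somaxconn),
        // so the effective queue can be shorter than requested.
        server.socket().bind(new InetSocketAddress(0), 1000);
        System.out.println("bound=" + server.socket().isBound()
                + " port=" + server.socket().getLocalPort());
        server.close();
    }
}
```

Connections that arrive while the accept queue is full are what the client sees as "Connection reset by peer", which matches the siege errors above.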
Answer 1 (score: 0)
Check the ulimit and the hard limit on the number of open files (file descriptors).
I guess you are running Linux; you can check limits.conf (/etc/security/limits.conf).
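As a rough illustration of checking that limit from Java (a sketch assuming a Unix-like system where /bin/sh provides the POSIX ulimit builtin; the class name FdLimitCheck is hypothetical):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class FdLimitCheck {
    public static void main(String[] args) throws Exception {
        // The JDK has no portable API for resource limits, so shell out
        // to the POSIX "ulimit" shell builtin via /bin/sh.
        Process p = new ProcessBuilder("sh", "-c", "ulimit -n").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            // prints a number such as 1024, or "unlimited"
            System.out.println("open-files soft limit: " + r.readLine());
        }
        p.waitFor();
    }
}
```

If the limit is near the number of connections siege opens, each accepted socket consumes a descriptor and accept() starts failing once the limit is hit.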
Answer 2 (score: 0)
This problem may not be related to my code. I ran the same test against a locally running nginx server (on Mac OS X) and the same errors occurred, so it is most likely related to the hardware or to the siege client.