I'm running a Netty server with a custom handler I wrote to receive file uploads via HTTP PUT. When I send a few files at a time everything appears to work, but after roughly 300 connections the server seems to "break": from then on it throws the following exception on every request it receives. Once this starts happening the server no longer processes requests and has to be restarted:
java.lang.IllegalStateException: cannot send more responses than requests
at org.jboss.netty.handler.codec.http.HttpContentEncoder.writeRequested(HttpContentEncoder.java:104)
at org.jboss.netty.handler.execution.ExecutionHandler.handleDownstream(ExecutionHandler.java:165)
at org.jboss.netty.channel.Channels.write(Channels.java:605)
at org.jboss.netty.channel.Channels.write(Channels.java:572)
....
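For what it's worth, from reading the Netty 3 source it looks like HttpContentEncoder queues one Accept-Encoding entry per inbound request and pops one per outbound response, so the exception above fires whenever a second response goes out for the same request. A minimal sketch that reproduces the same message (a hypothetical handler, not my upload code):

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponseStatus;
import org.jboss.netty.handler.codec.http.HttpVersion;

// Hypothetical repro, for illustration only: with HttpRequestDecoder,
// HttpResponseEncoder and HttpContentCompressor in the pipeline, writing a
// second response for the same request makes the compressor's
// HttpContentEncoder throw "cannot send more responses than requests".
public class DoubleResponseHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpResponse first = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        HttpResponse second = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        e.getChannel().write(first);   // consumes the single queued request entry
        e.getChannel().write(second);  // throws the IllegalStateException in the encoder
    }
}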
Here is the messageReceived source from my handler. All of the requests I handle are chunked, so I'll include the relevant methods below:
@Override
public void messageReceived(ChannelHandlerContext context, MessageEvent event) throws Exception {
    try {
        log.trace("Message received");
        if (newMessage) {
            log.trace("New message");
            HttpRequest request = (HttpRequest) event.getMessage();
            setDestinationFile(context, request);
            newMessage = false;
            if (request.isChunked()) {
                log.trace("Chunked request, set readingChunks true and create byte buffer");
                requestContentStream = new ByteArrayOutputStream();
                readingChunks = true;
                return;
            } else {
                log.trace("Request not chunked");
                writeNonChunkedFile(request);
                requestComplete(event);
                return;
            }
        } else if (readingChunks) {
            log.trace("Reading chunks");
            HttpChunk chunk = (HttpChunk) event.getMessage();
            if (chunk.isLast()) {
                log.trace("Read last chunk");
                readingChunks = false;
                writeChunkedFile();
                requestComplete(event);
                return;
            } else {
                log.trace("Buffering chunk content to byte buffer");
                requestContentStream.write(chunk.getContent().array());
                return;
            }
        } else {
            // should not happen
            log.error("Error handling of MessageEvent, expecting a new message or a chunk from a previous message");
        }
    } catch (Exception ex) {
        log.error("Exception: [" + ex + "]");
        sendError(context, INTERNAL_SERVER_ERROR);
    }
}
This is how I write the file out for a chunked request:
private void writeChunkedFile() throws IOException {
    log.trace("Writing chunked file");
    byte[] data = requestContentStream.toByteArray();
    FileOutputStream fos = new FileOutputStream(destinationFile);
    try {
        fos.write(data);
    } finally {
        fos.close();
    }
    log.debug("File upload complete, [chunked], path: [" + destinationFile.getAbsolutePath() + "] size: [" + destinationFile.length() + "] bytes");
}
This is how I send the response and close the connection:
private void requestComplete(MessageEvent event) {
    log.trace("Request complete");
    HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
    Channel channel = event.getChannel();
    ChannelFuture cf = channel.write(response);
    cf.addListener(ChannelFutureListener.CLOSE);
}
I've tried a few things in requestComplete, one of which was simply calling channel.close(), but nothing seemed to help. Any other thoughts or ideas?
Here is my pipeline:
@Override
public ChannelPipeline getPipeline() throws Exception {
    final ChannelPipeline pipeline = pipeline();
    pipeline.addLast("decoder", new HttpRequestDecoder());
    pipeline.addLast("encoder", new HttpResponseEncoder());
    pipeline.addLast("deflater", new HttpContentCompressor());
    pipeline.addLast("ExecutionHandler", executionHandler);
    pipeline.addLast("handler", new FileUploadHandler());
    return pipeline;
}
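As an aside, the manual chunk buffering could presumably also be handed off to Netty's HttpChunkAggregator, so the handler only ever sees a single, non-chunked HttpRequest. A rough pipeline sketch (the 10 MB cap is an arbitrary placeholder, and I have not verified this against my setup):

@Override
public ChannelPipeline getPipeline() throws Exception {
    final ChannelPipeline pipeline = pipeline();
    pipeline.addLast("decoder", new HttpRequestDecoder());
    // Collapses the HttpChunks into one HttpRequest carrying the full content.
    pipeline.addLast("aggregator", new HttpChunkAggregator(10 * 1024 * 1024)); // placeholder limit
    pipeline.addLast("encoder", new HttpResponseEncoder());
    pipeline.addLast("deflater", new HttpContentCompressor());
    pipeline.addLast("ExecutionHandler", executionHandler);
    pipeline.addLast("handler", new FileUploadHandler());
    return pipeline;
}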
Thanks for any thoughts or ideas.
Edit: sample log entries from a logging handler placed between the deflater and the handler in the pipeline:
2012-03-23T07:46:40.993 [New I/O server worker #1-6] WARN NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.writeRequested] [] - Sending [DefaultHttpResponse(chunked: false)
HTTP/1.1 100 Continue]
2012-03-23T07:46:40.995 [New I/O server worker #1-6] WARN NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.writeRequested] [] - Sending [DefaultHttpResponse(chunked: false)
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain; charset=UTF-8]
2012-03-23T07:46:41.000 [New I/O server worker #1-7] DEBUG NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.messageReceived] [] - Received [PUT /a/deeper/path/testFile.txt HTTP/1.1
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
Host: 192.168.0.1:8080
Accept: */*
Content-Length: 256000
Expect: 100-continue
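The curl requests carry Expect: 100-continue, and as the log shows a 100 Continue goes out before the real response for each upload. For completeness, this is roughly how 100-continue is acknowledged in the Netty 3 HTTP snoop example (a sketch, not a copy of my actual code):

// Sketch, following the shape of Netty 3's HTTP snoop example: acknowledge
// "Expect: 100-continue" before the client starts sending the request body.
// Uses org.jboss.netty.handler.codec.http.HttpHeaders.
if (HttpHeaders.is100ContinueExpected(request)) {
    HttpResponse ack = new DefaultHttpResponse(HTTP_1_1, HttpResponseStatus.CONTINUE);
    event.getChannel().write(ack);
}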
Answer (score: 0):
This turned out to be a problem elsewhere in my implementation, unrelated to any of the code posted here; the logic posted here appears sound and works fine. That said, many thanks to everyone for the helpful comments!