I am implementing a Netty proxy server, as shown below, that works like this: an HTTP request comes in; if the local cache has a response for it, that response is written back to the client; otherwise the request is forwarded to the remote server and its response should be stored in the cache.
I am having difficulty extracting the ByteBuf from the response in the same handler where I write back to the client.
In the example below, if you look at the channelRead method of HexDumpProxyFrontendHandler, you will see how I fetch from the cache and write the response. I have added comments in that method where I am stuck.
This code works end to end, so it can be copied and tested locally.
I can see the FullHttpResponse object in HexDumpProxyBackendHandler#channelRead. But in that method I have no reference to the cache, nor the id to use as the cache key.
I think this can be solved in two ways, but I am not clear on how to do either:
1) Get hold of the cache reference and the id inside HexDumpProxyBackendHandler; then it becomes easy. But HexDumpProxyBackendHandler is instantiated in channelActive of HexDumpProxyFrontendHandler, at which point I have not yet parsed the incoming request.
2) Get hold of the response ByteBuf back in HexDumpProxyFrontendHandler#channelRead; in that case it is simply a cache insertion.
HexDumpProxy.java
public final class HexDumpProxy {
static final int LOCAL_PORT = Integer.parseInt(System.getProperty("localPort", "8082"));
static final String REMOTE_HOST = System.getProperty("remoteHost", "api.icndb.com");
static final int REMOTE_PORT = Integer.parseInt(System.getProperty("remotePort", "80"));
static Map<Long,String> localCache = new HashMap<>();
public static void main(String[] args) throws Exception {
System.err.println("Proxying *:" + LOCAL_PORT + " to " + REMOTE_HOST + ':' + REMOTE_PORT + " ...");
localCache.put(123L, "profile1");
localCache.put(234L, "profile2");
// Configure the bootstrap.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new HexDumpProxyInitializer(localCache, REMOTE_HOST, REMOTE_PORT))
.childOption(ChannelOption.AUTO_READ, false)
.bind(LOCAL_PORT).sync().channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
HexDumpProxyInitializer.java
public class HexDumpProxyInitializer extends ChannelInitializer<SocketChannel> {
private final String remoteHost;
private final int remotePort;
private Map<Long, String> cache;
public HexDumpProxyInitializer(Map<Long,String> cache, String remoteHost, int remotePort) {
this.remoteHost = remoteHost;
this.remotePort = remotePort;
this.cache=cache;
}
@Override
public void initChannel(SocketChannel ch) {
ch.pipeline().addLast(
new LoggingHandler(LogLevel.INFO),
new HttpServerCodec(),
new HttpObjectAggregator(8*1024, true),
new HexDumpProxyFrontendHandler(cache, remoteHost, remotePort));
}
}
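HexDumpProxyBackendHandler is referenced throughout but its listing does not appear above; the following is a minimal sketch based on the class of the same name in the netty-example project (which the question says the code was taken from), so the actual class used may differ slightly:
public class HexDumpProxyBackendHandler extends ChannelInboundHandlerAdapter {
    private final Channel inboundChannel;

    public HexDumpProxyBackendHandler(Channel inboundChannel) {
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // start reading from the remote server as soon as the outbound connection is up
        ctx.read();
    }

    @Override
    public void channelRead(final ChannelHandlerContext ctx, Object msg) {
        // relay whatever the remote server sent back to the client connection
        inboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        HexDumpProxyFrontendHandler.closeOnFlush(inboundChannel);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        HexDumpProxyFrontendHandler.closeOnFlush(ctx.channel());
    }
}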
P.S.: I took most of this code from the netty-example project and customized it.
EDIT
Following Ferrygig's suggestion, I changed FrontEndChannelHandler#channelRead (@Override public void channelRead(final ChannelHandlerContext ctx, Object msg) { ... ): I removed channelActive and implemented the write method.
HexDumpProxyFrontendHandler.java
public class HexDumpProxyFrontendHandler extends ChannelInboundHandlerAdapter {
private final String remoteHost;
private final int remotePort;
private Channel outboundChannel;
private Map<Long, String> cache;
public HexDumpProxyFrontendHandler(Map<Long, String> cache, String remoteHost, int remotePort) {
this.remoteHost = remoteHost;
this.remotePort = remotePort;
this.cache = cache;
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
final Channel inboundChannel = ctx.channel();
// Start the connection attempt.
Bootstrap b = new Bootstrap();
b.group(inboundChannel.eventLoop())
.channel(ctx.channel().getClass())
.handler((new ChannelInitializer() {
protected void initChannel(Channel ch) {
ChannelPipeline var2 = ch.pipeline();
var2.addLast((new HttpClientCodec()));
var2.addLast(new HttpObjectAggregator(8192, true));
var2.addLast(new HexDumpProxyBackendHandler(inboundChannel));
}
}))
.option(ChannelOption.AUTO_READ, false);
ChannelFuture f = b.connect(remoteHost, remotePort);
outboundChannel = f.channel();
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
// connection complete start to read first data
inboundChannel.read();
} else {
// Close the connection if the connection attempt has failed.
inboundChannel.close();
}
}
});
}
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
System.out.println("msg is instanceof httpRequest");
HttpRequest req = (HttpRequest)msg;
QueryStringDecoder queryStringDecoder = new QueryStringDecoder(req.uri());
String userId = queryStringDecoder.parameters().get("id").get(0);
Long id = Long.valueOf(userId);
if (cache.containsKey(id)){
StringBuilder buf = new StringBuilder();
buf.append(cache.get(id));
writeResponse(req, ctx, buf);
closeOnFlush(ctx.channel());
return;
}
}
if (outboundChannel.isActive()) {
outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
// was able to flush out data, start to read the next chunk
ctx.channel().read();
} else {
future.channel().close();
}
}
});
}
//get response back from HexDumpProxyBackendHander and write to cache
//basically I need to do cache.put(id, parse(response));
//how to get response buf from inboundChannel here is the question I am trying to solve
}
@Override
public void channelInactive(ChannelHandlerContext ctx) {
if (outboundChannel != null) {
closeOnFlush(outboundChannel);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
closeOnFlush(ctx.channel());
}
/**
* Closes the specified channel after all queued write requests are flushed.
*/
static void closeOnFlush(Channel ch) {
if (ch.isActive()) {
ch.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
}
//borrowed from HttpSnoopServerHandler.java in snoop example
private boolean writeResponse(HttpRequest request, ChannelHandlerContext ctx, StringBuilder buf) {
// Decide whether to close the connection or not.
boolean keepAlive = HttpUtil.isKeepAlive(request);
// Build the response object.
FullHttpResponse response = new DefaultFullHttpResponse(
HTTP_1_1, request.decoderResult().isSuccess()? OK : BAD_REQUEST,
Unpooled.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));
response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");
if (keepAlive) {
// Add 'Content-Length' header only for a keep-alive connection.
response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
// Add keep alive header as per:
// - http://www.w3.org/Protocols/HTTP/1.1/draft-ietf-http-v11-spec-01.html#Connection
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
}
// Encode the cookie.
String cookieString = request.headers().get(HttpHeaderNames.COOKIE);
if (cookieString != null) {
Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(cookieString);
if (!cookies.isEmpty()) {
// Reset the cookies if necessary.
for (io.netty.handler.codec.http.cookie.Cookie cookie: cookies) {
response.headers().add(HttpHeaderNames.SET_COOKIE, io.netty.handler.codec.http.cookie.ServerCookieEncoder.STRICT.encode(cookie));
}
}
} else {
// Browser sent no cookie. Add some.
response.headers().add(HttpHeaderNames.SET_COOKIE, io.netty.handler.codec.http.cookie.ServerCookieEncoder.STRICT.encode("key1", "value1"));
response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode("key2", "value2"));
}
// Write the response.
ctx.write(response);
return keepAlive;
}
Answer 0 (score: 1)
Storm,
I might be wrong, but when I read this part of your HexDumpProxyFrontendHandler, something does not look right to me (I have added my remarks as // comments in the code below, in the proper style, so they stand out):
// Not incorrect but better to have only one bootstrap and reusing it
Bootstrap b = new Bootstrap();
b.group(inboundChannel.eventLoop())
.channel(ctx.channel().getClass())
.handler(new HexDumpProxyBackendHandler(inboundChannel))
// I know what AUTO_READ false is, but my question is why you need it?
.option(ChannelOption.AUTO_READ, false);
ChannelFuture f = b.connect(remoteHost, remotePort);
// Strange to me to try to get the channel while you did not test yet it is linked
outboundChannel = f.channel();
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
// Maybe you should start to send there, therefore getting the outboundChannel right there?
// add a log in order to see if you come there
// probably you have to send first, before asking to read anything?
// position (1)
inboundChannel.read();
} else {
inboundChannel.close();
}
}
});
// I suggest to move this in position named (1)
if (outboundChannel.isActive()) {
// maybe a log to see if anything will be written?
outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
System.out.println("success!! - FrontEndHandler");
ctx.channel().read();
} else {
future.channel().close();
}
}
});
}
To me it looks like you are not waiting for the channel to be open before you write. Also, to make sure you really send something out, you are missing some logs when writing to the wire (in your logs we only see the connection being opened and then mostly being closed, with nothing in between).
Maybe more logs could help us, and you?
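As an illustration of that remark only (this is a sketch, not the answerer's or the asker's code): moving the write to position (1) means nothing is written until the connect attempt has completed, with msg assumed to be the request currently being handled and the other names taken from the code quoted above:
ChannelFuture f = b.connect(remoteHost, remotePort);
outboundChannel = f.channel();
f.addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        // position (1): the connection is up, log it and only now forward the request
        System.out.println("connected to " + remoteHost + ":" + remotePort + " - FrontEndHandler");
        outboundChannel.writeAndFlush(msg).addListener((ChannelFutureListener) writeFuture -> {
            if (writeFuture.isSuccess()) {
                System.out.println("request forwarded - FrontEndHandler");
                ctx.channel().read(); // ask for the next chunk only after the write succeeded
            } else {
                writeFuture.channel().close();
            }
        });
    } else {
        // close the client connection if the connection attempt to the remote server failed
        inboundChannel.close();
    }
});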
Answer 1 (score: 1)
There are multiple ways to solve this problem, and they differ in what your ultimate end goal is.
At the moment you are using a topology of one inbound connection per outbound connection, which makes the system design slightly easier, since you do not have to worry about synchronizing multiple requests onto the same outbound stream.
At the moment your frontend handler extends ChannelInboundHandlerAdapter, which only intercepts "packets" coming into your application. If we extend it to a ChannelDuplexHandler, we can also handle "packets" going out of the application.
To go down this path, we need to update the HexDumpProxyFrontendHandler class so it extends ChannelDuplexHandler (CDH from now on).
The next step in the process is to override the write method of the CDH, so we can intercept the moment the backend sends its response back to us.
Once the write method is created, we need to update our (non thread safe) map by calling its put method:
public class HexDumpProxyFrontendHandler extends ChannelDuplexHandler {
Long lastId;
// ...
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
System.out.println("msg is instanceof httpRequest");
HttpRequest req = (HttpRequest)msg;
QueryStringDecoder queryStringDecoder = new QueryStringDecoder(req.uri());
String userId = queryStringDecoder.parameters().get("id").get(0);
Long id = Long.valueOf(userId);
lastId = id; // Store ID of last request
// ...
}
// ...
}
// ...
public void write(
ChannelHandlerContext ctx,
java.lang.Object msg,
ChannelPromise promise
) throws java.lang.Exception {
if (msg instanceof FullHttpResponse) {
System.out.println("this is fullHttpResponse");
FullHttpResponse full = (FullHttpResponse)msg;
cache.put(lastId, parse(full)); // TODO: Include a system here to convert the request to a string
}
super.write(ctx, msg, promise);
}
// ...
}
We are not done with the code here; we still need to fix a couple of bugs elsewhere in the code.
Non thread safe map (severe bug)
One of those bugs is that you are using a plain HashMap for your cache. The problem is that it is not thread safe: if multiple people connect to your app at the same time, weird things may happen, up to and including full corruption of the map as its internal structure gets updated.
To counter this issue, we "upgrade" the map to a ConcurrentHashMap; this map has special structures in place to deal with multiple threads requesting and storing data at the same time, without a huge loss in performance. (If performance is a main concern, you may get higher performance by using a per-thread hash map instead of a global cache, but that means every resource can end up cached once per thread.)
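A minimal sketch of that change, applied to the localCache field in HexDumpProxy above (everything else stays the same):
// in HexDumpProxy: requires java.util.concurrent.ConcurrentHashMap
// drop-in, thread-safe replacement for the plain HashMap used as the shared cache
static Map<Long, String> localCache = new ConcurrentHashMap<>();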
No cache removal rules (major bug)
At the moment there is no code that removes outdated resources, which means the cache will keep filling up until the program runs out of memory, at which point it crashes.
This can be solved either by using a map implementation that provides both thread-safe access and so-called removal rules, or by using an already-made caching solution such as Guava caches.
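For instance, a rough sketch using a Guava cache; the size and expiry values below are made-up illustrative numbers, not recommendations:
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

// Thread-safe, bounded, expiring cache: at most 10000 entries,
// each entry evicted 10 minutes after it was written.
Cache<Long, String> cache = CacheBuilder.newBuilder()
        .maximumSize(10000)
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .build();

cache.put(123L, "profile1");
String profile = cache.getIfPresent(123L); // null if absent or already evicted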
Not handling HTTP pipelining correctly (minor to major bug)
One of the lesser known features of HTTP is pipelining, which basically means that the client may send another request to the server without waiting for the response to the previous request. Bugs of this type include servers that swap the contents of the two requests, or even mangle them completely.
While pipelined requests are getting rarer these days, with HTTP/2 support growing and the knowledge that there are broken servers out there, they still happen with certain CLI tools that use them.
To solve this issue, only read the next request after the previous response has been sent; one way to do that is to keep a list of requests, or to go for more advanced pre-made solutions.
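A rough sketch of the "keep a list of requests" idea on top of the ChannelDuplexHandler approach above; the pendingIds field is an illustrative name and parse(...) is the same placeholder as before. Since a handler instance belongs to a single connection and runs on one event loop thread, a plain ArrayDeque (java.util) is enough here:
// requests that have been forwarded but whose responses have not come back yet;
// pipelined responses arrive in request order, so FIFO pairing is correct
private final Queue<Long> pendingIds = new ArrayDeque<>();

@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        HttpRequest req = (HttpRequest) msg;
        QueryStringDecoder decoder = new QueryStringDecoder(req.uri());
        pendingIds.add(Long.valueOf(decoder.parameters().get("id").get(0))); // remember which request this was
    }
    // ... then forward msg to the outbound channel exactly as in the existing channelRead
}

@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    if (msg instanceof FullHttpResponse) {
        Long id = pendingIds.poll(); // this response answers the oldest outstanding request
        if (id != null) {
            cache.put(id, parse((FullHttpResponse) msg));
        }
    }
    super.write(ctx, msg, promise);
}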