I'm trying to write a simple proxy with Netty 4. I'm new to the framework, so I'd really appreciate some advice/help.
The idea is to connect to the Netty proxy through a localhost port; the proxy then forwards the HttpRequest to a remote host on the web (which host doesn't matter).
The way it works: I start the code on localhost port 26000 and point my browser there. I expect the GET HTTP requests I send to localhost:26000 to be redirected to a remote host that I hardcode, and the remote host's HTTP response to be returned to my browser.
My idea is to use Netty's channel handlers to construct FullHttpRequest and FullHttpResponse objects on both sides of the proxy. Right now I can only do this for the requests coming into the proxy; the remaining channel handlers work with ByteBuf. My code is based on the HexDumpProxy example found on Netty's GitHub account.
Here is my pipeline for incoming requests:
pipeline.addLast(new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpObjectAggregator(64 * 1024));
pipeline.addLast(new LoggingHandler(LogLevel.DEBUG), new ProxyFrontendHandler(remoteHost, remotePort));
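(As I understand it, HttpRequestDecoder emits an HttpRequest plus HttpContent parts, and the HttpObjectAggregator merges them into the single FullHttpRequest that my handler expects.)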
My inbound handler class:
class ProxyFrontendHandler extends SimpleChannelInboundHandler<FullHttpRequest>
On the client side of the proxy, apart from a LoggingHandler this is the only channel handler I pass:
.handler(new ProxyBackendHandler(inboundChannel))
My client class that handles the remote host's responses to the requests I issue, using ByteBuf:
class ProxyBackendHandler extends SimpleChannelInboundHandler<ByteBuf>
How can I enhance my pipeline so that I receive and handle FullHttpResponse objects?
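From the javadocs, my guess is that the client-side bootstrap needs an HttpClientCodec plus an HttpObjectAggregator, so the response parts get merged into a FullHttpResponse before they reach my handler. This is only an untested sketch of what I imagine the initializer would look like:

.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HttpClientCodec());               // untested guess: encodes outgoing requests, decodes incoming responses
        ch.pipeline().addLast(new HttpObjectAggregator(64 * 1024)); // merges response parts into a FullHttpResponse
        ch.pipeline().addLast(new ProxyBackendHandler(inboundChannel));
    }
})

Is that the right direction, or am I missing something?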
Again, I'm very new to Netty, so please forgive the mistakes in my code; unfortunately I've been working somewhat by trial and error, seeing what works and what doesn't.
If anyone wants to look over my code, here it is. It works, but only with the sparse pipeline configuration I described.
public final class NettyProxy {

    private final int LOCAL_PORT, REMOTE_PORT, THREAD_NUMB;
    private final String REMOTE_HOST;
    private EventLoopGroup bossGroup = null;
    private EventLoopGroup workerGroup = null;

    public NettyProxy(int LOCAL_PORT, String REMOTE_HOST, int REMOTE_PORT, int THREAD_NUMB) throws Exception {
        this.LOCAL_PORT = LOCAL_PORT;
        this.REMOTE_HOST = REMOTE_HOST;
        this.REMOTE_PORT = REMOTE_PORT;
        this.THREAD_NUMB = THREAD_NUMB;

        System.err.println("Proxying *:" + this.LOCAL_PORT + " to " + this.REMOTE_HOST + ':' + this.REMOTE_PORT + " ...");

        // Configure the bootstrap.
        bossGroup = new NioEventLoopGroup(1);
        workerGroup = new NioEventLoopGroup(this.THREAD_NUMB);
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .handler(new LoggingHandler(LogLevel.INFO))
             .childHandler(new ProxyInitializer(this.REMOTE_HOST, this.REMOTE_PORT))
             .childOption(ChannelOption.AUTO_READ, false) // necessary!! Why after the handler??
             .childOption(ChannelOption.SO_KEEPALIVE, true)
             .childOption(ChannelOption.SO_REUSEADDR, true)
             .childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
             .childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024)
             .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
             .bind(this.LOCAL_PORT).sync().channel().closeFuture().sync();
        } finally {
            shutdown();
        }
    }

    public void shutdown() throws InterruptedException {
        bossGroup.shutdownGracefully().sync();
        workerGroup.shutdownGracefully().sync();
    }
}
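For completeness, this is roughly how I launch it (the remote host and thread count here are just placeholders for the values I hardcode):

public static void main(String[] args) throws Exception {
    // Placeholder values: listen on localhost:26000 and forward to a hardcoded remote host.
    new NettyProxy(26000, "example.com", 80, 2);
}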
My server-side pipeline initializer class:
class ProxyInitializer extends ChannelInitializer<SocketChannel> {

    private final String remoteHost;
    private final int remotePort;

    public ProxyInitializer(String remoteHost, int remotePort) {
        this.remoteHost = remoteHost;
        this.remotePort = remotePort;
    }

    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline(); // the pipeline organizes how we process the communication
        pipeline.addLast("readTimeoutHandler", new ReadTimeoutHandler(60)); // time out after 60 secs
        pipeline.addLast(new HttpRequestDecoder()); // decodes inbound bytes into HttpRequest / HttpContent parts
        pipeline.addLast("aggregator", new HttpObjectAggregator(64 * 1024)); // merges the parts into a single FullHttpRequest
        pipeline.addLast(new LoggingHandler(LogLevel.DEBUG), new ProxyFrontendHandler(remoteHost, remotePort));
    }
}
My server-side inbound handler is this:
class ProxyFrontendHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    private final String remoteHost;
    private final int remotePort;
    EmbeddedChannel ch = null;
    private volatile Channel outboundChannel;

    public ProxyFrontendHandler(String remoteHost, int remotePort) {
        super(false); // no auto-release; the HttpRequestEncoder in channelRead0 releases the message after encoding it
        this.remoteHost = remoteHost;
        this.remotePort = remotePort;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        final Channel inboundChannel = ctx.channel();
        // Start the connection attempt.
        Bootstrap b = new Bootstrap();
        b.group(inboundChannel.eventLoop())
         .channel(ctx.channel().getClass())
         // A second .handler() call would replace the first, so both handlers go into one initializer.
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel outCh) {
                 outCh.pipeline().addLast(new LoggingHandler(LogLevel.DEBUG),
                                          new ProxyBackendHandler(inboundChannel));
             }
         })
         .option(ChannelOption.AUTO_READ, false)
         .option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
         .option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024)
         .option(ChannelOption.SO_KEEPALIVE, true)
         .option(ChannelOption.SO_REUSEADDR, true);
        ChannelFuture f = b.connect(remoteHost, remotePort);
        outboundChannel = f.channel();
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    inboundChannel.read(); // read the data sent by the client; it will arrive in channelRead0
                } else {
                    // Close the connection if the connection attempt has failed.
                    inboundChannel.close();
                }
            }
        });
    }

    @Override // writes the incoming HTTP request to the remote host
    public void channelRead0(final ChannelHandlerContext ctx, FullHttpRequest msg) throws UnsupportedEncodingException {
        msg.headers().set("Host", remoteHost + ":" + remotePort);
        ch = new EmbeddedChannel(new HttpRequestEncoder()); // converts the FullHttpRequest to a ByteBuf for writeAndFlush
        ch.writeOutbound(msg);
        ByteBuf encoded = (ByteBuf) ch.readOutbound();
        if (outboundChannel.isActive()) {
            outboundChannel.writeAndFlush(encoded).addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) {
                    if (future.isSuccess()) {
                        // The data was flushed out; start reading the next chunk.
                        System.out.println("wrote something");
                        ctx.channel().read();
                    } else {
                        future.cause().printStackTrace();
                        future.channel().close();
                    }
                }
            });
        } else {
            System.out.println("outbound channel is NOT active!!");
            outboundChannel.close();
        }
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        if (ch != null) { ch.close(); }
        if (outboundChannel != null) {
            System.out.println("Closing inactive channel");
            closeOnFlush(outboundChannel);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        closeOnFlush(ctx.channel());
    }

    // Closes the specified channel after all queued write requests are flushed.
    static void closeOnFlush(Channel ch) {
        if (ch.isActive()) {
            ch.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
        }
    }
}
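A side thought on the EmbeddedChannel trick in channelRead0: if the outbound pipeline already contained an encoder (as in the sketch above), I assume I could drop the EmbeddedChannel entirely and write the FullHttpRequest directly, something like:

// Untested: with an HttpRequestEncoder (or HttpClientCodec) in the outbound pipeline,
// the pipeline itself turns the FullHttpRequest into bytes on write.
msg.headers().set("Host", remoteHost + ":" + remotePort);
outboundChannel.writeAndFlush(msg);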
Finally, here is my handler for the responses I get from the remote host I'm connecting to:
class ProxyBackendHandler extends SimpleChannelInboundHandler<ByteBuf> {

    private final Channel inboundChannel;

    public ProxyBackendHandler(Channel inboundChannel) {
        super(false); // no auto-release; writeAndFlush in channelRead0 takes ownership of the ByteBuf
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        System.out.println("*** channelActive()");
        ctx.read();
    }

    @Override // read the response from the remote host here
    public void channelRead0(final ChannelHandlerContext ctx, ByteBuf msg) {
        System.out.println("*** channelRead()");
        inboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    ctx.channel().read();
                } else {
                    future.cause().printStackTrace();
                    future.channel().close();
                }
            }
        });
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        ProxyFrontendHandler.closeOnFlush(inboundChannel);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ProxyFrontendHandler.closeOnFlush(ctx.channel());
    }
}
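And if the aggregator idea above is right, I assume ProxyBackendHandler would end up typed on FullHttpResponse, with the frontend pipeline gaining an HttpResponseEncoder so the response can be written back to the browser as bytes. An untested sketch of what I have in mind:

// Untested sketch: backend handler typed on FullHttpResponse instead of ByteBuf.
class ProxyBackendHandler extends SimpleChannelInboundHandler<FullHttpResponse> {

    private final Channel inboundChannel;

    public ProxyBackendHandler(Channel inboundChannel) {
        super(false); // no auto-release; writeAndFlush below takes ownership of the response
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelRead0(final ChannelHandlerContext ctx, FullHttpResponse msg) {
        // Needs an HttpResponseEncoder in the frontend pipeline to turn this back into bytes.
        inboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    ctx.channel().read();
                } else {
                    future.cause().printStackTrace();
                    future.channel().close();
                }
            }
        });
    }
}

Does that sound like the right approach?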