How to share an eventLoop between an outbound channel (from a ChannelPoolMap) and an inbound channel in a proxy-like application

Date: 2015-06-08 07:54:33

Tags: netty

Referring to http://normanmaurer.me/presentations/2014-twitter-meetup-netty/slides.html#20.0: for a proxy-like application (HTTP client -> proxy server (netty) -> remote HTTP server), how can the outbound channel (proxy server -> remote HTTP server, obtained from a ChannelPoolMap) and the inbound channel (HTTP client -> proxy server) share the same eventLoop?

The ChannelPoolMap implementation looks like:

val bootstrap = new Bootstrap()
if (sys.props("os.name") == "Linux") {
  bootstrap.group(new EpollEventLoopGroup())
  bootstrap.channel(classOf[EpollSocketChannel])
} else {
  bootstrap.group(new NioEventLoopGroup())
  bootstrap.channel(classOf[NioSocketChannel])
}
bootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
val connectionsPoolMap = new AbstractChannelPoolMap[InetSocketAddress, SimpleChannelPool]() {
  override protected def newPool(key: InetSocketAddress): SimpleChannelPool =
    new SimpleChannelPool(bootstrap.remoteAddress(key), new CountingChannelPoolHandler)
}

In the inbound channel handler, channelActive looks like:

override def channelActive(ctx: ChannelHandlerContext) = {
    val inboundChannel = ctx.channel
    val pool = poolMap.iterator.next.getValue
    outboundChannel = pool.acquire.sync.getNow
    if (outboundChannel.pipeline.get("sbh") != null) {
      outboundChannel.pipeline.remove("sbh")
    }
    outboundChannel.pipeline.addLast("sbh", new SBH(inboundChannel))
    inboundChannel.read()
}

Here, how can the outboundChannel share inboundChannel.eventLoop(), so that all I/O for both connected channels is handled by the same thread? If I were not using a ChannelPoolMap, I could create the Bootstrap myself and pass inboundChannel.eventLoop() to Bootstrap#group, just as slide #20 explains.
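The property being asked for is that both channels' handlers always run on one and the same thread, which removes any need for synchronization between them. That guarantee can be illustrated with plain java.util.concurrent and no Netty at all; here a single-threaded executor plays the role of one Netty EventLoop (a minimal sketch, not Netty code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SameLoopDemo {
    public static void main(String[] args) throws Exception {
        // A single-threaded executor stands in for one Netty EventLoop.
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();

        // "Inbound" and "outbound" work submitted to the same loop...
        Future<String> inbound  = eventLoop.submit(() -> Thread.currentThread().getName());
        Future<String> outbound = eventLoop.submit(() -> Thread.currentThread().getName());

        // ...always runs on the same thread, so handlers for the two
        // channels never race against each other.
        System.out.println(inbound.get().equals(outbound.get())); // prints true
        eventLoop.shutdown();
    }
}
```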

Any ideas? Thanks.

1 Answer:

Answer 0 (score: 0)

Here is a solution that worked for me.

Setting up the proxy server side:

bossGroup = new NioEventLoopGroup(1);
workerGroup = new NioEventLoopGroup(NUMB_OF_THREADS);
ServerBootstrap serverBootstrap = new ServerBootstrap();        
serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class);
// Add more options for serverBootstrap if you want

A ChannelPool needs to be instantiated with an EventLoopGroup, and if you are building a proxy it must be one of the EventLoopGroups the server uses. The idea is to populate a map with ChannelPools, where each ChannelPool corresponds to one thread of the workerGroup. To achieve this, I use the workerGroup's underlying threads (its EventExecutors) as the keys of the poolMap.
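The keying idea can be sketched with stdlib types only: one pool object per single-threaded executor, registered in a concurrent map keyed by the executor itself, just as the AbstractChannelPoolMap below is keyed by EventExecutor (Pool is a hypothetical stand-in for SimpleChannelPool):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolPerLoopDemo {
    // Stand-in for a connection pool; in Netty this would be a SimpleChannelPool.
    static final class Pool {}

    public static void main(String[] args) {
        // Stand-ins for the worker EventLoops: N single-threaded executors.
        List<ExecutorService> workers = new ArrayList<>();
        for (int i = 0; i < 4; i++) workers.add(Executors.newSingleThreadExecutor());

        // One pool per loop, keyed by the loop itself.
        Map<ExecutorService, Pool> poolMap = new ConcurrentHashMap<>();
        for (ExecutorService loop : workers) poolMap.computeIfAbsent(loop, k -> new Pool());

        // Each loop owns exactly one pool, and looking up with the same
        // key always yields the same pool instance.
        System.out.println(poolMap.size());
        System.out.println(poolMap.get(workers.get(0)) == poolMap.get(workers.get(0)));
        workers.forEach(ExecutorService::shutdown);
    }
}
```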

ChannelPoolMap<EventExecutor, SimpleChannelPool> poolMap = new AbstractChannelPoolMap<EventExecutor, SimpleChannelPool>() {
   @Override
   protected SimpleChannelPool newPool(EventExecutor key) {

      Bootstrap clientBootstrap = new Bootstrap();
      clientBootstrap.group((EventLoopGroup) key).channel(NioSocketChannel.class);
      // add more options related to clientBootstrap here...

      return new SimpleChannelPool(
         clientBootstrap.remoteAddress(REMOTE_HOST, REMOTE_PORT),
         new ChannelPoolHandler() {
            // implement the abstract methods here
         });
   }
};

Now, populate the poolMap by adding a new ChannelPool for each of the EventExecutors in the workerGroup:

Iterator<EventExecutor> iter = workerGroup.iterator();
while (iter.hasNext()) { // will add to the poolMap a Connection Pool, one for each thread of the workerGroup
    poolMap.get(iter.next());
}

Finally, in the class that handles incoming connections, you fetch the corresponding channel pool: take the inboundChannel's underlying EventLoop and use it as the key to look up the matching pool.

final Future<Channel> futChan = poolMap.get(inboundChannel.eventLoop()).acquire();
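This lookup step can also be modeled with stdlib types: the loop registers its pool under its own thread, and a "handler" running on that loop later retrieves it without any locking, mirroring poolMap.get(inboundChannel.eventLoop()).acquire() (a hypothetical sketch; Pool again stands in for SimpleChannelPool):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AcquireOnLoopDemo {
    static final class Pool {}

    public static void main(String[] args) throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        Map<Thread, Pool> poolMap = new ConcurrentHashMap<>();

        // Populate: the loop registers one pool under its own thread.
        loop.submit(() -> poolMap.put(Thread.currentThread(), new Pool())).get();

        // A "handler" running on the loop acquires from its own pool:
        // the key it presents is guaranteed to be in the map.
        Future<Boolean> found = loop.submit(() -> poolMap.get(Thread.currentThread()) != null);
        System.out.println(found.get()); // prints true
        loop.shutdown();
    }
}
```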

Personally, I started building my own version of a proxy from the HexDumpProxy example in Netty's git repository.