I'm new to Netty and using version 4. In my project the server returns a Java object to the client, and that object can be large. At first I used ObjectEncoder/ObjectDecoder with NioSocketChannel. It works, but the performance is noticeably worse than with the old blocking-IO implementation. Thread dumps show that ObjectEncoder keeps reallocating a direct buffer. My guess is that it serializes the whole object into that direct buffer before sending anything over the network. This is slow, and with several concurrent requests it can cause an OutOfMemoryError. What would you suggest for an efficient implementation that is fast and uses only a bounded amount of buffer memory? Also, some (but not all) of the objects the server returns contain a long byte-array field. Can that fact be exploited to improve performance further?
As @MattBakaitis asked, I'm pasting a code sample. It is a slight modification of the ObjectEchoServer example: it sends a constant large object back to the client in response to each received message.
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.serialization.ClassResolvers;
import io.netty.handler.codec.serialization.ObjectDecoder;
import io.netty.handler.codec.serialization.ObjectEncoder;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

public final class MyObjectEchoServer {

    static final int PORT = Integer.parseInt(System.getProperty("port", "11000"));

    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .handler(new LoggingHandler(LogLevel.INFO))
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline p = ch.pipeline();
                     p.addLast(
                         new ObjectEncoder(),
                         new ObjectDecoder(Integer.MAX_VALUE, ClassResolvers.cacheDisabled(null)),
                         new ObjectEchoServerHandler());
                 }
             });

            // Bind and start to accept incoming connections.
            b.bind(PORT).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.io.Serializable;

public class ObjectEchoServerHandler extends ChannelInboundHandlerAdapter {

    public static class Response implements Serializable {
        public byte[] bytes;
    }

    // A constant 256 MB response, sent back for every received message.
    private static Response response;

    static {
        int len = 256 * 1024 * 1024;
        response = new Response();
        response.bytes = new byte[len];
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("Received: msg=" + msg);
        // Echo back the received object to the client.
        System.out.println("Sending response. length: " + response.bytes.length);
        ctx.write(response);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("Flushing");
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
If the JVM has enough memory there are no errors, but it is slow, and if several clients run concurrently or the response object is too large, it throws a direct-buffer OutOfMemoryError. I took multiple thread dumps; they always look like the one pasted below and show ObjectEncoder writing the response object into a direct buffer and continuously resizing that buffer when the response is large. So I believe this straightforward implementation is not efficient, and I'm looking for suggestions for an efficient approach.
The thread stack I mentioned:
"nioEventLoopGroup-3-1" prio=10 tid=0x000000000bf88800 nid=0x205c runnable [0x000000000cb5e000]
java.lang.Thread.State: RUNNABLE
at sun.misc.Unsafe.copyMemory(Native Method)
at sun.misc.Unsafe.copyMemory(Unsafe.java:560)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:326)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.capacity(UnpooledUnsafeDirectByteBuf.java:160)
at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:818)
at io.netty.buffer.ByteBufOutputStream.write(ByteBufOutputStream.java:66)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1847)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1333)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1173)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
at io.netty.handler.codec.serialization.ObjectEncoder.encode(ObjectEncoder.java:47)
at io.netty.handler.codec.serialization.ObjectEncoder.encode(ObjectEncoder.java:36)
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:657)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:715)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:650)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
at io.netty.example.objectecho.ObjectEchoServerHandler.channelRead(ObjectEchoServerHandler.java:46)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at java.lang.Thread.run(Thread.java:745)
Answer 0 (score: 1)
If you are writing a large object, as you mentioned in the question, multiple memory copies may happen while the output buffer is expanded. To solve that problem, you can override the allocateBuffer() method of ObjectEncoder (of MessageToByteEncoder, to be precise) and allocate a buffer with a larger initial capacity. For example:
@Override
protected ByteBuf allocateBuffer(
        ChannelHandlerContext ctx, Object msg, boolean preferDirect) {
    return ctx.alloc().heapBuffer(1048576);
}
To reduce the number of memory copies even further, I would suggest using a direct buffer (i.e. ctx.alloc().directBuffer(1048576)) together with the PooledByteBufAllocator.
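A minimal sketch of how both suggestions might be combined, assuming Netty 4; the subclass name and the 1 MiB initial capacity are my assumptions, not something prescribed by the answer:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.serialization.ObjectEncoder;

import java.io.Serializable;

// Sketch: start the outbound buffer large enough that most responses avoid
// repeated ensureWritable()/copy cycles, and use a direct buffer so the bytes
// do not need an extra heap-to-direct copy on the way to the socket.
public class LargeObjectEncoder extends ObjectEncoder {

    private static final int INITIAL_CAPACITY = 1024 * 1024; // assumption: tune to your payload sizes

    @Override
    protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, Serializable msg, boolean preferDirect) {
        return ctx.alloc().directBuffer(INITIAL_CAPACITY);
    }
}

The pooled allocator can then be enabled on the child channels, e.g. b.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT), and LargeObjectEncoder used in place of ObjectEncoder in the pipeline.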
However, this does not resolve your concern about an OutOfMemoryError under load, where many peers exchange large objects. Java object serialization was never designed with non-blocking connections in mind; it always assumes the stream has the data. Unless you re-implement object serialization, you have to keep the whole serialized object in a buffer. In fact, this also applies to other object serialization implementations that only work with an InputStream and an OutputStream.
Alternatively, you could implement a different protocol that exchanges large payloads as streams, and shrink the objects themselves by replacing the large fields with some kind of reference to the stream.
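For the byte-array field mentioned in the question, one way to sketch such a streaming approach with stock Netty 4 classes is ChunkedWriteHandler plus ChunkedStream, so that only one bounded chunk of the large array is buffered at a time. The handler below and its 8 KiB chunk size are illustrative assumptions:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.stream.ChunkedStream;

import java.io.ByteArrayInputStream;

// Pipeline sketch: ch.pipeline().addLast(new ChunkedWriteHandler(), new StreamingResponseHandler());
public class StreamingResponseHandler extends ChannelInboundHandlerAdapter {

    private static final byte[] PAYLOAD = new byte[256 * 1024 * 1024]; // stand-in for the large field

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Write a small header object/frame first (not shown), then stream the big
        // byte[] in 8 KiB chunks; ChunkedWriteHandler only pulls the next chunk
        // when the channel becomes writable, keeping the outbound buffer bounded.
        ctx.writeAndFlush(new ChunkedStream(new ByteArrayInputStream(PAYLOAD), 8192));
    }
}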
Answer 1 (score: 1)
try {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(bos);
    oos.writeObject(obj);
    oos.close();
    return bos.toByteArray();
} catch (Exception e) {
    e.printStackTrace();
}
return null;
Then to deserialize:
ByteArrayInputStream ins = new ByteArrayInputStream(b);
ObjectInputStream ois = new ObjectInputStream(ins);
return ois.readObject();
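The answer does not show how these byte arrays are framed on the wire; a minimal sketch with stock Netty handlers (the 4-byte length prefix and the class name are my assumptions) could look like this:

import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;
import io.netty.handler.codec.bytes.ByteArrayDecoder;
import io.netty.handler.codec.bytes.ByteArrayEncoder;

public final class ByteArrayPipeline {
    // Outbound: byte[] -> 4-byte-length-prefixed frame; inbound: frame -> byte[]
    // that can be handed to the readObject() code above.
    public static void configure(ChannelPipeline p) {
        p.addLast(new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        p.addLast(new LengthFieldPrepender(4));
        p.addLast(new ByteArrayDecoder());
        p.addLast(new ByteArrayEncoder());
        // ...followed by an application handler that serializes/deserializes the objects.
    }
}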
With an average object size of ~800 bytes on a 1 Gbps link, this worked fine for a few thousand transactions per second. Now, if you want to push that up to a few lakh (hundreds of thousands) per second, you need to put in some effort and serialize the objects yourself.
Here is an example.
My object:
public class MyObject {

    private int i;
    private String key;
    private AnotherObject obj;
    private boolean toUpdate;

    public MyObject(int i, String key, AnotherObject obj, boolean toUpdate) {
        this.i = i;
        this.key = key;
        this.obj = obj;
        this.toUpdate = toUpdate;
    }

    /**
     * Decode in order:
     *   int i
     *   String key
     *   AnotherObject obj
     *   boolean toUpdate
     */
    public Object decodeAll(ByteBuf b) {
        this.i = readInt(b);
        this.key = readString(b);
        if (b.readableBytes() > 1) {
            AnotherObject nested = new AnotherObject();
            this.obj = (AnotherObject) nested.decodeAll(b);
        }
        byte[] bool = new byte[1];
        b.readBytes(bool);
        this.toUpdate = decodeBoolean(bool);
        return this;
    }

    /**
     * Encode in order:
     *   int i
     *   String key
     *   AnotherObject obj
     *   boolean toUpdate
     */
    public ByteBuf encodeAll() {
        ByteBuf buffer = allocator.buffer();
        // First the int
        writeInt(buffer, this.i);
        // String key
        writeString(buffer, this.key);
        // AnotherObject
        if (this.obj != null) {
            ByteBuf rel = this.obj.encodeAll();
            buffer.writeBytes(rel);
            rel.release();
        }
        // boolean toUpdate
        buffer.writeBytes(encodeBoolean(this.toUpdate));
        return buffer;
    }
}
Here I used the sun.misc.Unsafe API to serialize/deserialize the primitive values:
protected byte[] encodeBoolean(final boolean value) {
    byte[] b = new byte[1];
    unsafe.putBoolean(b, byteArrayOffset, value);
    return b;
}

public static boolean decodeBoolean(final byte[] b) {
    boolean value = unsafe.getBoolean(b, byteArrayOffset);
    return value;
}

protected byte[] encodeInt(final int value) {
    byte[] b = new byte[4];
    unsafe.putInt(b, byteArrayOffset, value);
    return b;
}

protected void writeString(ByteBuf buffer, String data) {
    if (data == null) {
        data = new String();
    }
    // For String, first write the length as an integer and then the actual data
    buffer.writeBytes(encodeInt(data.getBytes().length));
    buffer.writeBytes(data.getBytes());
}

public static String readString(ByteBuf data) {
    byte[] temp = new byte[4];
    data.readBytes(temp);
    int len = decodeInt(temp);
    if (len == 0) {
        return null;
    } else {
        temp = new byte[len];
        data.readBytes(temp);
        return new String(temp);
    }
}
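The unsafe and byteArrayOffset fields referenced above are not shown in the answer. A plausible way to obtain them, together with the missing decodeInt() counterpart, is sketched below; note that sun.misc.Unsafe is an internal API and the encoded byte order is the platform's native order, so both peers must match:

import sun.misc.Unsafe;

import java.lang.reflect.Field;

// Assumed helpers: one way the fields used above could be initialized.
final class UnsafeCodec {

    static final Unsafe unsafe;
    static final long byteArrayOffset;

    static {
        try {
            // The Unsafe singleton is only reachable via reflection outside the JDK.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            unsafe = (Unsafe) f.get(null);
            byteArrayOffset = unsafe.arrayBaseOffset(byte[].class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Counterpart to encodeInt(): reads an int from the first 4 bytes, native byte order.
    static int decodeInt(final byte[] b) {
        return unsafe.getInt(b, byteArrayOffset);
    }
}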
In the same way, you can encode/decode any Java data structure, e.g.:
protected void writeArrayList(ByteBuf buffer, ArrayList<String> data) {
    if (data == null) {
        data = new ArrayList<String>();
    }
    // Write the number of elements and then all elements one by one
    buffer.writeBytes(encodeInt(data.size()));
    for (String str : data) {
        writeString(buffer, str);
    }
}

protected ArrayList<String> readArrayList(ByteBuf data) {
    byte[] temp = new byte[4];
    data.readBytes(temp);
    int noOfElements = decodeInt(temp);
    ArrayList<String> arr = new ArrayList<>(noOfElements);
    for (int i = 0; i < noOfElements; i++) {
        arr.add(readString(data));
    }
    return arr;
}
Now you can simply write the ByteBuf to the channel and sustain a very high number of transactions per second.
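To hook such a hand-rolled codec into a Netty pipeline, a thin encoder/decoder pair along the following lines could be used; the class names and the assumption that a LengthFieldBasedFrameDecoder delivers exactly one serialized MyObject per message are mine, not the answer's:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;
import io.netty.handler.codec.MessageToMessageDecoder;

import java.util.List;

// Outbound: MyObject -> the ByteBuf produced by encodeAll().
public class MyObjectEncoderHandler extends MessageToByteEncoder<MyObject> {
    @Override
    protected void encode(ChannelHandlerContext ctx, MyObject msg, ByteBuf out) {
        ByteBuf encoded = msg.encodeAll();
        out.writeBytes(encoded);
        encoded.release();
    }
}

// Inbound: assumes a frame decoder in front, so each ByteBuf holds one whole object.
class MyObjectDecoderHandler extends MessageToMessageDecoder<ByteBuf> {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        MyObject decoded = new MyObject(0, null, null, false); // placeholder values, filled by decodeAll()
        out.add(decoded.decodeAll(in));
    }
}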