HBase region server OOM and shutdown

Time: 2015-07-06 07:21:42

Tags: hbase

I see the following exception trace in the region server log. Can someone tell me what exactly went wrong?

regionserver.HRegionServer: Run out of memory; HRegionServer will abort itself immediately
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at org.apache.hadoop.hbase.io.ByteBufferOutputStream.checkSizeAndGrow(ByteBufferOutputStream.java:74)
at org.apache.hadoop.hbase.io.ByteBufferOutputStream.write(ByteBufferOutputStream.java:112)
at org.apache.hadoop.hbase.KeyValue.oswrite(KeyValue.java:2873)
at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueEncoder.write(KeyValueCodec.java:59)
at org.apache.hadoop.hbase.ipc.IPCUtil.buildCellBlock(IPCUtil.java:120)
at org.apache.hadoop.hbase.ipc.RpcServer$Call.setResponse(RpcServer.java:377)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:113)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
at java.lang.Thread.run(Thread.java:744)

1 answer:

Answer 0 (score: 3):

OK, we figured it out. A memory-intensive job was running, and hbase.client.scanner.max.result.size was not set; in older versions it defaulted to Integer.MAX_VALUE, so a single scanner RPC response could grow until the region server could no longer allocate the response buffer.
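
For anyone hitting the same problem, here is a minimal sketch of how the limit can be set explicitly, both in the client Configuration and per scan. The table name my_table, the 2 MB cap, and the caching value are illustrative assumptions, not taken from the original post, and the snippet assumes an HBase 1.x client API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class BoundedScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Cap the bytes a single scanner RPC may return (2 MB is an assumed value);
        // per the answer above, older versions defaulted this to Integer.MAX_VALUE.
        conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {
            Scan scan = new Scan();
            // Per-scan byte limit, applied on top of the client-wide setting.
            scan.setMaxResultSize(2L * 1024 * 1024);
            // Fetch a modest number of rows per RPC instead of one huge batch.
            scan.setCaching(100);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    // process each row
                }
            }
        }
    }
}

The same cap can also be set cluster-wide via hbase.client.scanner.max.result.size in the client's hbase-site.xml, which avoids relying on every job to set it in code.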