OutOfMemoryError: GC overhead limit exceeded, with threads waiting to lock log4j objects

Date: 2011-06-01 09:20:31

Tags: java log4j jdk1.6

Can anyone help me identify the exact problem? Is it the JVM, log4j, or something else in our application?

We are running a multi-threaded application on a Solaris 10 (SUNW, Sun-Fire-V240) server with JDK 1.6.0_24. The application uses RMI calls to communicate with its clients.

Our application hung. I can see the OutOfMemoryError below in the thread dump; as I understand it, this is thrown when GC manages to reclaim only about 2% of the heap.

  # java.lang.OutOfMemoryError: GC overhead limit exceeded
    Heap
     PSYoungGen      total 304704K, used 154560K [0xe0400000, 0xfbc00000, 0xfbc00000)
      eden space 154560K, 100% used [0xe0400000,0xe9af0000,0xe9af0000)
      from space 150144K, 0% used [0xf2960000,0xf2960000,0xfbc00000)
      to   space 145856K, 0% used [0xe9af0000,0xe9af0000,0xf2960000)
     PSOldGen        total 897024K, used 897023K [0xa9800000, 0xe0400000, 0xe0400000)
      object space 897024K, 99% used [0xa9800000,0xe03ffff0,0xe0400000)
     PSPermGen       total 28672K, used 27225K [0xa3c00000, 0xa5800000, 0xa9800000)
      object space 28672K, 94% used [0xa3c00000,0xa5696580,0xa5800000)

In my case this should be because GC cannot reclaim memory while so many threads are waiting. Looking at the thread dump, most threads are waiting to acquire a lock on an org.apache.log4j.Logger. We are using log4j-1.2.15.

In the first thread's trace below, it holds locks on two objects while the other threads (~50) wait to acquire them. Almost identical traces could be seen for 20 minutes.

Here is the thread dump:

     pool-3-thread-51" prio=3 tid=0x00a38000 nid=0xa4 runnable [0xa0d5f000]
        java.lang.Thread.State: RUNNABLE
       at java.text.DateFormat.format(DateFormat.java:316)
      at org.apache.log4j.helpers.PatternParser$DatePatternConverter.convert(PatternParser.java:443)
    at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
    at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
    at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
    - locked  (a org.apache.log4j.RollingFileAppender)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    - locked  (a org.apache.log4j.Logger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:296)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)





"Timer-3" prio=3 tid=0x0099a800 nid=0x53 waiting for monitor entry [0xa1caf000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
    - waiting to lock  (a org.apache.log4j.RollingFileAppender)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    - locked  (a org.apache.log4j.spi.RootLogger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.controlapp.export.AbstractOMDataCollector.run(AbstractOMDataCollector.java:100)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)



"TrapHandlerThreadPool:Thread-10" prio=3 tid=0x014dac00 nid=0x4f waiting for monitor entry [0xa1d6f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
    - waiting to lock  (a org.apache.log4j.RollingFileAppender)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    - locked  (a org.apache.log4j.Logger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.faultServer.db.ConnectionPool.printDataSourceStats(ConnectionPool.java:146)
    at com.airvana.faultServer.db.SQLUtil.freeConnection(SQLUtil.java:267)
    at com.airvana.faultServer.db.DbAPI.addEventOrAlarmOptimized(DbAPI.java:904)
    at com.airvana.faultServer.eventProcessing.EventProcessor.processEvent(EventProcessor.java:24)
    at com.airvana.faultServer.filters.BasicTrapFilter.processTrap(BasicTrapFilter.java:80)
    at com.airvana.faultServer.eventEngine.EventEngine.notifyTrapProcessors(EventEngine.java:314)
    at com.airvana.faultServer.eventEngine.NodewiseTrapQueue.run(NodewiseTrapQueue.java:94)
    at com.airvana.common.utils.ThreadPool$PoolThread.run(ThreadPool.java:356)


"RMI TCP Connection(27927)-10.193.3.41" daemon prio=3 tid=0x0186c800 nid=0x1d53 waiting for monitor entry [0x9f84e000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
    - waiting to lock  (a org.apache.log4j.RollingFileAppender)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    - locked  (a org.apache.log4j.Logger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.faultServer.processCommunications.ConfigAppCommReceiver.sendEvent(ConfigAppCommReceiver.java:178)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
    at sun.rmi.transport.Transport$1.run(Transport.java:159)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

"pool-3-thread-49" prio=3 tid=0x01257800 nid=0xa1 waiting for monitor entry [0xa0def000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.Category.callAppenders(Category.java:204)
    - waiting to lock  (a org.apache.log4j.Logger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.processSeqNumber(NioNotificationHandler.java:548)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:301)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:223)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:149)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)


"pool-3-thread-44" prio=3 tid=0x00927800 nid=0x9b waiting for monitor entry [0xa0f0f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.Category.callAppenders(Category.java:204)
    - waiting to lock  (a org.apache.log4j.Logger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:296)
    at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:223)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:149)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:221)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
    at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:76)
    at org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor$ChildExecutor.run(OrderedMemoryAwareThreadPoolExecutor.java:314)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

3 Answers:

Answer 0 (score: 8)

An OutOfMemoryError due to the GC overhead limit occurs when the JVM decides that it is spending too large a proportion of its time running the garbage collector. It is the classic sign of a heap that is nearly full.

When the heap is almost full, the JVM spends more and more time collecting garbage to reclaim less and less memory, and the percentage of time left over for useful work shrinks correspondingly.

My hypothesis is that your loggers are backing up because there is not enough time between GC runs to keep up with the logging rate. The large number of blocked threads is therefore a secondary symptom, not the root cause of the problem.


Assuming the above is true, the short-term fix is to restart the application with JVM options for a larger heap size. You could also change the GC overhead thresholds so that your application dies sooner. (That may seem a strange thing to do, but it is probably better for your app to die quickly than to grind along for minutes or hours first.)
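
For example, a restart command along these lines (a sketch only; the heap sizes, the threshold value, and the jar name are illustrative, not taken from the question):

    # Larger heap, plus a stricter overhead threshold so the JVM fails fast
    # instead of thrashing. GCTimeLimit defaults to 98 (% of time spent in
    # GC) and GCHeapFreeLimit to 2 (% of heap reclaimed); raising
    # GCHeapFreeLimit makes the OutOfMemoryError fire sooner.
    java -Xms1024m -Xmx2048m \
         -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=5 \
         -jar faultServer.jar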

The real fix is to track down why you are running out of heap space. You need to enable GC logging and watch the trend in memory usage while the application runs ... over hours, days, weeks. If you see long-term growth in memory usage, you probably have a memory leak of some kind, and you will need a memory profiler to track it down.
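
On JDK 1.6, GC logging can be enabled with HotSpot flags along these lines (the log path is illustrative):

    # Timestamped per-collection detail, written to a file so the usage
    # trend can be charted over days or weeks.
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/faultServer-gc.log ...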

Answer 1 (score: 2)

This problem description looks similar to bug 41214. I am not sure whether it is related to your problem, but some elements of the stack traces you posted resemble the ones in that bug.

So you may want to follow up on Stephen's suggestion and verify whether too many logger calls from multiple threads are causing heavy lock contention. Switching the loggers to a higher level might help, but it is not advisable if you need those log entries; I would only suggest it after careful consideration.
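
If contention rather than sheer volume turns out to be the culprit, one common mitigation (my own sketch, not part of the original answer; the appender names, file path, and sizes are illustrative) is to wrap the RollingFileAppender in an org.apache.log4j.AsyncAppender, which buffers events and performs the formatting and file I/O on a background thread, so callers hold the appender lock only long enough to enqueue an event. In log4j 1.2 the AsyncAppender has to be configured through the XML configurator:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

      <!-- The existing file appender; path, pattern, and sizes are illustrative. -->
      <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
        <param name="File" value="faultServer.log"/>
        <param name="MaxFileSize" value="10MB"/>
        <param name="MaxBackupIndex" value="5"/>
        <layout class="org.apache.log4j.PatternLayout">
          <param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
        </layout>
      </appender>

      <!-- Buffers events and writes them on a single background thread. -->
      <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
        <param name="BufferSize" value="512"/>
        <appender-ref ref="FILE"/>
      </appender>

      <root>
        <priority value="info"/>
        <appender-ref ref="ASYNC"/>
      </root>
    </log4j:configuration>

Raising the root priority from info to warn in the same file would also cut the logging volume, at the cost of losing the info-level entries this application clearly depends on.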

Answer 2 (score: 0)

So far, we have not seen this problem again.

We upgraded the JDK from 1.6.0_20 to 1.6.0_24, but I am not sure whether that is what made the problem go away.