2 Replies Latest reply: Jun 10, 2011 12:52 PM by 811912

    ParallelGC threads taking 90% of CPU

      HW: N240 (4GB, 2CPU )
      JDK: 1.6.0_19
      VM options: -XX:MaxGCPauseMillis=3000 -XX:GCTimeRatio=19 -XX:+UseParallelGC -XX:+UseParallelOldGC, 2 parallel GC threads, -Xms256m, -Xmx2048m

      We have an issue with ParallelGC: at some point the GC threads start running continuously and occupy most of the CPU, and they never come down. The Java process has not reached its Xmx value, yet it is throwing OOM exceptions. I think this is the GC overhead limit feature; to disable it we would need to add the overhead-limit option (-XX:-UseGCOverheadLimit).
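      For reference, a sketch of the full command line with the options listed above plus the overhead-limit check disabled and GC logging turned on (flag spellings as in HotSpot 6; heap sizes copied from above; `MyApp` is a placeholder for the actual main class):

```
java -Xms256m -Xmx2048m \
     -XX:+UseParallelGC -XX:+UseParallelOldGC \
     -XX:ParallelGCThreads=2 \
     -XX:MaxGCPauseMillis=3000 -XX:GCTimeRatio=19 \
     -XX:-UseGCOverheadLimit \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     MyApp
```

      Note that -XX:-UseGCOverheadLimit only suppresses the "GC overhead limit exceeded" OutOfMemoryError; it does not make the underlying GC thrashing go away.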

      But here is what we don't understand: when the heap has not reached its Xmx, why should the JVM throw an OOM error? Can the heap not grow at that point?

      Here is the gc thread's stack dump.

      ----------------- lwp# 3 / thread# 3 --------------------
      fe70a1bc __1cGBitMapLpar_set_bit6MI_b_ (fee3b71c, 68461d8, d08c38, 1, 72800000, d08c3b) + 3c
      feb3a114 __1cNParMarkBitMapImark_obj6MpnIHeapWord_I_b_ (fee3b714, aba30ec0, 18, 2, 0, 34230ec0) + 10
      fe8e6af8 __1cNinstanceKlassToop_follow_contents6MpnUParCompactionManager_pnHoopDesc__v_ (fee3b714, ae3e0, aba30ea8, aba30eb8, aba30ec0, 1000000) + 130
      feb1d89c __1cNobjArrayKlassToop_follow_contents6MpnUParCompactionManager_pnHoopDesc__v_ (aba29fa0, ae3e0, aba30ea8, 77890848, 6, fee3b730) + 1dc
      feb56ec8 __1cUParCompactionManagerUdrain_marking_stacks6MpnKOopClosure__v_ (3, ae3ec, ae3e0, ae3e4, aba29d70, ae3e4) + 124
      feb4a964 __1cRMarkFromRootsTaskFdo_it6MpnNGCTaskManager_I_v_ (4fce8, a0, 0, 4, ae3e0, feb4a7a0) + 19c
      fe64c29c __1cMGCTaskThreadDrun6M_v_ (38000, 36740, feb4aef8, 4fce8, 386dc, 382f0) + 1c8
      feb2a198 java_start (38000, 358, fedfc000, fed4888d, 391d0, fee488b4) + 22c
      ff2c9910 lwpstart (0, 0, 0, 0, 0, 0)
      ----------------- lwp# 4 / thread# 4 --------------------
      fe8e6a88 __1cNinstanceKlassToop_follow_contents6MpnUParCompactionManager_pnHoopDesc__v_ (fee3b714, ce480, 825aec80, 825aec94, 825aed68, 2000) + c0
      feb56fb0 __1cUParCompactionManagerUdrain_marking_stacks6MpnKOopClosure__v_ (8b8, ce48c, ce480, ce484, 3, ce484) + 20c
      feb4ac6c __1cQStealMarkingTaskFdo_it6MpnNGCTaskManager_I_v_ (8f8f90, 1, 11, 1, ce480, 7777fdbc) + c0
      fe64c29c __1cMGCTaskThreadDrun6M_v_ (39400, 36758, feb4ae8c, 8f8f90, 39b64, 39778) + 1c8
      feb2a198 java_start (39400, 358, fedfc000, fed4888d, 3a658, fee488b4) + 22c
      ff2c9910 lwpstart (0, 0, 0, 0, 0, 0)
      ----------------- lwp# 5 / thread# 5 --------------------
      ff2cd6e4 lwp_cond_wait (140248, 140230, 0, 0)
      feb321b0 __1cCosNPlatformEventEpark6M_v_ (140230, 1, fed4a47b, b1800, 140200, fee4c748) + 100
      feb10afc __1cHMonitorFIWait6MpnGThread_x_i_ (8a8028, 13f800, 0, 8a8038, fee2d820, b8000) + dc
      feb1184c __1cHMonitorEwait6Mblb_b_ (8a8028, 13f800, 0, 0, fee, 3d400) + 350
      fe5e1664 __1cUWaitForBarrierGCTaskIwait_for6M_v_ (8a8028, 157f34e8, 1, fee50694, 54694, 8f8fc0) + 58
      feb5e158 __1cRPSParallelCompactNmarking_phase6FpnUParCompactionManager_b_v_ (ee520, fee3b07c, fee3b000, 2, 18, 774ff808) + 334
      feb5d408 __1cRPSParallelCompactQinvoke_no_policy6Fb_v_ (9ca40, 4fcb0, 36a00, 34c00, 35f70, 34a78) + 4fc
      feb6319c __1cKPSScavengeGinvoke6F_v_ (34a78, 12, 37cc8, 34ac0, 34af0, a) + 15c
      feb3ef90 __1cUParallelScavengeHeapTfailed_mem_allocate6MIb_pnIHeapWord__ (34a78, 6, 0, 34a78, a, 9) + 88
      fe5e40a4 __1cbDVM_ParallelGCFailedAllocationEdoit6M_v_ (70c7f284, 34a78, 9, 34ac0, 8, 34af0) + 7c
      fe5e00d0 __1cMVM_OperationIevaluate6M_v_ (70c7f284, 9ca40, fedfc000, fee, 7f738d, 3d400) + 80
      fec50e54 __1cIVMThreadSevaluate_operation6MpnMVM_Operation__v_ (9ca40, 70c7f284, 4fcb0, 7f738d, 4fcb8, fedfc000) + cc
      fec513ec __1cIVMThreadEloop6M_v_ (0, fe6fb994, 31af0, 0, 4bc00, 13ee28) + 45c
      fe65f9c0 __1cIVMThreadDrun6M_v_ (13f800, 3c800, fee38a3c, fedfc000, 3ca3c, 3c800) + 98
      feb2a198 java_start (13f800, 358, fedfc000, fed4888d, 140548, fee488b4) + 22c
      ff2c9910 lwpstart (0, 0, 0, 0, 0, 0)

      Even when the system is idle, these threads continuously consume CPU.

      Heap description from the thread dump:

      PSYoungGen total 80960K, used 39396K [0xd1000000, 0xde800000, 0xfbc00000)
      eden space 28992K, 100% used [0xd1000000,0xd2c50000,0xd2c50000)
      from space 51968K, 20% used [0xd2d40000,0xd3769290,0xd6000000)
      to space 95808K, 0% used [0xd8a70000,0xd8a70000,0xde800000)
      ParOldGen total 1400832K, used 1400831K [0x7b800000, 0xd1000000, 0xd1000000)
      object space 1400832K, 99% used [0x7b800000,0xd0fffd18,0xd1000000)
      PSPermGen total 28672K, used 26807K [0x77800000, 0x79400000, 0x7b800000)
      object space 28672K, 93% used [0x77800000,0x7922dee8,0x79400000)

      Any thoughts or ideas on why the ParallelGC threads would take this much CPU?
        • 1. Re: ParallelGC threads taking 90% of CPU
          When the heap is almost full, the GC spends more of its time examining and copying the objects which are kept than it does finding objects to remove.

          Say you can have 1 million objects in your system and you are 10% full: this means 10% need to be examined, and you will have 90% free afterwards, giving you a long time before the next GC.

          Say you have a system which is 90% full: this means 9x as many objects need to be examined (taking 9x longer) and only 10% will be freed, giving you little time before the next GC is required. If your program is creating objects at the same time, you can fill the freed memory as fast as it is freed, and you will be constantly GCing.
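          The arithmetic above can be sketched in a few lines; this assumes the simple model that mark cost is proportional to the live fraction and the reclaimed space is the rest (the class and method names are illustrative, not part of any JVM API):

```java
public class GcCostSketch {
    // Simple model: mark/copy work grows with the fraction of the heap
    // that is live, and the space reclaimed is whatever is not live.
    static double relativeMarkCost(double liveFraction) {
        return liveFraction;
    }

    static double freedFraction(double liveFraction) {
        return 1.0 - liveFraction;
    }

    public static void main(String[] args) {
        double costAt10 = relativeMarkCost(0.10); // heap 10% full
        double costAt90 = relativeMarkCost(0.90); // heap 90% full

        // A 90%-full heap costs 9x as much to mark as a 10%-full one...
        System.out.printf("mark cost ratio (90%% vs 10%% full): %.0fx%n",
                costAt90 / costAt10);
        // ...yet frees only a ninth of the space, so collections recur
        // much sooner and GC CPU usage climbs.
        System.out.printf("freed at 10%% full: %.0f%%%n", freedFraction(0.10) * 100);
        System.out.printf("freed at 90%% full: %.0f%%%n", freedFraction(0.90) * 100);
    }
}
```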
          • 2. Re: ParallelGC threads taking 90% of CPU

            I agree with your point; no issues with that.

            But here is my doubt. In the scenario I explained, why does the JVM not expand its heap? It has not reached its maximum. I understand the GC is busy collecting, but it may or may not be able to collect anything, or there may be nothing to collect. So my only point is: why does the JVM not grow the heap? If, even after reaching its maximum, it could not make any room for new objects and then failed with OOM, there would be no issue; at that point we could say the app has a leak, or that it simply needs more memory.

            Thank you
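            One thing worth checking in the heap dump above: each generation line prints its addresses as [start, committed-end, reserved-end). For ParOldGen the committed end and the reserved end are the same value (0xd1000000), which would mean the old generation is already expanded to its maximum and cannot grow, even though the overall -Xmx has not been exhausted. A small sketch of that arithmetic, using the addresses copied from the dump:

```java
public class OldGenGeometry {
    public static void main(String[] args) {
        // Addresses from the "ParOldGen" line of the heap dump:
        // ParOldGen total 1400832K ... [0x7b800000, 0xd1000000, 0xd1000000)
        long start     = 0x7b800000L; // low boundary of old gen
        long committed = 0xd1000000L; // end of committed space
        long reserved  = 0xd1000000L; // end of reserved space

        long committedK = (committed - start) / 1024;
        System.out.println("old gen committed: " + committedK + "K");

        // committed == reserved: there is no room left for the old
        // generation to expand, so the JVM cannot grow it further
        // regardless of how much of -Xmx appears unused.
        System.out.println("room left to grow: " + (reserved - committed) + " bytes");
    }
}
```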