I'm running a Java server with JRockit 1.5.0. This is an "event processing" application: it reads events on a socket (SNMP port 162) and processes them by saving them to a database and then forwarding a JMS message (this is a client to a JBoss 4.2.2 server). So there are a lot of threads and a lot of socket I/O, between the incoming SNMP traps and the outgoing JDBC and JMS communication.
I recently started seeing some OutOfMemory errors in the logs for this process like the following:
java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 9616, Num elements: 4798
java.lang.OutOfMemoryError: mmAllocArray - Object size: 192, Num elements: 44
java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 65552, Num elements: 65536
I googled around and found the following as a way to resolve this issue:
I added these options, but I still see some OutOfMemoryErrors. The large-object and malloc complaints seem to be gone; now the only complaints are about "nativeGetNewTLA".
All processing stops once these OutOfMemoryErrors occur; nothing more is written to the database, and no more messages are written to JMS.
What can I do to fix this? Raise the tlaSize more? If so, how do I know how much to raise it to? Is there any good documentation on what this TLA feature is and under what conditions it starts operating? The total system that I work on consists of a number of Java processes, all doing a combination of network I/O, database access, and JMS, but this is the only process where I've seen these OutOfMemoryErrors occur.
I'd like to understand more about how Thread Local Areas work, as well as how to fix this problem.
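For context on what a TLA is: JRockit gives each thread its own Thread Local Area, a chunk of heap the thread can allocate from without taking the global heap lock; when a thread's TLA fills up, it requests a fresh one (which is where "nativeGetNewTLA" can fail if no suitable free block is available). Below is a minimal sketch of the allocation pattern that churns through TLAs in a server like this, one small-allocation burst per incoming event across many threads. The class and method names, sizes, and counts are all illustrative, not taken from the actual application.

```java
// Sketch: the allocation pattern that stresses JRockit Thread Local Areas (TLAs).
// Each thread allocates small objects into its own TLA lock-free; when the TLA
// fills, the thread asks the heap for a new one. Many threads doing bursts of
// small allocations (one burst per SNMP trap, say) churn through TLAs quickly.
public class TlaStressSketch {

    // Allocate `count` small byte arrays of `size` bytes each, roughly what an
    // event handler might do per trap; returns the total bytes requested.
    static long allocateBatch(int count, int size) {
        long total = 0;
        for (int i = 0; i < count; i++) {
            byte[] buf = new byte[size]; // small enough to be served from the TLA
            total += buf.length;
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[8];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                // Each worker churns through many short-lived small allocations,
                // repeatedly forcing its thread to request new TLAs.
                for (int round = 0; round < 1000; round++) {
                    allocateBatch(100, 256);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println("done");
    }
}
```

Raising the preferred TLA size reduces how often threads go back to the heap for a new TLA, but each new TLA then needs a larger contiguous free block, which interacts badly with a fragmented heap.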
Any help would be much appreciated...
java version "1.5.0_19"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_19-b02)
BEA JRockit(R) (build R27.6.5-32_o-121899-1.5.0_19-20091001-2113-linux-ia32, compiled mode)
Also, maybe the "Best Practices" JRockit documentation can help you tune your JVM. Have a look if you haven't come across it yet: http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/bestpractices.html
We are facing the same issue with JRockit R27.6 on RHEL 5 and WebLogic 10.3.1. Our production environment goes down every week because of this. We have also raised an Oracle support ticket, but that is taking its time.
We have set our parameters as below:
-jrockit -Xms1024m -Xmx1024m -XXlargeObjectLimit=128k -XXtlaSize:min=128k,preferred=512k
However, now every week in all environments (including DEV) we get "java.lang.OutOfMemoryError: nativeGetNewTLA" errors.
IMHO, tuning the JVM (apart from bumping up the heap size) rarely helps solve memory issues. With Java 6, the JVM is already self-tuning for most scenarios.
What you really should do when you have an OOM problem is take a heap dump (-XX:+HeapDumpOnOutOfMemoryError) and analyze it to understand which objects (normally a single class of object is responsible for the trouble) are taking up most of the heap space. Use YourKit, Eclipse MAT, VisualVM, or any other tool you like. In any case, you need tools; you will get nowhere using only your bare hands.
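While waiting to capture and analyze a proper heap dump, a crude in-process check of heap usage can at least tell you how close the process is running to its limit. This sketch uses only the standard java.lang.Runtime API; the 90% threshold is an arbitrary example, not a recommended value.

```java
// Sketch: a rough in-process heap usage check, suitable for periodic logging
// while a real heap-dump analysis is pending. Uses only java.lang.Runtime.
public class HeapCheck {

    // Returns the percentage of the maximum heap currently in use (0-100).
    static int usedHeapPercent() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (int) (used * 100 / rt.maxMemory());
    }

    public static void main(String[] args) {
        int pct = usedHeapPercent();
        System.out.println("heap used: " + pct + "%");
        if (pct > 90) { // example threshold: warn loudly before an OOM hits
            System.err.println("WARNING: heap nearly full, capture a dump now");
        }
    }
}
```

Note that with TLA-related failures the heap can be far from full and allocation can still fail, so a low percentage here does not rule out fragmentation; it only rules out a plain heap exhaustion.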
This is probably due to fragmentation of the heap.
As more and more garbage collections run, the heap gets fragmented, which can mean that
large objects can no longer be allocated even though enough total free space remains. Usually, after each full garbage collection a compaction is run, i.e.,
objects are moved in order to create more contiguous free space.
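The fragmentation scenario described above can be sketched in a few lines: after many allocations and deaths, the free space is large in total but scattered into small holes, so no single hole can satisfy one large request (such as a new TLA or a large array) until compaction moves the survivors together. The sizes and counts below are arbitrary illustrations, not measurements from a real heap.

```java
// Sketch illustrating heap fragmentation: total free space can be large while
// no single free block is big enough for one large allocation.
public class FragmentationSketch {

    // Null out every other slot, simulating objects that died between
    // survivors; returns the number of holes created.
    static int punchHoles(byte[][] blocks) {
        int holes = 0;
        for (int i = 0; i < blocks.length; i += 2) {
            blocks[i] = null;
            holes++;
        }
        return holes;
    }

    public static void main(String[] args) {
        byte[][] blocks = new byte[100][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[64 * 1024]; // medium blocks, back to back
        }
        int holes = punchHoles(blocks);
        // Plenty of total free space now, but scattered in 64 KB holes: on a
        // non-compacting heap a single multi-megabyte array could still fail.
        // Compaction after a full GC moves the survivors together so the free
        // space becomes contiguous again.
        System.out.println(holes + " holes of 64 KB each");
    }
}
```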
How is your JVM tuned? Are you using the pausetime or deterministic GC mode?
More information can be found here: http://middlewaremagic.com/weblogic/?p=7083