I am using WebLogic Server version 10.3.4.0. I have three applications deployed on this server, two of them developed with ADF and the third with GWT. I start the server with 10 GB of RAM assigned to it, and after a while (sometimes a day, sometimes five hours) the WebLogic server drives CPU usage up to 100% and, as a result, the server more or less stops responding. The number of users is not very large, maybe 10-20 per application. I have no idea whether the cause is the WebLogic server configuration or one of the applications. How can I find out? Do you have any ideas?
Anyway, my system info is as follows:
CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 4 cores
OS: Red Hat Enterprise Linux Server release 5.7, x86_64, 2.6.18
Thank you all for the replies; they were very helpful, and I actually used every single one of them to find out the following:
Part of the "ps –ef | java" command’s output shows that I am using jrockit-jdk1.6.0_20-R28.1.0-4.0.1 and -Xms10240m, -Xmx10240m options are set.
Using "top –H –p 3889" command (3889 is the process id obtained from the previous "ps –ef | java" command) shows that 4 "GC Worker Thread" process is run for each CPU core and they take 100% of the CPU time.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3895 root 25 0 13.1g 11g 21m R 99.9 38.1 15:54.95 (GC Worker Thre
3896 root 25 0 13.1g 11g 21m R 99.9 38.1 15:50.55 (GC Worker Thre
3893 root 25 0 13.1g 11g 21m R 98.6 38.1 15:42.09 (GC Worker Thre
3894 root 19 0 13.1g 11g 21m R 98.6 38.1 15:43.10 (GC Worker Thre
3899 root 20 4 13.1g 11g 21m S 1.3 38.1 7:07.59 (VM Periodic Ta
3889 root 18 0 13.1g 11g 21m S 0.0 38.1 0:00.00 java
3890 root 16 0 13.1g 11g 21m S 0.0 38.1 0:04.96 java
3891 root 16 0 13.1g 11g 21m S 0.0 38.1 0:00.12 (Signal Handler
3892 root 15 0 13.1g 11g 21m S 0.0 38.1 0:17.36 (OC Main Thread
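To map these hot native thread IDs back to Java threads, roughly this can be done (a sketch, assuming jrcmd from the same JRockit installation is on the PATH; note that GC worker threads are JVM-internal, so this mapping is mostly useful when the busy threads are application ExecuteThreads):

jrcmd 3889 print_threads > threads.txt    # thread dump of the WebLogic JVM
grep "tid=3895" threads.txt               # look up one of the ~100% CPU threads from top
# kill -3 3889 also writes a thread dump to the server's stdout/.out file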
I used JRockit Mission Control to see what happens. It turns out that after a while the 10 GB heap fills up; GC starts working to make room, but it can only free about 1 GB, the memory fills up again shortly afterwards, GC kicks in again, and this cycle repeats over and over, keeping CPU usage near 100%. What is wrong? Is it a memory leak? How can I find out which application (and which part of that application) is causing this? I am stuck and I would really appreciate your help.
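In case it is useful, this is roughly how the heap can also be inspected from the command line while the GC thrashing is going on (a sketch using JRockit-specific commands and flags, run against PID 3889):

jrcmd 3889 print_object_summary    # heap histogram: which classes hold the most memory
jrcmd 3889 heap_diagnostics        # more detailed heap breakdown
# optionally restart with verbose GC logging to see how little each collection reclaims:
#   JAVA_OPTIONS="$JAVA_OPTIONS -Xverbose:memory -Xverboselog:/tmp/gc.log"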
The following is part of the thread stack dump of the server:
The stack trace shows that thread 3908 is stuck waiting for a reply from the database server.
A couple of things:
1. Check database and network performance.
2. Which GC strategy are you using? (See the sketch after this list.)
You can use the JRockit Memory Leak Detector to check for memory leaks in your code.
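Regarding point 2, a quick way to check it, plus a couple of typical JRockit choices (only a sketch; the right strategy depends on your workload):

ps -o args -p 3889 | tr ' ' '\n' | grep -i '^-Xgc'    # see whether any GC flag was passed
# if none is set, JRockit picks a default; typical explicit choices via JAVA_OPTIONS:
#   -XgcPrio:throughput    optimize for throughput
#   -Xgc:gencon            generational concurrent collector (shorter pauses)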
You are right buddy, this is the line where the thread is getting stuck. And there is nothing wrong with the statement you gave.
If you follow the stack trace, it is waiting for T4CPreparedStatement.fetch over the network.
In the general case, ViewObject.createRow should not cause stuck threads. I recommend you look into application performance tuning, mainly AppModule tuning. Check this link:
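For what it is worth, the AM pool parameters involved normally live in each application's bc4j.xcfg, but they can also be set as -D system properties when starting WebLogic (bc4j.xcfg entries, if present, usually take precedence). A rough sketch with placeholder values only; the right numbers depend on how many concurrent users each application really has:

# added to JAVA_OPTIONS in setDomainEnv.sh; all values below are placeholders
-Djbo.ampool.initpoolsize=10
-Djbo.ampool.maxavailablesize=25
-Djbo.ampool.minavailablesize=5
-Djbo.recyclethreshold=20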
Thank you very much, Ganesh. I am going to check the AM tuning as you suggested, but one more question: how did you work out that T4CPreparedStatement.fetch is the cause of the problem? I mean, when I follow the stack trace towards the top (starting from ViewObjectImpl.createRow) I can see lines such as:
^-- Holding lock: oracle/jbo/JboSyncLock@0x2ca5f7460[thin lock]
or this one
^-- Holding lock: oracle/jdbc/driver/T4CConnection@0x285c6cfe0[thin lock]
but as far as I can tell there is nothing wrong with these lines.
From the stack trace it is visible that the thread is stuck in the createRow() method. This method is trying to fetch data from the DB via the T4CPreparedStatement.fetch() call. Going deeper, you can say the thread is stuck while fetching data, which should not normally happen. So there is something seriously wrong with either the connection, the application tuning, or the database.
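One defensive measure worth considering when threads hang on fetches over the network (a sketch, not something taken from the dump above): the Oracle thin driver supports the oracle.jdbc.ReadTimeout connection property, which makes a blocked socket read fail after the given number of milliseconds instead of leaving the thread stuck indefinitely. In WebLogic it goes into the data source's connection pool properties (roughly Services > JDBC > Data Sources > your data source > Connection Pool > Properties); the value here is only a placeholder:

oracle.jdbc.ReadTimeout=300000    # 5 minutes, in milliseconds; tune to your slowest legitimate query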