Check your JVM memory settings.
The setting below is for Tomcat. Add -Xms1024m -Xmx1024m -XX:MaxPermSize=128m to the JAVA_OPTS line in catalina.bat, like this (that is for Windows; on Unix, make the equivalent change to JAVA_OPTS in catalina.sh):
set JAVA_OPTS=%JAVA_OPTS% -Xms1024m -Xmx1024m -XX:MaxPermSize=128m
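For the Unix side, a sketch of the equivalent change: rather than editing catalina.sh directly, Tomcat will source a setenv.sh file next to it if one exists, so the options can go there. The flag values simply mirror the Windows example above:

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced automatically by catalina.sh on startup.
# 1 GB initial/max heap plus 128 MB PermGen. Note -XX:MaxPermSize only applies
# to pre-Java 8 JVMs; Java 8+ replaced PermGen with Metaspace and ignores it.
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx1024m -XX:MaxPermSize=128m"
export JAVA_OPTS
```

Setting -Xms equal to -Xmx, as here, avoids heap resizing pauses at the cost of committing the memory up front.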
I will test this.
The JVM is already set to 1024 MB.
Any other ideas?
Those are rather low memory settings, really, but more importantly: how much physical memory do you actually have on your server? If it is running on decent hardware with a decent OS, you could increase the heap allocation to a few gigabytes, but don't go overboard or you will pay a performance hit during garbage collection.
Try 2 GB first and then increase gradually:
set JAVA_OPTS=%JAVA_OPTS% -Xms2048m -Xmx2048m -XX:MaxPermSize=512m
Did you do an update or upgrade lately?
If you did and are running Oracle Waveset 188.8.131.52 or newer, you might want to set these two parameters in Waveset.properties:
I've seen an "OutOfMemoryError" in one environment at a new patch level; setting the above two parameters to "false" solved the issue.
With 2 GB for the JVM, there's no Java OutOfMemory error anymore.
But now the task runs for 20 minutes and updates users; after those 20 minutes, nothing more happens, even though there are still accounts to update.
Any idea? A timeout, a time limit, a provisioning limit?
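One way to narrow down where a task stalls like this is to take a few thread dumps of the Tomcat JVM while it appears hung. A sketch, assuming standard JDK tools (jps, jstack) on the PATH and a single Tomcat process:

```shell
# Find the Tomcat JVM's process id (assumes exactly one Bootstrap process).
PID=$(jps -l | awk '/Bootstrap/ {print $1}')

# Capture three thread dumps ~10 seconds apart. A thread that stays BLOCKED,
# or parked inside the same resource/connector call across all dumps, is a
# strong hint at what the task is waiting on (timeout, lock, remote call).
for i in 1 2 3; do
  jstack "$PID" > "threaddump.$i.txt"
  sleep 10
done
```

Comparing the dumps also distinguishes a genuine hang (same stack every time) from very slow progress (stacks change between dumps).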
OutOfMemoryErrors also happen a lot when you are opening network connections at a much higher rate than you are closing them. This usually happens when you hit a resource or network limit on the resource side and it refuses to give you more connections, or blacklists you for a period for attempting too many. However, unless you have configured them to obey certain limits, Waveset resource connections will just keep trying to connect to the resource, retrying after each failed attempt, until the JVM runs out of memory and dies.
To fix this situation you can do two things:
- ensure that the resource you are connecting to can handle the volume of requests, add a limit to the connector to prevent it from opening too many connections, and increase the retry timeout so it doesn't hit the resource every 5 seconds (the default).
- increase the memory allocated to the JVM. This is required because the default allocation works for a very small number of users/requests, so it is fine in dev environments; for real use, where you have many more users and things to process, you will need considerably more memory.
I agree with handat. Originally our heap was set at 1 GB, but we frequently ran out of memory. After bumping it to 2 GB, we've never had that issue. It was always related to reconciliation tasks that look at 20,000+ accounts.