I suggest that you first verify that they do in fact stay around forever, and if not, how long they stay around.
If they stay around forever, then your choices are:
- Get an update to the library that fixes that.
- Create a separate process that manages the connections and expose it through a communication API. Your main app manages that process and uses the API to reach the functionality you need. Run it as a pool, so that after a while one instance is 'old' and your main app kills it.
Otherwise you might investigate whether the intention of the tool was to cache connections. If so, keeping connections around is normally a good thing. If you reuse the same target endpoint, then perhaps you are using the library wrongly. If you use many different endpoints, then the tool isn't really ideal for your needs, but it should provide configuration values that control how long items stay in the cache and how many it holds. Either of those would let you limit the number of cached connections.
It is not possible to set that particular attribute via system properties, but anyway based on your stack trace it doesn't look as if a shorter timeout would help. If there are thousands of threads with that stack trace then there should also be thousands of threads that are servicing JMX requests. That is my reading of the code at any rate. Otherwise it would seem to be a bug in the JMX implementation, and it would be very interesting to know what JMX requests were issued over those connections before they were abandoned.
Éamonn McManus -- JMX Spec Lead -- [http://weblogs.java.net/blog/emcmanus]
Éamonn, you are right. It looks like setting that timeout does not help. I tracked the problem down to the client connecting with a JMXConnector but never calling JMXConnector.close(). In the client's JVM, I found a large number of threads that look like this:
"Thread-10897" daemon prio=10 tid=0x00002aaac81e8000 nid=0xfc1 waiting on condition [0x00002aad44dd6000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
The number of JMX client checker threads in the client's JVM matches the number of JMX server connection timeout threads in the server's JVM. If I stop the client's JVM, all the JMX server timeout threads disappear within 2 minutes, the default JMX connection timeout. My problem is that the client is part of a webapp that serves other requests, so the client's JVM is normally kept up and running all the time. I see only two ways to fix this:
1) Fix the client code to call JMXConnector.close().
2) Restart the client's JVM periodically, before the server crashes.
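The first option is the simplest if you can touch the client code. A minimal sketch, with a placeholder service URL, of a client that always closes its connector even when the query fails:

```java
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClient {
    // Query the remote MBean server, making sure the connector is closed
    // even if the query throws. An unclosed connector leaves a checker
    // thread alive in the client JVM and a matching connection timeout
    // thread in the server JVM.
    public static void query(String urlString) throws Exception {
        JMXServiceURL url = new JMXServiceURL(urlString);
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("MBean count: " + mbsc.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}
```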
Well, you do have another option. You can intercept the creation of a JMXConnector or JMXConnectorServer by providing your own JMXConnector[Server]Provider, which can modify the environment Map before instantiating RMIConnector or RMIConnectorServer directly. That means that you could set "jmx.remote.x.client.connection.check.period" to 0 in that Map when the JMXConnector client is created, so that you turn off the "heartbeat" that is keeping these unclosed client connections alive. Or, you could set "jmx.remote.x.server.connection.timeout" to a small value on the server side, so that these idle connections get closed rapidly. In either case you can do this just by inserting an extra jar in the classpath when the application is launched, and either using a ServiceProvider entry in your jar or a system property with a provider package list. This is described in the javadoc for JMXConnectorFactory <http://download.oracle.com/javase/6/docs/api/javax/management/remote/JMXConnectorFactory.html>.
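A sketch of such a provider for the client side, taking the heartbeat-disabling route described above. The package name below is a placeholder: JMXConnectorFactory looks up a class called ClientProvider in a package named <pkg>.<protocol>, so for the "rmi" protocol this class would live in, say, com.example.jmx.rmi and you would launch the client with -Djmx.remote.protocol.provider.pkgs=com.example.jmx.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorProvider;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.rmi.RMIConnector;

// Note: for JMXConnectorFactory to discover this provider, the class must
// be named ClientProvider and be declared in a package ending in ".rmi"
// (e.g. the placeholder com.example.jmx.rmi), with the parent package
// given via -Djmx.remote.protocol.provider.pkgs=com.example.jmx
public class ClientProvider implements JMXConnectorProvider {
    public JMXConnector newJMXConnector(JMXServiceURL url,
                                        Map<String, ?> environment)
            throws IOException {
        // Copy the caller's environment and turn off the client heartbeat,
        // so unclosed client connections no longer keep the corresponding
        // server-side connection (and its timeout thread) alive forever.
        Map<String, Object> env = environment == null
                ? new HashMap<String, Object>()
                : new HashMap<String, Object>(environment);
        env.put("jmx.remote.x.client.connection.check.period", 0L);
        return new RMIConnector(url, env);
    }
}
```

Constructing the RMIConnector directly (rather than recursing into JMXConnectorFactory) is what lets the provider inject the modified environment without triggering its own lookup again.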