We are currently using Cache Coordination among a cluster of 5 JVMs sending change sets, using the RMI transport. I remember that a while back we asked Oracle about the limits of cache sync with RMI, and they said that 5 JVMs was the recommended limit and to use the JMS transport for more scalability.
Is that 5 JVM limit using RMI still true?
Can you explain why using JMS for the transport scales better and what are the limits of that?
I'm assuming at some point with a large number of JVMs in a cluster with a large number of cache coordination changeset messages that the JVMs will be bogged down in locking the identity maps with all of the updates. True?
Thanks for your help.
I have not heard of a hard limit on JVMs, but at some point it makes more sense to switch to JMS just because of the way it works. RMI requires a connection to each remote JVM and sending the changes to each one individually, while JMS only requires sending the changes to a single point. So each additional node in the cluster requires every node to create 2 more connections with RMI, while it changes nothing if using JMS.
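The connection growth is easy to see with some back-of-envelope arithmetic. This is just an illustrative sketch, not EclipseLink API: with RMI every node holds a connection to every other node (directed, hence the factor of 2 per node pair), while with JMS every node only connects to the one broker.

```java
// Rough comparison of transport connection counts as the cluster grows.
// Method names are illustrative only.
public class ConnectionCount {
    // RMI: each of the n nodes connects to each of the other n - 1 nodes.
    static int rmiConnections(int n) { return n * (n - 1); }

    // JMS: each node holds one connection to the shared topic/broker.
    static int jmsConnections(int n) { return n; }

    public static void main(String[] args) {
        for (int n : new int[] {2, 5, 10}) {
            System.out.println(n + " nodes: RMI=" + rmiConnections(n)
                    + ", JMS=" + jmsConnections(n));
        }
    }
}
```

At 5 nodes that is 20 RMI connections versus 5 JMS connections, and the RMI number grows quadratically from there, which is one concrete reason the JMS transport scales better on the sending side.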
This really only factors in the sending of messages. On the receiving side there is only one JMS listener instead of multiple RMI connections, but the results should be similar. So if the volume of changes is causing the identity maps to be bogged down with all the updates, you might want to evaluate switching to just invalidating objects instead of sending complete change sets, or using something other than cache sync entirely. For entities that are updated quite a bit throughout the cluster, it might be better to turn off cache sync for those objects and use an isolated cache for them instead.
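As a sketch of what those two options look like per entity, EclipseLink exposes both through the @Cache annotation. The entity names below are hypothetical, and note that the isolation attribute is the newer form; older EclipseLink releases expressed the same thing with isolated=true.

```java
import javax.persistence.Entity;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheCoordinationType;
import org.eclipse.persistence.config.CacheIsolationType;

// Frequently updated entity: broadcast invalidations rather than
// full change sets, so receivers just drop the stale cache entry
// instead of merging changes under the identity-map locks.
@Entity
@Cache(coordinationType = CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
class OrderStatus {
    // ... fields, id, etc.
}

// Very hot entity: opt out of the shared cache (and therefore out of
// cache coordination) entirely; each session reads it fresh.
@Entity
@Cache(isolation = CacheIsolationType.ISOLATED)
class AuditRecord {
    // ... fields, id, etc.
}
```

Invalidation keeps coordination messages small and cheap to apply at the cost of a re-read on next access, while isolation removes those entities from coordination traffic altogether.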
Some information and tips are contained here:
Your response validated my thinking about this. If I were to start over again, I would design the app to not require cache sync at all. As you mention, JMS will help with minimizing connections, but we'll still run into the locking issue. We plan to experiment with isolated caches and other cache tuning in the meantime.