If a WebLogic server is targeted by a Coherence Cluster, at server start-up, operational configuration will be created and a Coherence cluster will be started using that operational configuration. The operational configuration is derived from the settings in the Coherence Cluster and the optionally attached override file.
Thanks for the reply, David. Everyone else has been dodging this question. :) But can you clarify your answer a bit?
Suppose Coherence cluster named "COH_CLUSTER" targets WebLogic application server cluster named "APP_CLUSTER".
When you said "at server start-up", did you mean the APP_CLUSTER startup or the COH_CLUSTER startup?
You have to target individual WLS servers, not the WLS cluster. By server start-up, I mean the start-up of the WLS managed servers. As each managed server starts, it will check to see if it is targeted by a Coherence cluster (CoherenceClusterSystemResourceMBean). If it is, it will generate an operational config from the MBean config combined with the attached file (if any). The code will call CacheFactory.ensureCluster() so that a Coherence cluster service is started for that managed server JVM. Of course, the operational config needs to match the operational config of any stand-alone Coherence cluster JVMs in order for the WLS server's node to join.
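To illustrate what "the operational config needs to match" means in practice, here is a minimal sketch of an override file in the Coherence 3.x format. The cluster name, address, and port are placeholders; the point is that the same values must be in effect for both the WLS-managed members and any stand-alone JVMs, or they will form separate clusters.

```xml
<!-- Illustrative tangosol-coherence-override.xml sketch.
     COH_CLUSTER and the multicast address/port are placeholders;
     they must match whatever the stand-alone members use. -->
<coherence>
  <cluster-config>
    <member-identity>
      <cluster-name>COH_CLUSTER</cluster-name>
    </member-identity>
    <multicast-listener>
      <address>224.3.5.2</address>
      <port>35000</port>
    </multicast-listener>
  </cluster-config>
</coherence>
```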
Excellent, thanks for the detailed information.
We actually tried this, but it didn't seem to work. Specifically, when our application itself called CacheFactory.ensureCluster(), the logging by Coherence indicated that it used the default Coherence configuration files in the coherence.jar, and it did not load any other override files. We were hoping that it would generate a config file somewhere, corresponding to our Coherence cluster's configuration, and we were hoping that it would add it to the app's classpath, but it did not do this.
So, we had to manually modify the classpath for the application server to add: (1) the directory containing our operational override, and (2) the coherence jar. Only then did our application successfully join our cluster.
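For concreteness, this is roughly the kind of edit we made. All paths here are placeholders; in a WLS domain the usual place for this is the domain's setDomainEnv.sh, which honors a PRE_CLASSPATH variable that gets prepended to the server classpath.

```shell
# Illustrative fragment for setDomainEnv.sh; all paths are placeholders.
# (1) the directory containing tangosol-coherence-override.xml,
# (2) the Coherence jar itself.
OVERRIDE_DIR=/opt/myapp/config
COHERENCE_HOME=/opt/oracle/coherence
PRE_CLASSPATH="${OVERRIDE_DIR}:${COHERENCE_HOME}/lib/coherence.jar:${PRE_CLASSPATH}"
export PRE_CLASSPATH
```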
How can we verify that WLS is doing what it is supposed to do? We tried inspecting the classpath and the Coherence logs to see if WLS performed what you described, but as I said we did not see any indication of this. Maybe this magic is happening somewhere else that I'm not aware of/expecting?
What I described is for server scoped use of Coherence, which requires that you edit your WebLogic Server system classpath to include coherence.jar and WL_HOME/common/deployable-libraries/active-cache.jar in the system classpath. The active-cache.jar should be referenced only from the deployable-libraries folder in the system classpath and should not be copied to any other location.
I haven't tried it myself, but you can also reference those jars from weblogic-application.xml. If you do that, you also have to add a coherence-cluster-ref in weblogic-application.xml to the CoherenceClusterSystemResourceMBean. Again, I have only tried with the system classpath approach.
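I haven't tried this path either, but based on the description above the descriptor would look roughly like the following sketch. The library names are the as-deployed shared-library names and will vary by installation, and the cluster name must match the Coherence Cluster system resource configured in the console.

```xml
<!-- Illustrative weblogic-application.xml fragment (untested sketch).
     Library names and COH_CLUSTER are placeholders for your deployment. -->
<weblogic-application
    xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <library-ref>
    <library-name>coherence</library-name>
  </library-ref>
  <library-ref>
    <library-name>active-cache</library-name>
  </library-ref>
  <coherence-cluster-ref>
    <coherence-cluster-name>COH_CLUSTER</coherence-cluster-name>
  </coherence-cluster-ref>
</weblogic-application>
```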
The actual jar name is active-cache-1.0.jar. It exists only to update the classpath: its manifest Class-Path references the Coherence integration jar.
The integration jar contains the code that creates the operational config.
So, to summarize:
WLS just needs to include active-cache*.jar and coherence*.jar in its classpath.
We do not need to explicitly add the operational config file to the classpath because the active-cache will handle that.
Specifically, when WLS starts up, it will invoke code in ...coherence.integration*.jar (reached via active-cache*.jar) to create the operational override config file (based on the Coherence cluster settings), and somehow (TBD, but I'll take your word for it) the application will detect this config file when it starts to use Coherence.
And we can probably confirm this behavior in the WLS logs when it starts up.
Thanks for your help! I'm marking this question as answered!
Just to be completely clear, you have limited operational settings in the WLS config, but can set anything you want in an external file. There's a place in the admin console to say you want to use a "Custom Cluster Configuration File". That file can be the override file you want to use for the cluster.
Yes, and we may very well need to do that. Coherence provides so many options that aren't exposed in the Admin Console UI, like configuring POF serialization, or even something as basic as specifying the cache config.
Granted, these can be done via setting Java properties on startup, but sometimes it's nicer to specify these in an XML file, as you said. Of course, we can't do all properties this way, for the reason that some properties are not global and apply only to some servers, e.g. setting localstorage=false, enabling JMX, etc.
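As a sketch of the per-server property approach, these are Coherence 3.x system property names, set per managed server (e.g. in the console's Server Start arguments). The cache-config path is a placeholder.

```shell
# Illustrative per-server start arguments (Coherence 3.x property names).
# localstorage=false makes this member storage-disabled; the cacheconfig
# path is a placeholder for your own cache configuration file.
JAVA_OPTIONS="${JAVA_OPTIONS} \
  -Dtangosol.coherence.distributed.localstorage=false \
  -Dtangosol.coherence.cacheconfig=/opt/myapp/config/coherence-cache-config.xml"
```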
Speaking of monitoring... we are supposed to be able to go to Deployments -> appname -> Monitoring -> Coherence and monitor Coherence this way. However, the Coherence cluster is not showing up in the UI, even when we have targeted the WL servers. (But, we haven't added the active-cache jar yet.) Do you happen to know what steps are needed to get this to show up? (Maybe I should start a new thread for this topic...)
BTW, I'm familiar with the various JMX options, including configuring security, and we've even gotten remote JMX working by enabling and configuring remote JMX on one of the WL servers and using JConsole. I was just wondering whether the WL Admin Console monitoring used JMX or something else, and was wondering what the requirements were to get it working.
A couple of pointers.
The Active Cache feature (implemented by the active-cache jar and friends) provides the following:
- The "Coherence Cluster" configuration item that can be targeted to individual servers, and can be referenced by Coherence*Web, TopLink Grid, and app code via @Resource or JNDI.
- WLS logging integration.
- Class loader integration for Coherence caches and services. This means you can put coherence.jar at the system classloader level, but use application classes from child classloaders for entry processors, cached objects, ...
- Correct serialisation of RMI objects to and from the cache (but you'd never want to do this).
- Integration of Coherence configuration and runtime Mbeans.
As far as I'm aware, the WLS Coherence MBeans are completely separate from the native Coherence MBeans. WLS provides its monitoring information to the console via its CoherenceClusterRuntimeMBean, which gets its limited information through its membership of the Coherence cluster.
You provide an excellent summary of the capabilities and weaknesses of the active cache feature. You are correct that the Coherence MBeans are completely separate from the WLS MBeans. There is no access to the Coherence MBeans from the WLS admin console, so there is no monitoring of Coherence runtime from WebLogic.
If you enable the Coherence management feature through system properties for a WLS server JVM, Coherence will register its MBeans with the platform MBean server, which, by default, is the WLS runtime MBean server. You can then access those Coherence MBeans through JMX clients such as JConsole. Of course, the admin console won't show anything.
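For reference, the "management feature through system properties" mentioned above corresponds to flags along these lines (Coherence 3.x property names), added to the managed server's start arguments:

```shell
# Illustrative flags to enable Coherence management on a WLS-hosted node
# (Coherence 3.x property names). With these set, Coherence registers its
# MBeans with the platform MBean server, reachable from JConsole etc.
JAVA_OPTIONS="${JAVA_OPTIONS} \
  -Dtangosol.coherence.management=all \
  -Dtangosol.coherence.management.remote=true"
```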
Philip: Thanks for the excellent summary of what the Active Cache feature provides. We were actually considering migrating from Hibernate to EclipseLink (previously TopLink). So, if we perform this migration, it appears that the Active Cache will provide similar benefits with regards to configuration.
And thanks also for explaining how there are two distinct sets of MBeans (native Coherence and WLS Coherence); that cleared up some confusion for us.