Hi, I need some help related to JMS. I have a queue in clusterA, but I need to share it with clusterB. The problem is that when I change the subdeployment targets to add clusterB, the console says the queue must be in only one cluster. How can I resolve this so the other cluster can use the queue too?
Should I use SAF or a messaging bridge? Which would be best in this case?
Can you be more specific about what you mean by "share" a destination? If you mean a destination that can be accessed by both clusters, you can configure the destination in one cluster and remotely send/receive messages from the other cluster, i.e., send messages from an application in one cluster and consume those messages with an application in the other cluster.
Thanks for your time. Both clusters must access the same queue. In my test setup, clusterA has 4 managed servers with migratable targets and a sample queue called msgStatusQ. I have another cluster with 4 more managed servers, running a different application that needs to consume the messages on msgStatusQ. On each of the 8 managed servers I have configured a JMS server and a migratable target.
The problem appears when I try to target the subdeployment used by the JMS servers of clusterA to the JMS servers of clusterB as well, so that the queue can be used from the managed servers of that cluster. What do you recommend to make this possible? Both clusters are in the same domain.
There is a difference between "applications running in two clusters access the same destination" and "a destination is hosted/running on both clusters".
As far as I can tell, your requirement is the former. In that case, your destination only needs to physically exist in one cluster, say cluster1, but it can be accessed by applications in both clusters. To achieve this, include only the JMS servers of cluster1 in the subdeployment of the destination; as a result, the destination exists only in cluster1. Applications in cluster2 can then access the destination simply by using the URLs of the cluster1 servers when they create their initial context.
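A minimal sketch of what the cluster2 client side can look like. The host names, ports, and JNDI names below are assumptions; substitute the listen addresses of your cluster1 managed servers and the names configured in your JMS module. Note this only compiles and runs with the WebLogic client library (e.g. wlthint3client.jar) on the classpath.

```java
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteQueueLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // A comma-separated t3 URL lists several cluster1 servers, so the
        // initial context can be created even if one of them is down.
        // Host names and ports here are placeholders.
        env.put(Context.PROVIDER_URL,
                "t3://cluster1-ms1:7001,cluster1-ms2:7001,cluster1-ms3:7001,cluster1-ms4:7001");

        Context ctx = new InitialContext(env);
        // JNDI names are assumptions; use the ones from your JMS module.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/msgStatusQ");

        Connection con = cf.createConnection();
        try {
            con.start();
            // create a Session and MessageConsumer/MessageProducer here as usual
        } finally {
            con.close();
        }
    }
}
```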
If you DO want the destination hosted on both clusters, you will have to create two different destinations that are targeted separately. They will be treated as independent destinations; for example, there is no load balancing across the cluster boundary. Depending on which cluster your application establishes its initial context with, it will access a different destination.
Thanks again! I am implementing it as you said, using the queue that physically lives in clusterA. My remaining doubt is this: clusterA hosts the members of the UDQ on all 4 of its managed servers. If the application in clusterB creates its initial context against just one managed server of clusterA, will it fail to consume from the other members? Do I need to loop over all the managed servers, or does the UDQ automatically load-balance consumption? Or is there a t3 URL that includes all the servers of the JMS cluster?
I would suggest using a WebLogic MDB (Message-Driven Bean) to consume from a distributed queue. The WebLogic container has built-in logic to ensure that every member of a remote UDQ is serviced by an MDB instance, even as members come and go due to migration or server failures.
For details of WebLogic MDB, please refer to the following documentation.
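As a rough sketch, an MDB deployed on cluster2 could look like the following. The class and handling logic are illustrative; in WebLogic, the remote provider URL, connection factory, and destination JNDI name for a foreign/remote queue are typically wired up in weblogic-ejb-jar.xml (elements such as destination-jndi-name, provider-url, and connection-factory-jndi-name), not in the Java code itself.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Illustrative MDB that consumes from the remote UDQ; the container,
// not this code, connects it to every UDQ member on cluster1.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class MsgStatusListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            // Rethrowing lets the container roll back and redeliver.
            throw new RuntimeException(e);
        }
    }
}
```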
I went through this thread, and my understanding is that you have a UDQ on cluster-1 (with 4 managed servers), with 4 JMS servers mapped to the 4 managed server instances. The producer application is deployed on cluster-1 and sends messages to the UDQ, which WebLogic distributes across the 4 JMS servers using the load-balancing algorithm of the underlying JMS system. You also have cluster-2 in the same domain, with 4 managed servers hosting the consumer application that reads the messages produced by the producer. Your consumer application just needs to create a JMS connection to the UDQ using the connection factory configuration; WebLogic itself will map your 4 consumers (since the app is deployed on a cluster with 4 managed servers) to the 4 JMS servers of cluster-1. I don't see any need to point your UDQ at cluster-2. As for the type of consumer application, as others have already suggested, an MDB is best, since it listens for all incoming messages on the queue.
As a second option, if you want to go with a JMS messaging bridge, create one more UDQ targeted at cluster-2 (you will also need another JMS module targeted at cluster-2). Then create a messaging bridge with the UDQ on cluster-1 as the source destination and the UDQ on cluster-2 as the target destination. This way, the application deployed on cluster-1 puts messages on the source destination of the bridge, and your MDB application deployed on cluster-2 consumes those messages from the target destination.