I need some information about a clustered environment on WebLogic (10.3.2).
We have a two-node cluster with an OIM application running on it. As I understand the concept, the Admin Server should be running on one node only.
Our configuration (Servers 1 and 2 are physical servers):
Server 1: Admin Server and Managed Server
Server 2: Managed Server
When we check the status of the deployments from the Admin Server on Server 1, they are Active.
Navigation: open the console, click Deployments; on the right-hand side under Domain Deployments, the state shows Active.
Clicking Managed Server 1, Deployments tab, also shows Active, and the same for Managed Server 2.
Just for testing, we brought up an Admin Server on the other node (Server 2).
Now, if we check the status of the deployments in the Server 2 Admin Console, they show New/Installed.
Is this the correct behavior?
If we check from the first Admin Server, everything is fine (deployments are in the Active state).
But after bringing up the second Admin Server (the first Admin Server is still up, Managed Servers 1 and 2 are up, we haven't brought anything down), the deployment states are New/Installed in the console of the second Admin Server.
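As an aside, the same per-server deployment state you are checking in the console can also be queried from the command line with WLST. This is only a sketch and is not runnable outside a WebLogic installation; the admin URL, credentials, and application name (`oim`) are placeholders for your environment:

```python
# WLST sketch (run with $WL_HOME/common/bin/wlst.sh).
# Admin URL, credentials, and application name are placeholders.
connect('weblogic', 'welcome1', 't3://server1:7001')

# Switch to the domain runtime tree, which aggregates runtime
# state across all running servers in the domain.
domainRuntime()

appState = cmo.getAppRuntimeStateRuntime()
for server in ['ManagedServer1', 'ManagedServer2']:
    # Prints e.g. STATE_ACTIVE or STATE_NEW for the app on that server
    print server, appState.getCurrentState('oim', server)

disconnect()
```

Running this against each of your two Admin Servers should show the same mismatch you see in the two consoles, since each Admin Server reports state from its own view of the domain.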
You can only have one Admin Server running in a domain. See the text below, in particular: "It is up to you to make sure that you don't have more than one admin server running at a time."
"The admin server is a WebLogic Server instance that runs some special
administrative applications like the WebLogic Console. Through these applications, the admin
server maintains a repository of configuration information for the domain, acts as a centralized location
for application deployment, and provides a browser-based administrative console application
that the administrator uses to configure, manage, and monitor all aspects of the domain."
"The last thing we want to discuss is how to handle admin server availability because the admin server
is not currently clusterable. This means that if the admin server goes down, you cannot administer your
WebLogic Server domain until you bring it back up. In most cases, you may not be too concerned if the
admin server goes down because all you need to do is restart it. If you use the node manager to start
the admin server, the node manager can automatically restart a failed admin server just like it can any
other server. What happens if the machine where the admin server runs fails in such a way that you
cannot restart the admin server? The answer is simple if you prepare for this unlikely event.
Proper operation of the admin server relies on several configuration files and any application files it
controls. Typically, the best thing to do is to store the admin server’s directory tree on a shared disk. As
long as the configuration and application files are accessible, you can restart the admin server on another
machine. It is up to you to make sure that you don’t have more than one admin server running at a time.
If the new machine can assume the original admin server’s Listen Address (or if it was not set), you
can simply start the admin server on the new machine without any configuration changes. Otherwise,
you will need to change the admin server’s Listen Address. Since the managed servers ping the admin
server URL every 10 seconds until it comes back up, you need to devise a way for the admin server URL
to allow the managed server to find the restarted admin server on the new IP address. The easiest way
to achieve that is using a DNS name that maps to both IP addresses, or better yet that is dynamically
updated to point to the correct location of the admin server. If this is a graceful shutdown and migration,
use the WebLogic Console to change the Listen Address just before shutting down the admin server. If
not, you will need to edit the config.xml file by hand to replace the old Listen Address with the new
one. Typically, we recommend planning ahead so that everything you need is already in place to make
admin server failover as painless as possible."
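For reference, the Listen Address that the quoted passage says to edit by hand lives in the admin server's <server> stanza of config.xml. A minimal illustration (the server name, hostname, and port below are examples, not values from your domain):

```xml
<!-- DOMAIN_HOME/config/config.xml (fragment; names and addresses are examples) -->
<server>
  <name>AdminServer</name>
  <!-- Change this to the new machine's address (or remove the element so the
       server listens on all local addresses) before restarting the admin
       server on the other machine. -->
  <listen-address>server1.example.com</listen-address>
  <listen-port>7001</listen-port>
</server>
```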
Thanks. So it is normal behavior that if we bring up a second Admin Server, we see the deployments in the New/Installed state.
I agree that there should be only one Admin Server in a clustered environment; I just want to understand this behavior.
One thing I should mention: since this is a test instance, we brought the Admin Server up on the other node (Server 2). Originally:
Server 1: Admin Server and Managed Server
Server 2: Managed Server
Then we brought down all the services (Admin and Managed Servers), and brought the Admin Server up on Server 2 first and then on Server 1, giving:
Server 1: Admin Server and Managed Server
Server 2: Admin Server and Managed Server
Now, if we check the deployment status from both Admin Server consoles (Server 1 and Server 2), it is in the Active state.
Initially, as I mentioned, when we checked the deployment status from Server 2 it was in the New/Installed state.