The easy way would be to check what is actually configured in the cluster. As Grid owner:
Display all resources
crsctl stat res -t
Then check the VIP, Listener and a sample DB, e.g.:
crsctl stat res ora.rac-prod1.vip -p
crsctl stat res ora.LISTENER.lsnr -p
crsctl stat res ora.prod.db -p
That will give you a detailed output of the configured properties, including the start and stop dependencies.
Generally a database does NOT depend on the listener. It will start without it, but of course the users won't be able to access it. You should find a weak dependency on the listener and nothing for the VIP.
The node listener depends on the VIP being up and running; you should find a hard dependency on the VIP here.
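The dependency chain can be checked quickly by filtering the -p output. The attribute values below are assumed examples of what an 11gR2 cluster might print; the exact resource, type and disk-group names will differ on your system:

```shell
# Assumed excerpts of `crsctl stat res <resource> -p` output on 11gR2;
# substitute real output from your own cluster.
db_deps='START_DEPENDENCIES=hard(ora.DATA.dg) weak(type:ora.listener.type) pullup(ora.DATA.dg)'
lsnr_deps='START_DEPENDENCIES=hard(type:ora.cluster_vip_net1.type) pullup(type:ora.cluster_vip_net1.type)'

# Database -> listener: weak (the DB starts even if the listener is down)
printf '%s\n' "$db_deps"   | grep -o 'weak([^)]*)'
# Listener -> VIP: hard (the listener will not start without its VIP)
printf '%s\n' "$lsnr_deps" | grep -o 'hard([^)]*)'
```

Running the same greps against your real `crsctl stat res ora.prod.db -p` output shows at a glance which dependencies are weak and which are hard.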
Gee, tough topic! Page 3-2 of the 11gR2 RAC Admin & Deployment Guide states that the dependency is between the Database and the VIP (as opposed to the Listener).
>> When the Database Instance on a node starts, it tries to start the VIP. With a strong dependency, if the VIP is not running the Database Instance will not start; with a weak dependency the Database Instance will start.
However, from the start dependencies shown by the crsctl commands you listed, this does not appear to be strictly true, since the Database resource depends on the Listener resource (and the ASM Disk Groups), not on the VIP itself. The Listener resource in turn depends on the VIP resource. So the Database is only indirectly dependent on the VIP.
In my case I created my RAC Database as Administrator Managed, so without converting it to Policy Managed I have no means to examine how the Server Pools interact.
Normally I would expect each node to have 3 associated resources running, i.e. the VIP, the Listener and the Database Instance. However, since each of these can be assigned to a Server Pool, in theory these 3 associated resources could now be running on 3 completely different nodes!
Q1. Do you know if this situation is even possible? Not sure how that would work, since I know:
- Multiple VIPs can indeed run on the same node (we know this from VIP failover examples)
- However, a node can only run a single Instance of a RAC Database
- Each node has a listener anyway
I guess I am trying to get my head around the fact that Server Pools now seem to allow Cluster Resources to be located on any node of Clusterware's choosing, so all the resources that traditionally ran on one node may no longer be on just 1 node. It was always true that you knew a node would be holding a VIP, Listener and Instance. Is this still true under the use of Server Pools, or could these 3 be running on different nodes from each other?
Q2. So if I had what would normally be a 3 node cluster, does that mean that with Server Pools I could potentially be looking at the 3 VIPs, 3 Listeners and 3 Instances all running on different servers, i.e. my 3 nodes effectively, at worst, spread over 9?
Q3. Depending on how the Server Pools are set up for each resource, is it also possible that some of the nodes could be holding similar resources (even though it is not a failover situation), e.g.:
Node 1 holding - VIP1, VIP2, Instance1
Node 2 holding - VIP3, Listener1, Listener3, Instance3
Node 3 holding - Listener2, Instance2
Q4. One other thing I did notice is that the Listener does not appear to be assigned to a Server Pool, i.e. the following shows no Server Pool attribute/assignment:
crsctl stat res ora.LISTENER.lsnr -p
I presume that, being a Cluster Resource, the Listener must be assigned to a Server Pool? So why is it not showing up?
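A likely answer to Q4, sketched below under the assumption that the node listener is registered as a local resource (TYPE ora.listener.type): a local resource runs one copy on every cluster node and is not placed via Server Pools, so its profile carries no SERVER_POOLS attribute at all. The excerpt here is an assumed sample, not output copied from a real cluster:

```shell
# Assumed excerpt of `crsctl stat res ora.LISTENER.lsnr -p`;
# note the resource TYPE and the absence of any SERVER_POOLS line.
lsnr_profile='NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
START_DEPENDENCIES=hard(type:ora.cluster_vip_net1.type)'

# Show the resource type
printf '%s\n' "$lsnr_profile" | grep '^TYPE='
# Count SERVER_POOLS lines (0 expected for a local resource)
printf '%s\n' "$lsnr_profile" | grep -c '^SERVER_POOLS=' || true
```

Only cluster resources (such as the database resource) are candidates for Server Pool placement, which would explain why the attribute never appears for the listener.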
Here is a pretty good example on server pools:
It is meant as a feature for really large RAC clusters that you divide into pools. For each pool you name the number of servers you want to have and the candidate servers. The clusterware then chooses which servers your instances run on.
Nevertheless, even in that case you should have 1 VIP and 1 node listener per RAC node. Having 2 VIPs on one node happens only in failover cases, and a node without a VIP can't support a listener running on that VIP. So the case in your Q3 won't happen, even under a server pool.
You can play around with relocating resources in RAC on a test system with:
srvctl relocate -h
The default server pools in 11g:
srvctl config serverpool
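On a fresh 11.2 installation you would typically see the two built-in pools, Free and Generic (admin-managed databases live in Generic). The text below is an assumed sample of what `srvctl config serverpool` might print, not output from a real system, and the grep shows how to pull out just the pool names:

```shell
# Assumed sample output of `srvctl config serverpool` on a default 11.2 cluster
sample='Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:'

# List just the pool names
printf '%s\n' "$sample" | grep '^Server pool name:'
```

Any servers not claimed by a user-defined or Generic pool sit in Free, which is where the clusterware draws from when a pool's minimum is not yet met.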