3 Replies Latest reply on May 14, 2014 10:58 AM by Tom321

    Weak Start Dependency between RAC instance and VIP when using Server Pools


      In RAC, a Database Instance on a node requires a VIP to also be running on that node, otherwise the Instance will not start. This is described in the 11gR2 RAC Admin Guide as a strong start dependency.


      However, I now believe that with the Server Pools and Policy Managed Databases introduced in 11gR2, there is now a weak start dependency, i.e. the RAC Instance on a node can start even if the VIP is not started on that same node.


      Q1. Is this because the VIP could be running on another node in the Server Pool for the VIP?


      Q2. So the VIP could be assigned to a Server Pool that has different nodes than the Server Pool assigned to the RAC Database - is that correct?


      Q3. I presume it is not saying that the RAC Instance does not need a VIP before starting, but rather that the required VIP for that Instance may not necessarily be running on the same node as the Instance?


      Trying to get my head around that! In the past the only reason a VIP wouldn't be running on its intended node would be that the node had failed and the VIP had been inherited by another node. In that case there would be 2 VIPs on the same node, serviced by the same Instance.


      Q4. Now what we appear to be saying is that a VIP and its associated Instance could be on separate nodes - is that correct?


      any advice greatly appreciated,

      thanks Jim

        • 1. Re: Weak Start Dependency between RAC instance and VIP when using Server Pools

          Hi Jim,


          The easy way would be to check what is actually configured in the cluster. As the Grid owner:


          Display all resources:

          crsctl stat res -t


          Then check the VIP, Listener and a sample DB, e.g.:


          crsctl stat res ora.rac-prod1.vip -p

          crsctl stat res ora.LISTENER.lsnr -p

          crsctl stat res ora.prod.db -p


          That will give you detailed output of the configured properties, including the Start and Stop Dependencies.


          Generally a database does NOT depend on the listener. It will start without it, but of course users won't be able to access it. You should find a weak dependency on the listener and nothing for the VIP.

          The node listener, however, depends on the VIP being up and running; you should find a hard dependency on the VIP here.
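
          For example, to pull out just the dependency attributes from those resource profiles (the resource names are the sample ones from above - substitute your own):

          crsctl stat res ora.prod.db -p | grep _DEPENDENCIES

          crsctl stat res ora.LISTENER.lsnr -p | grep _DEPENDENCIES

          crsctl stat res ora.rac-prod1.vip -p | grep _DEPENDENCIES

          The START_DEPENDENCIES and STOP_DEPENDENCIES lines returned should show the weak/hard relationships described above.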




          • 2. Re: Weak Start Dependency between RAC instance and VIP when using Server Pools

            Thanks Tom,


            Gee, tough topic! Page 3-2 of the 11gR2 RAC Admin & Deployment Guide states that the dependency is between the Database and the VIP (as opposed to the Listener).

            >> When the Database Instance on a node starts, it tries to start the VIP. With a strong dependency, if the VIP is not running the Database Instance will not start; with a weak dependency the Database Instance will start anyway.


            However, from the Start Dependencies shown via the crsctl commands you listed, this is not strictly true, since the Database resource depends on the Listener resource (and the ASM Disk Groups), not on the VIP itself. The Listener resource in turn depends on the VIP resource. So the Database is only indirectly dependent on the VIP.


            In my case I created my RAC Database as Administrator Managed, so without converting it to Policy Managed I have no means to examine how the Server Pools interact.
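
            (If I understand it correctly, the conversion itself would just be a srvctl modify - the pool name below is made up, and the pool would have to exist first:

            srvctl modify database -d prod -g mypool

            srvctl config database -d prod

            but I haven't tried this on my system.)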


            Normally I would expect each node to have 3 associated resources running, i.e. the VIP, the Listener and the Database Instance. However, with each of these able to be assigned to a Server Pool, in theory these 3 associated resources could now be running on 3 completely different nodes!


            Q1. Do you know if this situation would even be possible? Not sure how that would work, since I know -

            - Multiple VIPs can indeed run on the same node (we know this from VIP failover examples)

            - However, a node can only run a single Instance of a given RAC Database

            - Each node has a listener anyway


            I guess I am trying to get my head round the fact that Server Pooling now seems to allow Cluster Resources to be located on any node of Clusterware's choosing, so what was traditionally a full set of resources running on one node may no longer be on just one node. In the past you always knew a node would be holding a VIP, a Listener and an Instance - is this still true under Server Pools, or could these 3 be running on different nodes from each other?


            Q2. So if I had what would normally be a 3 node cluster, does that mean that with Server Pools I could potentially be looking at the 3 VIPs, 3 Listeners and 3 Instances all running on different servers, i.e. my 3 nodes effectively spread, at worst, over 9?


            Q3. Is it also possible, depending on how the Server Pools are set up for each Resource, that you could be looking at some nodes holding similar resources (even though it is not a failover situation), e.g.:


            Node 1 holding - VIP1, VIP2, Instance1

            Node 2 holding - VIP3, Listener1, Listener3, Instance3

            Node 3 holding - Listener2, Instance2


            Q4. One other thing I did notice is that the Listener does not appear to be assigned to a Server Pool, i.e. the following shows no Server Pool attribute / assignment:

            crsctl stat res ora.LISTENER.lsnr -p


            I presume, being a Cluster Resource, that the Listener must be assigned to a Server Pool? So why is it not showing up?
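
            For what it's worth, grepping the type attribute:

            crsctl stat res ora.LISTENER.lsnr -p | grep ^TYPE

            seems to show TYPE=ora.listener.type, with no SERVER_POOLS line at all - which makes me wonder whether the node listener is actually a local resource (one per node) rather than a pool-placed cluster resource, but I may be wrong on that.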




            • 3. Re: Weak Start Dependency between RAC instance and VIP when using Server Pools

              Hi Jim,


              Here is a pretty good article on server pools:

              An Introduction to 11.2 RAC Server Pools – All Things Oracle


              It is meant as a feature for really large RAC clusters that you divide into pools. For each pool you specify the number of servers you want and the candidate servers. The clusterware then chooses which servers your instances run on.


              Nevertheless, even in that case you should have 1 VIP and 1 node listener per RAC node. Having 2 VIPs on one node is only for failover cases, and a node without a VIP can't support a listener running on that VIP. So the case in your Q3 won't happen even under a server pool.


              You can play around with relocating resources in RAC on a test system with:

              srvctl relocate -h


              And to see the default pools in 11g:

              srvctl config serverpool
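
              As a sketch of defining your own pool (the pool name, node names and sizes here are just examples - adjust for your cluster):

              srvctl add srvpool -g mypool -l 1 -u 2 -i 10 -n node1,node2

              srvctl config srvpool -g mypool

              The -l / -u flags set the minimum and maximum number of servers the pool should hold, -i its importance relative to other pools, and -n the candidate server list.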