5 Replies Latest reply: Jun 19, 2009 7:14 AM by Bjoern Rost

    Always ODD number of Voting disks.

      I would like to know the answer to a basic question: why do we always configure an ODD number of voting disks in a RAC environment?

      Also, one more question: why has Oracle not provided a shared Undo tablespace between nodes? Why do we need to configure separate Undo tablespaces and Redo log files for each node?

      Please clear my doubts if possible, as these basic questions are causing me a lot of inconvenience.

        • 1. Re: Always ODD number of Voting disks.
          Bjoern Rost
          The odd number of voting disks is important because, in order to determine survival, the clusterware has to decide which instances should be evicted. If you had an even number of voting disks (say 2), it wouldn't help much if, after/during a split, each instance could still access only one voting disk. In that case neither instance could claim more disks than the other, and there would be no way to determine which side of the split brain is the 'right' one.
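          The majority rule described above can be sketched in a few lines. This is a hypothetical illustration, not Oracle code; the function name and node model are my own invention. The point is that survival requires a strict majority of the configured voting disks, which a tie on an even count can never provide.

```python
# Hypothetical sketch (not Oracle code) of the voting-disk majority rule:
# a node (or sub-cluster) stays up only if it can access a strict
# majority of the configured voting disks.

def can_survive(accessible_disks: int, total_disks: int) -> bool:
    """True if this side of a split sees more than half the voting disks."""
    return accessible_disks > total_disks // 2

# With 2 voting disks, a split where each side sees one disk is a tie:
assert not can_survive(1, 2)   # neither side has a majority -> ambiguous

# With 3 voting disks, one side must see at least 2 and wins the vote:
assert can_survive(2, 3)
assert not can_survive(1, 3)
```

          With three disks a clean tie is impossible: one side necessarily sees at least two disks, so the clusterware always has an unambiguous winner.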

          As for redo: since there is a LGWR process running on each instance, it is just much easier (and performs better) to give each instance (and its LGWR) its own set of redo files. No worries about who can write to these files at any given time. The only other solutions would have been to either
          - transfer the contents of the redo log buffer between instances and have one LGWR be the only one to physically write the files (sounds like a lot of work to me), or
          - synchronize access to one shared set of redo log files (a lot of locking and overhead).

          I can't think of a hard reason why each instance needs its own undo tablespace, but I am sure it makes a lot of sense. Maybe contention when looking for free space? Maybe the fact that an active undo segment is always tied to a specific session on that instance?

          • 2. Re: Always ODD number of Voting disks.
            Surachart Opun
            About Voting Disk
            The way voting disk multiplexing is implemented forces you to have at least three voting disks. To avoid a single point of failure, your multiplexed voting disks should be located on physically independent storage devices with a predictable load well below saturation.
            You can have up to 32 voting disks, but use the following formula to determine the number of voting disks you should use: v = f*2+1, where v is the number of voting disks and f is the number of disk failures you want to survive.
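            The formula v = f*2+1 can be checked directly: it gives the smallest odd disk count that still leaves a strict majority accessible after f failures. A minimal sketch (the function name is mine, chosen for illustration):

```python
def voting_disks_needed(f: int) -> int:
    """v = 2f + 1: the smallest odd number of voting disks that still
    leaves a strict majority accessible after f disk failures."""
    return 2 * f + 1

assert voting_disks_needed(1) == 3   # survive 1 failed voting disk
assert voting_disks_needed(2) == 5   # survive 2 failed voting disks

# Sanity check: after f failures, the surviving disks still form a
# strict majority of the configured total.
for f in range(1, 16):               # 2*15+1 = 31, within the 32-disk cap
    v = voting_disks_needed(f)
    assert v - f > v // 2
```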
            • 3. Re: Always ODD number of Voting disks.
              Hi Yasser,

              To help you with the first question, look into Split Brain Thinking.

              For the second and third questions, see Re: Why DML not failed over in TAF??


              Rodrigo Mufalani
              • 4. Re: Always ODD number of Voting disks.
                Thanks for replying to my basic questions.

                But I still didn't get the reason for the odd-number-of-voting-disks requirement.

                Just for understanding purposes, imagine the assumptions below:

                I have 3 voting disks as mentioned below for 2 node RAC setup:
                First voting disk: /voting/vote_disk1
                Second voting disk: /voting/vote_disk2
                Third voting disk: /voting/vote_disk3

                Now I assume a heartbeat occurs from both nodes by writing a vote to all three of these voting disks.

                For some reason the shared storage fails (meaning the shared /voting mount point becomes inaccessible to both nodes, so neither node can perform the disk heartbeat).

                Now how does the Clusterware resolve this situation by using the heartbeat information in the voting disks?

                As mentioned by you in earlier post
                If you had an even number of voting disks (2) then it wouldn't help much if after/during a split each instance can still access only one voting disk
                How come, in the above case, each instance can access only one or two disks, when the whole shared storage is not accessible to either node?

                By the way, sorry for asking this basic question. Please correct my understanding if I am wrong in any way.

                • 5. Re: Always ODD number of Voting disks.
                  Bjoern Rost
                  First of all, the regular heartbeat is done via the interconnect. And the whole point of having more than one voting disk is to put them on different disks/volumes/SAN arrays so your cluster can survive the loss of one disk/volume/array. If all your RAC data resides on a single disk/LUN/array and it fails, your cluster won't be of much use anyway, and there is no point in setting up multiple voting disks (except when you really deal with single disks without RAID).

                  I'd only do this (set up more than one voting disk) when you have more than one array and mirror all your data files on both of them (with ASM, for example) so your database will survive the loss of an array. But with 2 arrays, you would only be able to set up two voting disks. That would not work, because there could be a strange case where your SAN has some kind of problem such that each node can only access one of the arrays. In that case your clusterware wouldn't know which side of the cluster is the 'healthy' one, and that is why Oracle requires an odd number of voting disks.
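                  The two-array scenario above can be simulated. This is a hypothetical sketch (the function, node names, and disk names are invented for illustration): with one voting disk per array and the SAN split so each node sees only its own array, neither node reaches a majority, whereas a third disk on independent storage breaks the tie.

```python
# Hypothetical sketch of the two-array split described above (not Oracle
# code): which nodes can claim a strict majority of the voting disks?

def surviving_nodes(visibility: dict, total_disks: int) -> list:
    """Return nodes that can access a strict majority of voting disks.
    `visibility` maps node name -> set of voting disks it can still reach."""
    return sorted(n for n, disks in visibility.items()
                  if len(disks) > total_disks // 2)

# Two arrays, one voting disk each; a SAN problem splits access per node:
split_two = {"node1": {"diskA"}, "node2": {"diskB"}}
assert surviving_nodes(split_two, 2) == []            # tie: no decision

# Add a third voting disk on independent storage that node1 still reaches:
split_three = {"node1": {"diskA", "diskC"}, "node2": {"diskB"}}
assert surviving_nodes(split_three, 3) == ["node1"]   # node1 sees 2 of 3
```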