9 Replies Latest reply on Nov 10, 2012 2:25 AM by Marco Milo-Oracle

    Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment


      So we have two 11g master servers sitting in one location; call them A and B. Changes are written to them.

      We have another two servers, C and D, in a second location. Their purpose is basically to let a few users read from them. Now, what is the best setup for this?


      1) C and D have to replicate between each other for redundancy.
      2) A and B DO replicate with each other; (MASTER A) <--> (MASTER B) is always the case and is omitted below only for readability.
      3) We want A to replicate one way to C, and B to D, so that if either A or B goes down, C and D still receive updates.
      4) Also, A and B HAVE to be masters.
      5) If possible, people should not be able to write to C and D.

      Which of these is the best scenario? Or is there a better alternative?

      1) (Master A) --> (HUB C) <--> (HUB D) <-- (Master B)

      2) (Master A) --> (CONSUMER C) <--> (CONSUMER D) <-- (Master B)

      3) (Master A) --> (HUB C) <--> (CONSUMER D) <-- (Master B)

      I don't know whether these are possible or not.

      Edited by: user13488351 on Nov 2, 2012 12:37 AM
        • 1. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
          Marco Milo-Oracle
          Multi-master replication is enabled and supported ONLY between MASTER Directory Server instances. Moreover, since you have the requirement that only A and B be masters, the base topology would be:

          Master A <--MMR--> Master B

          This can then be extended with the two 'slave' (or consumer) instances C and D, which can receive updates from both A and B, so you will have four additional replication agreements:

          Master A ---> Consumer C
          Master A ---> Consumer D
          Master B ---> Consumer C
          Master B ---> Consumer D

          This topology will provide the highest degree of redundancy and availability:

          - During normal operations, Master A and Master B both send updates to Consumer C and Consumer D.
          - If one of the masters is down (e.g., Master A), then the other one (Master B) keeps accepting writes from client applications and keeps sending the updates to both Consumer C and Consumer D.


          P.S.: Note that, by default in the MMR replication design, if a client application tries to perform a write against a consumer, it gets a 'referral' (for further information on referrals, please see http://tools.ietf.org/html/rfc4511#section-4.1.10 ); so if the client is smart enough and is able to reach the masters (i.e., can open a network connection to them), it will retry the write operation there.
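
          As a sketch of that behavior (assuming OpenLDAP client tools and hypothetical hostnames/ports not taken from the thread), a write attempted against a read-only consumer is typically answered with a referral rather than applied locally:

```shell
# Hypothetical hosts/ports; consumer-c is a read-only consumer replica.
# OpenLDAP's ldapmodify does not silently chase write referrals, so the
# referral is surfaced to the caller instead of being followed.
ldapmodify -H ldap://consumer-c.example.com:1389 \
  -D "uid=app,ou=people,dc=example,dc=com" -W <<'EOF'
dn: uid=jdoe,ou=people,dc=example,dc=com
changetype: modify
replace: mail
mail: jdoe@example.com
EOF
# Typical outcome: LDAP result 10 (Referral), carrying a referral URL
# pointing at a master, e.g. ldap://master-a.example.com:1389
```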
          • 2. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
            Aah.. Awesome suggestion.. I understand now I think...

            So I should

            1) Make C and D consumers

            2) Replicate Master A to C and D

            3) Replicate Master B to C and D

            4) DO NOT set up replication between C and D?

            Is this the best design?

            Also, in this case, will anyone be able to write to C and D and have the write referred back to A or B and applied there? We don't want that, but I guess that won't happen if they don't have the Directory Manager password, since ACIs restrict it.


            A <--> B (two-way MMR)

            A --> C (one-way master-to-consumer replication only?)


            A --> D
            B --> C
            B --> D
            • 3. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
              Marco Milo-Oracle
              The point is that you cannot send replication information FROM a consumer to another consumer (you would need at least a hub); so, to initialize the topology, this is what you could do:
              1) Setup Master A and Master B in MultiMaster Replication (Master A <--MMR--> Master B)
              2) Create Consumer C and Consumer D
              3) Create the following replication agreements:
              Master A ---> Consumer C
              Master A ---> Consumer D
              Master B ---> Consumer C
              Master B ---> Consumer D
              4) Initialize the replication agreement from Master A to Consumer C (either online or offline)
              5) Initialize the replication agreement from Master A to Consumer D (either online or offline)

              That is all you need to do. Since you initialized C and D from A, there is no need for a further initialization from B; the replication protocol will take care of keeping the topology in sync.
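
              For reference, a sketch of those steps using the ODSEE `dsconf` command (the hostnames `masterA`/`masterB`/`consumerC`/`consumerD`, port 1389, replica IDs, and the suffix are placeholders; double-check the exact syntax against `dsconf --help` or the ODSEE administration guide):

```shell
SUFFIX="dc=example,dc=com"   # placeholder suffix

# 1) Enable the masters (each with a unique replica ID) and set up MMR
dsconf enable-repl -h masterA -p 1389 -d 1 master "$SUFFIX"
dsconf enable-repl -h masterB -p 1389 -d 2 master "$SUFFIX"
dsconf create-repl-agmt -h masterA -p 1389 "$SUFFIX" masterB:1389
dsconf create-repl-agmt -h masterB -p 1389 "$SUFFIX" masterA:1389

# 2) Enable C and D as read-only consumer replicas
dsconf enable-repl -h consumerC -p 1389 consumer "$SUFFIX"
dsconf enable-repl -h consumerD -p 1389 consumer "$SUFFIX"

# 3) Create the four master -> consumer agreements
dsconf create-repl-agmt -h masterA -p 1389 "$SUFFIX" consumerC:1389
dsconf create-repl-agmt -h masterA -p 1389 "$SUFFIX" consumerD:1389
dsconf create-repl-agmt -h masterB -p 1389 "$SUFFIX" consumerC:1389
dsconf create-repl-agmt -h masterB -p 1389 "$SUFFIX" consumerD:1389

# 4-5) Online initialization of C and D from A only
dsconf init-repl-dest -h masterA -p 1389 "$SUFFIX" consumerC:1389
dsconf init-repl-dest -h masterA -p 1389 "$SUFFIX" consumerD:1389
```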

              • 4. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                Thanks, will do that.

                A couple more questions related to this, purely design-wise; just gathering some information here.

                Hypothetically, can we do HUB C <--> HUB D replication (bidirectional replication between two hubs)?

                If yes

                Does this make sense? (Again, MASTER A and B are in MMR.)

                MASTER A --> HUB C <--> HUB D <-- MASTER B ?

                And say we have four masters in MMR: 1, 2, 3, 4.

                Does this make sense?

                MASTER 1 --> CONSUMER C
                MASTER 2 --> CONSUMER D
                MASTER 3 --> CONSUMER C
                MASTER 4 --> CONSUMER D


                Or should we make C and D hubs instead of consumers and do the same?

                Just trying various scenarios and seeing what fits best..
                • 5. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                  Marco Milo-Oracle
                  In 11g (in fact, since 6.x) the limitation of four masters in an MMR scenario has been removed, so you can now have even an "all masters" topology, which makes hub instances obsolete. I cannot tell which would be the best solution/approach for your deployment, as it strongly depends on your 'boundary conditions' (specific business case, technical requirements, performance constraints, hardware, geographical redundancy, etc.) that may influence the final deployment architecture.

                  Anyway, getting back to your proposed multi-master replication scenario, MASTER A --> HUB C <--> HUB D <-- MASTER B: even though with some tricks and tweaks it could be done, I would personally never go in that direction. It has two isolated single points of failure: what happens if, for instance, HUB C goes down? How is the replication information from Master A sent to Master B, and vice versa? You could easily end up in a 'split brain' situation where the coherence of the data may not be guaranteed, especially if you have slave replicas directly connected to Master A and Master B.
                  In what I believe would be a reliable MMR topology (maybe with MANY master instances geographically dispersed), each master should be directly connected to at least two other masters, to avoid the aforementioned scenario of split brain or locked replication.

                  If you have four masters and two consumers, you could choose a 'fully meshed' replication in which all masters replicate with each other and all masters send updates to the consumers; or, if you have network bandwidth limitations/constraints, you could use a 'partially meshed' scenario where each master sends updates only to two other masters and one consumer, something like:

                  MASTER1 <--MMR--> MASTER2
                  MASTER1 <--MMR--> MASTER3
                  MASTER2 <--MMR--> MASTER4
                  MASTER3 <--MMR--> MASTER4


                  MASTER1 ---> CONSUMER1
                  MASTER2 ---> CONSUMER2
                  MASTER3 ---> CONSUMER2
                  MASTER4 ---> CONSUMER1
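
                  One way to sanity-check the "each master connected to at least two other masters" reasoning is a tiny graph check. This is just an illustrative Python sketch, not ODSEE code: the master numbers match the partial mesh above, and the assignment of masters to the two consumers is an assumption of the sketch.

```python
# Illustrative check only: model the partially meshed topology and verify
# it tolerates the loss of any single master. The consumer mapping
# (masters 1 and 4 feed one consumer, 2 and 3 the other) is an assumption.

mmr_links = {frozenset(p) for p in [(1, 2), (1, 3), (2, 4), (3, 4)]}
feeds = {1: "consumer1", 2: "consumer2", 3: "consumer2", 4: "consumer1"}

def connected(nodes, links):
    """True if `nodes` form a single connected component over `links`."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for link in links:
            if n in link:
                stack.extend((link - {n}) & nodes)
    return seen == nodes

for down in (1, 2, 3, 4):
    alive = {1, 2, 3, 4} - {down}
    alive_links = {l for l in mmr_links if down not in l}
    # The surviving masters must still replicate with each other...
    assert connected(alive, alive_links)
    # ...and every consumer must still have at least one live feeder.
    assert {"consumer1", "consumer2"} <= {feeds[m] for m in alive}
print("partial mesh survives any single master failure")
```

                  A fully meshed topology passes the same check trivially; the point of the partial mesh is that it still passes with fewer agreements.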

                  But this is just an idea; as I said before, most of the choices are influenced by your real-life situation, or by whether you would also like to include DPS (Directory Proxy Server) instances in your deployment architecture.

                  • 6. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                    I'll go out on a limb and say that there are very few really good reasons to deploy hubs any more. Even dedicated consumers have become a thing of the past, though it's easier to conceive of reasons to have them.

                    Rather than taking your old-style topology and asking why to change it, I would start with a fully-meshed, all-master topology and ask why you would go with anything different. There are lots of reasons to use a fully-meshed, all-master topology, including:

                    - Simplified topology
                    - Uniform server configuration
                    - Easier HA / Redundancy scenarios

                    So, why can't you go with a fully-meshed, all-master topology?
                    • 7. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                      John Prince
                      Hi Chris,

                      Nice suggestion, since Masters are no longer a scarce commodity nowadays.

                      How do you technically make a master act as a consumer? Should we put that master in "read-only" mode?

                      • 8. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                        I guess it depends on which "consumer" features you want. If the requirement is that certain clients cannot write to the Directory, this is best accomplished using ACIs, since a write to a consumer will result in a referral, and it's harder to control the behavior of clients in response to referrals than it is to simply guarantee that a certain client cannot update the data.
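
                        As an illustration of the ACI approach (the suffix, ACL name, and client DN are placeholders, not taken from the thread), an ACI of that kind, added on the masters so that it replicates everywhere, looks roughly like this in the directory server's ACI syntax:

```ldif
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="*")(version 3.0; acl "Deny updates from read-only app";
  deny (write,add,delete) userdn="ldap:///uid=reports,ou=apps,dc=example,dc=com";)
```

                        Because the ACI lives in the replicated data, the restriction follows the entries to every replica, so the client is denied no matter which server it reaches.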
                        • 9. Re: Suggest a Master / Hub / Consumer Plan for ODSEE 11G Deployment
                          Marco Milo-Oracle
                          If you can 'afford' the price of slightly increased architectural complexity, another interesting option would be implementing DPS (Directory Proxy Server) with specialized data views.

                          With such an architecture, you can obtain much finer-grained control over what clients can do (depending on the bind identity, source address, operation type, etc.).

                          my $0.02