I have the following replication scheme:
M1, M2 and M3 are multi-master replicated datastores.
These three datastores will replicate their data to a node that is going to act as a Propagator.
The Propagator will then replicate data to a set of subscribers.
Can we configure the Propagator to be redundant?
That is, can we configure an additional propagator that acts as a redundant standby and replicates to the same set of subscribers?
While it is possible to configure this topology (two propagators) using legacy replication (I just verified it), it will *not* operate correctly even under normal conditions, and recovery after a failure will be quite a nightmare (even with a single propagator). I really would recommend that you not even start down this road. Anything involving multi-master is always very complex and problematic, and best avoided.
Our application uses TimesTen, and the database topology can be elaborated as below.
Each server has a 1+1+1 configuration.
Presently we are using table-level replication. The repscheme is the same on all three nodes, and replication is configured between them: each table has three replication elements, and each element has one server node as master and the other two as subscribers, so an update on any one node is available on the other two. At any given point in time, one node is active and the others are standby. Active/standby is controlled by the application, which ensures that updates happen on only one node, so there is almost no chance of update conflicts between the three nodes. If any of the three nodes goes into a replication-failed state for any reason, it duplicates from the active node (the active node is recorded in an application table, so its IP is used for the duplicate).
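For illustration, a scheme along these lines (all datastore, host and table names below are made up) looks roughly like:

```sql
-- Hypothetical names: ds1/ds2/ds3 on host1/host2/host3, one table app.t1.
-- One element per master; each master's updates fan out to the other two nodes.
CREATE REPLICATION rep.scheme
ELEMENT e1 TABLE app.t1
  MASTER ds1 ON "host1"
  SUBSCRIBER ds2 ON "host2", ds3 ON "host3"
ELEMENT e2 TABLE app.t1
  MASTER ds2 ON "host2"
  SUBSCRIBER ds1 ON "host1", ds3 ON "host3"
ELEMENT e3 TABLE app.t1
  MASTER ds3 ON "host3"
  SUBSCRIBER ds1 ON "host1", ds2 ON "host2";
```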
These servers (1+1+1) also have to replicate the data to N other servers, which are subscribers for those tables. Those N servers are read-only; no updates are possible on them, as they are defined only as subscribers, not masters. If any of the N subscribers fails, it syncs from the active master node.
As N increases, it becomes difficult for the master to propagate changes both to its two peer nodes and to all N read-only subscribers, and TimesTen also has a limit on the number of subscribers for a given master element. So we are planning to place a propagator between the masters and the N read-only subscribers: each master will replicate to its two peers and to the propagator, and the propagator will in turn replicate to the read-only subscribers.
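A sketch of the propagator setup we have in mind, again with made-up names:

```sql
-- Hypothetical names: the master replicates app.t1 to the propagator datastore,
-- and the propagator fans the changes out to the read-only subscribers.
CREATE REPLICATION rep.propscheme
ELEMENT m TABLE app.t1
  MASTER ds1 ON "host1"
  SUBSCRIBER propds ON "prophost"
ELEMENT p TABLE app.t1
  PROPAGATOR propds ON "prophost"
  SUBSCRIBER sub1 ON "subhost1", sub2 ON "subhost2";
```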
We tried a propagator between master and subscribers: any update on the master is propagated to the propagator, which in turn propagates it to all subscribers. We want redundancy for this propagator, so that if one propagator fails the other propagates the changes to all the subscribers. When we tried this with replication-element configuration, all the changes made on the master went to both propagators, and both propagators replicated to all the subscribers, which is not what we want. We also considered an active/standby pair for the propagator, but that is not possible, as TimesTen does not allow replication-element configuration and active/standby configuration in the same datastore.
We need a 1+1 or 1+1+1 propagator setup that receives updates from any of the master (1+1+1) nodes, where at any given point in time only one propagator sends the updates to the read-only subscribers, and where data consistency is maintained between the masters and all N read-only subscribers.
How can we achieve this using TimesTen replication? Please advise.
Thanks for elaborating! The problem, as you have noted, is that if you configure multiple propagators with the same subscriber list, then all the propagators replicate to all the subscribers, which causes various problems. So it is not really possible to use multiple propagators for the same subscriber(s). The only kind of redundancy possible is to have multiple propagators, each with a different set of subscribers. Then, if a propagator fails, replication is interrupted only for a subset of the subscribers.
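For example (hypothetical names throughout), partitioning the subscribers across two propagators would look something like:

```sql
-- Hypothetical names: each propagator owns a disjoint subset of subscribers,
-- so the failure of one propagator interrupts replication only to its own subset.
CREATE REPLICATION rep.twoprops
ELEMENT m TABLE app.t1
  MASTER ds1 ON "host1"
  SUBSCRIBER prop1 ON "prophost1", prop2 ON "prophost2"
ELEMENT p1 TABLE app.t1
  PROPAGATOR prop1 ON "prophost1"
  SUBSCRIBER sub1 ON "subhost1", sub2 ON "subhost2"
ELEMENT p2 TABLE app.t1
  PROPAGATOR prop2 ON "prophost2"
  SUBSCRIBER sub3 ON "subhost3", sub4 ON "subhost4";
```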
Having a subset of subscribers interrupted by a single propagator failure will not be acceptable in our application, because the services of all servers in that subset would be affected.
Is there any way to configure the propagators as one active and two standby nodes, replication-wise, so that the standby nodes receive updates from the master but do not propagate changes to the subscribers configured on them?
No, that is not possible. What exactly are your overall requirements here for both availability and data consistency across all these nodes? Given your current setup, and the way you are looking to extend it, it is clear that you do not need complete point-in-time consistency across all nodes (since you do not have that now and would not have it with the proposed extended setup). If the propagator failed, the subscribers would still be up and queryable, but they would just slowly (or quickly) get more and more out of sync with the three masters. I don't believe there is any feasible topology that satisfies what you are asking for.
Can you please explain what you mean in your statement "since you do not have that now and would not have it with the proposed extended setup".
We are expecting all subscribers to be in sync with the active master (active from the application's point of view) at any point in time.
You are using asynchronous replication. Hence, while there is a workload on the system, at any given point in time all the databases will be slightly different from each other. If you suspend the workload, and assuming all the databases are up and communicating, then they will quickly reach consistency with each other. Adding propagators and/or remote subscribers just amplifies this effect throughout the system. The propagators and the remote subscribers will of course all be 'behind' the active master(s), and so will not be consistent with them, or with each other, at any moment in time while there is a workload on the system. Again, when the workload is suspended, consistency will be reached after a short time, provided everything is up and running.
These are the normal operational characteristics of any form of asynchronous replication (it's not specific to TimesTen).