There are different approaches to an LDAP migration, but if you can afford (in business terms) to stop your old instance, export the data, and then reload it into the new infrastructure (given that you've already verified the procedure), this is definitely the safer and easier thing to do.
Loading the new data into one of your directory server instances and then re-initializing the other instance from the first one is a correct approach.
Thank you for the response, Milo, but you missed the point of my question.
I started with a non-replicating pair of ODSEE directories, A and B. I loaded one master (A), built the multi-master replication agreements, and initialised the second master (B). This works well.
Now I want to re-load the replicating master A with current data and re-initialise B. The difference now is that they have been replicating with each other for a month.
I have two options: either break the multi-master replication agreement before the re-load, or keep it.
If I keep the replication agreement and hope for the best, then I guess these are the steps:
import latest data into A from .ldif
start B ----- what is going to happen now, since A and B have different data? This is what I want to know.
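For what it's worth, the procedure above can be sketched as a command sequence. Instance paths, file names, and the suffix are assumptions for illustration only; the `dsadm` subcommand syntax used here is the standard ODSEE form, but check it against your version:

```shell
# Hypothetical sketch: reload A while keeping the replication agreements.
# Instance paths and the suffix are illustrative assumptions.
dsadm stop /opt/dsee7/instances/dsB     # stop B first
dsadm stop /opt/dsee7/instances/dsA     # then stop A

# Re-import the current data into A. This wipes the suffix database and
# rebuilds it from scratch (new replica generation):
dsadm import /opt/dsee7/instances/dsA /data/example.ldif dc=example,dc=com

dsadm start /opt/dsee7/instances/dsA
dsadm start /opt/dsee7/instances/dsB    # replication is now blocked until B is re-initialized from A
```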
Sorry, I'll try to be more explicit in answering your question; let's take a closer look at your procedure:
1) stop B
2) stop A
At this point in time the Directory Servers for the given suffix (let's choose dc=example,dc=com) are 'still' part of the same topology; when you move forward with:
3) import the latest data into A from your ldif file (example.ldif)
you will automatically 'break the topology' (though this says nothing about the replication agreements, which are kept), because you'll destroy the content of the old database for the suffix dc=example,dc=com and build a new one from scratch (with a new 'replica generation').
At this point, when you:
4) start A
5) start B
you will start getting errors on both masters, complaining about replication. But neither will the old data from B be able to replicate to A, nor will the new data from A be replicated to B automatically over the replication protocol: replication will be kind of 'locked'.
At this point, with the replication agreements already in place, all you've got to do to 'unlock' replication is initialize one master from the other (in our case B from A, because A holds the data you need to replicate). Of course, depending on the size of your database, the service outage you can allow, etc., you can decide which procedure best suits your scenario, as suggested in:
If the size of your database is 'reasonably' small you can choose the on-line initialization (either via DSCC or via 'dsconf init-repl-dest'), otherwise you may opt for one of the other off-line initialization procedures described in the link above.
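As a rough illustration of the on-line route, the re-initialization of B from A is a single `dsconf` call issued against A. Host names and ports below are assumptions; verify the exact `init-repl-dest` argument order against your ODSEE documentation:

```shell
# On-line initialization of B from A over the replication protocol.
# hostA/hostB and port 1389 are illustrative assumptions.
dsconf init-repl-dest -h hostA -p 1389 dc=example,dc=com hostB:1389
```

For a small database like a few thousand entries this typically completes quickly; for large databases the off-line (binary copy or LDIF) procedures avoid tying up the replication protocol.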
Thanks a million Marco.
Getting a reasoned explanation of each step helped a lot in understanding the process. I was concerned that at start-up, B would see the situation no differently than an outage of A.
Your explanation of the import and its NEW Replica Generation construction is the key.
I can see now how replication is locked awaiting reinitialization... this is shown on the dsadm import command line, but I couldn't find any references in the documentation about reloading multi-masters, which are weird beasts.
We are only talking about 15,000 entries... it's handling a document tracking system for a Government Ministry... Civil servants are rarely at work after 4pm and NEVER at weekends, so we have lots of out-time opportunities.
Last time, the import of 15,000 entries took about 5 minutes but 15 minutes to re-index!! The key issue was that the nsuniqueid value had to be preserved on migrated entries (the major customer requirement), and the only way I could find to ensure that was by db2ldif and an import.
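For reference, that export/import round trip can be sketched as below. `dsadm export` is ODSEE's counterpart to the legacy `db2ldif`; all paths are hypothetical, and the argument order should be checked against your installed version:

```shell
# Export the suffix from the old instance to LDIF (db2ldif equivalent),
# then import it into the new master. Paths are examples only.
dsadm export /opt/dsee7/instances/old dc=example,dc=com /data/export.ldif
dsadm import /opt/dsee7/instances/dsA /data/export.ldif dc=example,dc=com
```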