I like using a tool such as SQL Developer Data Modeler to define the structure and then using it to deploy changes from Dev to Test/QA to Production. It can also be used to reverse engineer from prod back to dev or test. For moving data, I have created an EZ button for my developers that uses database links with truncate-and-insert or merge statements for the data sync. The EZ buttons are broken down by application. For columns that do not exist in prod, I use default values in the EZ button for Dev or Test.
Some of your terminology makes me wonder if you are using Oracle.
Would using an insert or merge preserve the primary key index values?
Are you referring to the index that enforces the primary key, or to the primary key columns themselves? You shouldn't care about the PK index; just rebuild it after the refresh.
A primary key of (1, 2, 3, 4) which is auto-generated from an index.
Oracle uses sequence generators and triggers (pre-12c), identity columns (12c and later), or application code to populate surrogate keys.
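For anyone unfamiliar with the two patterns, here is a rough sketch; the table, column, and sequence names are invented for illustration, not from any real schema:

```sql
-- Pre-12c pattern: a sequence plus a BEFORE INSERT trigger fills the key.
CREATE SEQUENCE orders_seq;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  status   VARCHAR2(20)
);

CREATE OR REPLACE TRIGGER orders_bi
BEFORE INSERT ON orders
FOR EACH ROW
WHEN (NEW.order_id IS NULL)
BEGIN
  :NEW.order_id := orders_seq.NEXTVAL;
END;
/

-- 12c and later: an identity column does the same job declaratively.
CREATE TABLE orders_12c (
  order_id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  status   VARCHAR2(20)
);
```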
If you are using Oracle, then insert or merge will preserve the PK values if you disable all triggers that populate the surrogate key. My EZ button disables foreign key constraints and triggers before merging or truncating and inserting, and then rebuilds the indexes after the refresh. Our data, with the exception of sensitive information, is an exact copy of production that analysts use to debug applications or fix data.
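A minimal sketch of that idea; the orders table and the PROD database link are assumptions for illustration:

```sql
-- Stop the BEFORE INSERT trigger from overwriting keys coming from prod.
ALTER TABLE orders DISABLE ALL TRIGGERS;

-- Keys now arrive exactly as they are in production.
INSERT INTO orders SELECT * FROM orders@PROD;

ALTER TABLE orders ENABLE ALL TRIGGERS;
```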
Data Pump is a good candidate for moving data. I played with it a little when it first came out, using the network mode, but performance was slow. It has improved greatly over the years, so I need to go back and look at it again.
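For reference, Data Pump's network mode pulls rows directly over a database link with no intermediate dump file; a hypothetical invocation (the credentials, link, and schema names are placeholders) might look like:

```shell
impdp scott/tiger@devdb \
  NETWORK_LINK=PROD \
  SCHEMAS=myapp \
  TABLE_EXISTS_ACTION=REPLACE
```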
I am actually using Oracle 11, though it's been a long while since I last dealt with tables, indexes, and such, so forgive my loose use of terminology.
That was one of the concerns I was trying to allude to: the surrogate keys getting pre-populated through triggers.
I hate to sound silly, but what is an EZ button?
And for rebuilding an index, what does that do exactly? I was trying to Google around and wasn't able to find anything that explains what it does.
Would it be possible to also "sync" sequences between environments? I know I can manually go in, select the max key value, and recreate the sequence in the target database starting from that number; I'm just curious whether there is an easier way.
Thanks for bearing with me
The EZ button is our term for procedures that call refresh procedures in packages organized by schema. It makes it easy for the developers to set up the development data to look just like the data in production and it is easy for me because I do not have to get involved.
The general flow is as follows:
Pre Processing - Disable foreign keys and triggers in all the tables according to functional area, usually an application scope.
Processing - To preserve test data, run the freshen procedure. This calls a procedure that uses MERGE and database links to update/insert.
To completely replace table data, run the refresh procedure. This calls a procedure that uses TRUNCATE and INSERT with database links.
On tables that use sequence generators to populate surrogate keys, I have written a procedure that determines the max value and resets the sequence.
Post Processing - Enable the foreign keys and triggers that were disabled during pre-processing; these are saved in a collection declared in the package body. If the freshen or refresh procedure fails, do nothing else. If it succeeds, rebuild all the indexes on tables used by the application and re-compute statistics for all schemas involved.
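Pieced together, the flow above for a single table might look roughly like the anonymous block below. This is a sketch, not the actual package: the orders table, its orders_bi trigger, the orders_customer_fk constraint, the orders_pk index, the orders_seq sequence, and the PROD link are all invented names, and the drop/recreate sequence reset is the 11g-era technique the thread describes:

```sql
DECLARE
  l_next NUMBER;
BEGIN
  -- Pre-processing: disable triggers and FK constraints.
  EXECUTE IMMEDIATE 'ALTER TABLE orders DISABLE ALL TRIGGERS';
  EXECUTE IMMEDIATE 'ALTER TABLE orders DISABLE CONSTRAINT orders_customer_fk';

  -- Refresh: full replace via TRUNCATE + INSERT over the database link.
  EXECUTE IMMEDIATE 'TRUNCATE TABLE orders';
  INSERT INTO orders SELECT * FROM orders@PROD;

  -- Freshen alternative: MERGE preserves locally created test rows.
  -- MERGE INTO orders d
  -- USING (SELECT * FROM orders@PROD) s
  -- ON (d.order_id = s.order_id)
  -- WHEN MATCHED THEN UPDATE SET d.status = s.status
  -- WHEN NOT MATCHED THEN INSERT (order_id, status)
  --   VALUES (s.order_id, s.status);

  -- Reset the sequence past the new max key (drop/recreate works on 11g;
  -- 12.2+ could use ALTER SEQUENCE ... RESTART instead).
  SELECT NVL(MAX(order_id), 0) + 1 INTO l_next FROM orders;
  EXECUTE IMMEDIATE 'DROP SEQUENCE orders_seq';
  EXECUTE IMMEDIATE 'CREATE SEQUENCE orders_seq START WITH ' || l_next;

  -- Post-processing: re-enable, rebuild the index, re-compute statistics.
  EXECUTE IMMEDIATE 'ALTER TABLE orders ENABLE CONSTRAINT orders_customer_fk';
  EXECUTE IMMEDIATE 'ALTER TABLE orders ENABLE ALL TRIGGERS';
  EXECUTE IMMEDIATE 'ALTER INDEX orders_pk REBUILD';
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);
END;
/
```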
To identify the tables that are used by an application I use a tag in the table comments. The tag is an abbreviation for the application followed by two colons, e.g. "MYAPP::". It is a brute force approach, but it gets the job done in just a few minutes for <50 tables per application; it takes about 45 minutes for a commercial database with 2500 tables. I have also written a generator that does the work of writing the procedures and the package. The generator was very handy for the large number of tables.
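Finding the tagged tables is then a simple query against the data dictionary; assuming the comments live in the current schema (use all_tab_comments with an OWNER filter otherwise), something like:

```sql
-- Find tables tagged for one application via the comment convention.
SELECT table_name
FROM   user_tab_comments
WHERE  comments LIKE '%MYAPP::%';
```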