8 Replies · Latest reply: Sep 11, 2013 7:41 AM by marcusafs

Moving Data Between Environments

chillychin Newbie

I am pretty sure I know the answer, but I was curious whether someone out there has devised a working solution.

 

Would anyone know how to migrate data between environments?

 

I have development (DEV), QA, and production (PROD) environments.

 

Obviously we have indexes and keys, and these keys are different in each environment.

 

I was wondering: is there any way to "sync" the data between the environments, including indexes and their values?

 

I know we can generate scripts to copy the index data and table structure, but I'm actually looking specifically to preserve the data between environments.

 

It's a real pain in the butt when I run a query in DEV and then need to change all the values to have it work in QA, due to the index values being different.

  • 1. Re: Moving Data Between Environments
    chillychin Newbie

    Sorry, I meant to say that I know we can generate scripts to copy the index and table structure, but I was looking to also copy the data contents of the tables exactly.

  • 3. Re: Moving Data Between Environments
    marcusafs Journeyer

    I like using a tool such as SQL Developer Data Modeler to define the structure and then use it to deploy changes from Dev to Test/QA to Production.  It can also be used to reverse-engineer from prod back to dev or test.  When moving data I have created an "EZ button" for my developers that uses database links with truncate-and-insert statements or merge statements for the data sync.  The EZ buttons are broken down by application.  For columns that do not exist in prod, I use default values in the EZ button for Dev or Test.
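
    A minimal sketch of the kind of statement such a button generates, assuming a database link named prod_link and a hypothetical EMPLOYEES table keyed on EMP_ID:

        -- Upsert local rows from production over the prod_link database link
        MERGE INTO employees tgt
        USING (SELECT emp_id, last_name, dept_id
                 FROM employees@prod_link) src
           ON (tgt.emp_id = src.emp_id)
        WHEN MATCHED THEN
          UPDATE SET tgt.last_name = src.last_name,
                     tgt.dept_id   = src.dept_id
        WHEN NOT MATCHED THEN
          INSERT (emp_id, last_name, dept_id)
          VALUES (src.emp_id, src.last_name, src.dept_id);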

     

    Marcus Bacon

  • 4. Re: Moving Data Between Environments
    chillychin Newbie

    I actually haven't used Data Pump before myself.

     

    Would using an insert or merge preserve the primary key index values?

     

    Such as, if in PROD I have a primary key of (1, 2, 3, 4) which is auto-generated from an index:

     

    1 X

    2 Y

    3 Z

    4 A

     

    I would prefer to see the exact same layout in DEV:

     

    1 X

    2 Y

    3 Z

    4 A

  • 5. Re: Moving Data Between Environments
    marcusafs Journeyer

    Some of your terminology makes me wonder if you are using Oracle.

    Would using an insert or merge preserve the primary key index values?

    Are you referring to the index that enforces the primary key, or to the primary key columns?  You shouldn't care about the PK index; just rebuild it after the refresh.

     

    primary key of (1, 2, 3, 4) which is auto generated from an index

    Oracle uses sequence generators and triggers (pre-12c; 12c and later can have identity columns) or application code to populate surrogate keys.
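
    For illustration, a typical pre-12c surrogate-key trigger looks roughly like this (sequence, trigger, and table names are hypothetical):

        CREATE SEQUENCE employees_seq;

        CREATE OR REPLACE TRIGGER employees_bi
          BEFORE INSERT ON employees
          FOR EACH ROW
        BEGIN
          -- Populate the surrogate key only when the application did not supply one
          IF :NEW.emp_id IS NULL THEN
            :NEW.emp_id := employees_seq.NEXTVAL;
          END IF;
        END;
        /

        -- 12c alternative: an identity column, with no separate trigger or sequence to manage
        -- CREATE TABLE employees (emp_id NUMBER GENERATED BY DEFAULT AS IDENTITY, ...);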

     

    If you are using Oracle, then insert or merge will preserve the PK if you disable all triggers that populate the surrogate key.  My EZ button disables foreign key constraints and triggers before merging or truncating and inserting, and then rebuilds the indexes after the refresh.  Our data, with the exception of sensitive information, is an exact copy of production that analysts use to debug applications or fix data.
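
    That disable/refresh/enable cycle might look like this for a single table (all object names hypothetical):

        -- Pre: drop the guards so copied rows keep their production key values
        ALTER TABLE employees DISABLE CONSTRAINT emp_dept_fk;
        ALTER TRIGGER employees_bi DISABLE;

        -- Full replace: truncate, then copy production rows with their original keys
        TRUNCATE TABLE employees;
        INSERT INTO employees (emp_id, last_name, dept_id)
          SELECT emp_id, last_name, dept_id FROM employees@prod_link;
        COMMIT;

        -- Post: restore the guards and rebuild the PK index
        ALTER TRIGGER employees_bi ENABLE;
        ALTER TABLE employees ENABLE CONSTRAINT emp_dept_fk;
        ALTER INDEX employees_pk REBUILD;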

     

    Data Pump is a good candidate for moving data.  I played with it a little when it first came out, using the network scenario, but performance was slow.  Performance has improved greatly over the years, so I need to go back and look at it again.
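
    As a sketch, a network-mode schema refresh with Data Pump needs no intermediate dump file (link, schema, and directory names here are hypothetical):

        impdp qa_user@qadb \
          network_link=prod_link \
          schemas=MYAPP \
          table_exists_action=replace \
          directory=DATA_PUMP_DIR \
          logfile=myapp_refresh.log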

  • 6. Re: Moving Data Between Environments
    Priyasagi Pro

    Hi,

    The following discussion may be helpful:

    Sync between two tables.

  • 7. Re: Moving Data Between Environments
    chillychin Newbie

    I am actually using Oracle 11, though it's been a long while since I last dealt with tables, indexes, and the like, so forgive my loose use of terminology.

     

    That was one of the concerns I was trying to allude to: the surrogate keys getting pre-populated through triggers.

     

    I hate to sound silly, but what is an EZ button?

     

    And for rebuilding an index, what does that do exactly?  I was trying to google around and wasn't able to find anything that talks about what it does.

     

    Would it be possible to "sync" sequences between environments too?  I know I can manually go in, select the max key value, and recreate the sequence in the target database using that number; I'm just curious whether there is also an easier way.

     

    Thanks for bearing with me

  • 8. Re: Moving Data Between Environments
    marcusafs Journeyer

    The EZ button is our term for procedures that call refresh procedures in packages organized by schema.  It makes it easy for the developers to set up the development data to look just like the data in production, and it is easy for me because I do not have to get involved.

     

    The general flow is as follows:

     

    Pre-processing - Disable foreign keys and triggers on all the tables in a functional area, usually an application scope.

    Processing - To preserve test data, run the freshen procedure.  This calls the procedure that uses MERGE and database links to update/insert.

                 To completely replace table data, run the refresh procedure.  This calls the procedure that uses TRUNCATE and INSERT with database links.

                 On tables that use sequence generators to populate surrogate keys, I have written a procedure that determines the max value and resets the sequence (see the sketch after this list).

    Post-processing - Enable the foreign keys and triggers that were disabled; these are saved in a collection declared in the package body.  If the freshen or refresh procedure fails, do nothing else.  If it succeeds, rebuild all the indexes on tables used by the application and re-compute statistics for all schemas involved.
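
    A sketch of that sequence-reset step (all names hypothetical; drop-and-recreate, since a sequence cannot simply be rewound in place on 11g):

        DECLARE
          v_start NUMBER;
        BEGIN
          -- Resume the sequence just past the highest key value copied from production
          SELECT NVL(MAX(emp_id), 0) + 1 INTO v_start FROM employees;
          EXECUTE IMMEDIATE 'DROP SEQUENCE employees_seq';
          EXECUTE IMMEDIATE 'CREATE SEQUENCE employees_seq START WITH ' || v_start;
        END;
        /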

     

    To identify the tables that are used by an application, I use a tag in the table comments.  The tag is an abbreviation for the application followed by two colons, e.g. "MYAPP::".  It is a brute-force approach, but it gets the job done in just a few minutes for <50 tables per application.  It takes about 45 minutes for a commercial database with 2500 tables.  I have also written a generator that does the work of writing the procedures and the package.  The generator was very handy for the large number of tables.
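
    Finding the tagged tables is then a simple dictionary query (assuming the MYAPP:: convention above):

        -- List every table tagged for the MYAPP application
        SELECT table_name
          FROM user_tab_comments
         WHERE comments LIKE 'MYAPP::%';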
