
    expdp dumpfile transfer options

    user12299873

      Hi,

      We are planning to migrate a production database using expdp/impdp. Since the dump file set is quite large and we need to plan for minimal downtime:

      > Is it possible to transfer dump files while the export is still in progress? For example, while the export is writing to file 3, can we already transfer dump file 2?

        • 1. Re: expdp dumpfile transfer options
          Gaz in Oz

          When you say "bit huge", how many terabytes is it?

          You might be better off creating a standby in mount mode using Data Guard, letting it catch up to a point you are happy with, then stopping the Data Guard feed and opening the standby database... or take an RMAN backup of the database and restore it wherever you need it. There is no downtime with either method.
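
          For illustration, a minimal RMAN active-duplication sketch (the connect strings PRODDB and STBYDB are placeholders, and this assumes the auxiliary instance is already started NOMOUNT with a password file and network aliases in place):

          # connect to the source (target) and the new standby (auxiliary) instances
          rman target sys@PRODDB auxiliary sys@STBYDB

          # build the standby over the network directly from the live database
          RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;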

          • 2. Re: expdp dumpfile transfer options
            Dean Gagne-Oracle

            The answer is no; you never really know when a dump file is full and Data Pump has finished writing to it.  Can you perform a target-side export instead?  For example, you have source database A and target database B; from B you can run:

             

            expdp user/password network_link=db_link_defined_on_b_pointing_to_source_a directory=local_b_directory_object dumpfile=xxx ...

             

            This way, the export runs over the database link and the dump files are written directly on the target side.  We call this a network export.
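
            As a concrete sketch (the link name src_link, the directory dp_dir, and the paths below are placeholders, not names from this thread):

            -- run on target B as a suitably privileged user
            CREATE DATABASE LINK src_link CONNECT TO system IDENTIFIED BY password USING 'PRODDB';
            CREATE DIRECTORY dp_dir AS '/u01/app/dumps';

            # then, from the operating system on B:
            expdp system/password network_link=src_link directory=dp_dir dumpfile=full%U.dmp logfile=exp_full.log full=y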

             

            You may also be able to use a network import.  It is a single job that reads from the source database and, instead of writing a dump file, imports the data directly into the target database.  It is done like this:

             

            Again, you run this from the target database:

             

            impdp user/password network_link=db_link_defined_on_b_pointing_to_source_a directory=local_b_directory_object ...

             

            This one does not need a dump file, but it does need a directory object for the log file.  It will essentially run an export job on the source database and an import job on the target database.
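
            A hedged sketch, reusing the placeholder link and directory names from above:

            # run from the target database B; dp_dir is needed only for the log file
            impdp system/password network_link=src_link directory=dp_dir logfile=imp_full.log full=y parallel=4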

             

            We have seen customers move 3 TB of data in an hour.  Granted, they had the hardware to do that.

             

            Another potential way to do this is transportable tablespaces with Data Pump.  That approach has some limitations and restrictions, but if it works for you, it could be a lot faster.
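
            A rough sketch of the transportable tablespace flow (the tablespace name users_data and the datafile path are placeholders; in practice you would also verify the set is self-contained with DBMS_TTS.TRANSPORT_SET_CHECK):

            -- on the source: the tablespaces must be read-only while their datafiles are copied
            ALTER TABLESPACE users_data READ ONLY;

            # export only the metadata for the transport set
            expdp system/password directory=dp_dir dumpfile=tts.dmp transport_tablespaces=users_data logfile=tts_exp.log

            # copy tts.dmp and the datafiles to the target host, then run:
            impdp system/password directory=dp_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/users_data01.dbf' logfile=tts_imp.log

            -- back on the source, once the copy is complete:
            ALTER TABLESPACE users_data READ WRITE;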

             

            Hope some of this helps.

             

            Dean