I am currently moving tables around in order to reclaim free space in the DB. I am looking at Data Pump as a method of moving the data, as I need to avoid heavy redo log generation.
My understanding is that impdp loads data via direct path rather than conventional SQL inserts, thereby avoiding, or at least minimising, redo log generation.
After reading up on Data Pump and redo logging, I am no longer sure my understanding is accurate. I see people using techniques such as the one below to avoid redo log generation.
The solution is to import in two phases:
• First phase: import only the metadata (impdp CONTENT=METADATA_ONLY ....)
• Set all the tables/indexes to NOLOGGING mode
• Second phase: import only the data (impdp CONTENT=DATA_ONLY ....)
• Set all the tables/indexes back to LOGGING mode
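For what it's worth, the two-phase approach might be sketched like this (a sketch only: the connection string, directory object, dump file, and schema name are all placeholders, and the NOLOGGING step is shown as generated SQL in comments):

```shell
# Phase 1: import metadata only (all names below are placeholders)
impdp system/password DIRECTORY=dp_dir DUMPFILE=mydump.dmp \
      SCHEMAS=myschema CONTENT=METADATA_ONLY

# Set the imported tables/indexes to NOLOGGING, e.g. by generating
# ALTER statements in SQL*Plus:
#   SELECT 'ALTER TABLE '||owner||'.'||table_name||' NOLOGGING;'
#   FROM dba_tables WHERE owner = 'MYSCHEMA';
# (and similarly for dba_indexes)

# Phase 2: import the data only
impdp system/password DIRECTORY=dp_dir DUMPFILE=mydump.dmp \
      SCHEMAS=myschema CONTENT=DATA_ONLY

# Finally, set the tables/indexes back to LOGGING (same pattern as above).
```
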
My question is: Is my understanding of how impdp generates redo accurate? Does impdp generate only minimal redo by default?
Data Pump can load data using three different methods: direct path, external tables, and conventional path. I don't have a great understanding of all of them, but I would expect the last two to generate more redo than direct path. What you are proposing to get around the logging is the way I would attempt it. You could try this on a small scale to see whether it accomplishes what you need. (Try it first without disabling logging, then with logging disabled, and compare the difference.) From what I have been told, you will see a lot less redo that way.
Also, if you want to force direct path, you can specify that explicitly on the impdp command line.
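Assuming the parameter the answer has in mind is impdp's ACCESS_METHOD (my assumption; it is a support-directed parameter in some Oracle versions, so check its status on your release first), forcing direct path would look something like this, with all connection, directory, and file names as placeholders:

```shell
# Force Data Pump to attempt direct path loading for the import
# (ACCESS_METHOD is the assumed parameter; names are placeholders)
impdp system/password DIRECTORY=dp_dir DUMPFILE=mydump.dmp \
      SCHEMAS=myschema ACCESS_METHOD=DIRECT_PATH
```
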
Note that by forcing direct path, some data may fail to load: direct path does not support all object types.
Hope this helps.
You can do what you want to do in one step.
Set your tablespace(s) to NOLOGGING and then add this clause to the impdp command. The LOGGING clause is then removed from the CREATE TABLE statements, so the tables inherit the tablespace default, which makes the whole process much easier. Afterwards, set everything back to LOGGING.
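Assuming the clause meant here is TRANSFORM=SEGMENT_ATTRIBUTES:n (my reading of "the logging clause is removed from the create table statement"; that transform strips segment attributes so created tables inherit the tablespace defaults, including NOLOGGING), the one-step approach might look like this, with all names as placeholders:

```shell
# 1. Put the target tablespace(s) into NOLOGGING mode (run in SQL*Plus):
#      ALTER TABLESPACE users NOLOGGING;

# 2. Import with segment attributes stripped, so the created tables
#    inherit the tablespace default (NOLOGGING). Names are placeholders.
impdp system/password DIRECTORY=dp_dir DUMPFILE=mydump.dmp \
      SCHEMAS=myschema TRANSFORM=SEGMENT_ATTRIBUTES:n

# 3. Set the tablespace(s) back to LOGGING afterwards:
#      ALTER TABLESPACE users LOGGING;
```
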
I am trying to reclaim space in the DB, which involves moving objects around. The DB is set up with Data Guard replication to a DR site. I wanted to avoid excessive redo log generation because we have persistent bandwidth problems between the sites, which cause log shipping to slow down or fail.