- Create an index file so that you can create the indexes AFTER you have imported the data. Do this by setting INDEXFILE to a filename and then running the import: no data will be imported, but a file containing the index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
- Place the file to be imported on a separate physical disk from the Oracle data files.
- Set LOG_BUFFER to a bigger value and restart Oracle (LOG_BUFFER is a static parameter, so a restart is required).
- Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
- Create a BIG tablespace with a BIG rollback segment inside it. Take all other rollback segments offline (except the SYSTEM rollback segment, of course). The rollback segment must be roughly as big as your biggest table (I think?)
- Use COMMIT=N in the import parameter file if you can afford it. This commits once per table instead of once per buffer, which is why the big rollback segment above is needed.
- Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of optimizer statistics.
- Remember to run the index file created earlier once the data has been imported.
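Putting the list above together, the whole run can be sketched as two imp passes plus the edited index file. Everything here — the file names, the SYSTEM user, the buffer size — is made up for illustration, and the REM comments are for the reader, not for imp:

```text
REM Pass 1: write the index definitions to a file; no rows are loaded
imp system/<password> FILE=export.dmp FULL=Y INDEXFILE=create_indexes.sql

REM Pass 2: load the data only, skipping indexes and statistics
imp system/<password> FILE=export.dmp FULL=Y INDEXES=N COMMIT=N STATISTICS=NONE BUFFER=10485760 LOG=imp_data.log

REM Afterwards: edit create_indexes.sql (CONNECT passwords), then run it
sqlplus /nolog @create_indexes.sql
```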
Besides all the stuff TSharma said, the big slowdown is probably the handling of the errors.
Three ways to deal with it off the top of my head:
Create a text file with SQL*Plus and load it with a SQL*Loader DIRECT path load (if you can), removing the duplicates before writing the text file (how to skip duplicates is a FAQ).
Create the table in a different schema without the constraint, then transfer the rows over to the real table with a statement that skips the duplicates. It may also help to do this on the originating side, then exp/imp with DIRECT=Y and RECORDLENGTH=65535.
Use the QUERY parameter on export to remove the duplicates (I haven't thought through whether that is possible).
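For the first approach, the loader side could be a direct path SQL*Loader run; a sketch with a hypothetical control file and made-up table/column names (the duplicates are assumed to have been filtered out while spooling the text file):

```text
-- load_t.ctl -- hypothetical control file for a DIRECT path load
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'real_t.dat'
INTO TABLE real_t
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, col1, col2)
```

Run it with something like sqlldr userid=scott/tiger control=load_t.ctl. Be aware that if duplicates do slip through, a direct path load does not reject them; it leaves the unique index in an UNUSABLE state instead.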
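For the second approach, the "statement that skips duplicates" can be an ordinary INSERT ... SELECT with a NOT EXISTS check against the target's key. A sketch, assuming a hypothetical staging table STAGE_T in a LOADER schema and a unique key on ID:

```sql
-- Copy rows from the unconstrained staging table into the real table,
-- skipping any row whose key already exists in the target.
INSERT INTO real_t (id, col1, col2)
SELECT DISTINCT s.id, s.col1, s.col2
FROM   loader.stage_t s
WHERE  NOT EXISTS (SELECT 1 FROM real_t r WHERE r.id = s.id);
```

If the staging table can hold several rows with the same key but different non-key values, DISTINCT alone won't collapse them; you would need a GROUP BY on the key, or a ROW_NUMBER() filter, to pick one row per key.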