We have Oracle Standard Edition, which does not support the parallel and compression features of expdp.
The expdp dump file is around 90 GB and takes 45 minutes to create on the production server. The script then compresses the file with the gzip utility, which takes another 80 minutes.
Copying the compressed file from the production server to the staging server takes another 47 minutes.
We have automated the process, but expdp + compression + copy takes a long time (around 3 hours). On the staging server it then takes more than 4 hours to create the staging DB.
Is there any way I can improve the performance of these three operations?
Can I compress the file while it is being exported? I tried using pipes on Unix, but that doesn't work with expdp.
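For reference, a named pipe does work with the legacy exp client, because exp writes through the client's file handle, whereas expdp writes server-side into a DIRECTORY object. A rough sketch (credentials and paths are placeholders), though we would prefer to stay on Data Pump:

    mkfifo /tmp/exp_pipe                                  # named pipe
    gzip -c < /tmp/exp_pipe > /backup/exp_full.dmp.gz &   # compressor reads the pipe in the background
    exp system/xxxx full=y file=/tmp/exp_pipe             # legacy export writes into the pipe
    wait                                                  # let gzip drain the pipe and finish
    rm /tmp/exp_pipe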
We don't want to use a network link.
Does expdp write the dump files sequentially? If so, can I start gzipping them in parallel while the export is still running?
I also tried compressing with the gzip -1 option, but that increased the compressed file size by about 30% compared with the default level, which in turn increased the copy time to the staging server.
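Coming back to gzipping while the export is running, what I have in mind is something along these lines (only a sketch; the directory object, OS paths, chunk size, credentials and host names are placeholders): split the export into fixed-size chunks with FILESIZE and the %U substitution variable, and have a small watcher gzip and copy each chunk as soon as expdp has closed it and moved on to the next one.

    # Sketch only -- paths, credentials, host names and the 5G chunk size are placeholders.
    # Without PARALLEL, expdp fills exp_01.dmp, exp_02.dmp, ... one at a time,
    # so a chunk can be gzipped as soon as the following chunk file appears.

    DUMP_DIR=/u01/app/oracle/admin/ORCL/dpdump          # OS path behind DATA_PUMP_DIR

    expdp system/xxxx full=y directory=DATA_PUMP_DIR \
          dumpfile=exp_%U.dmp filesize=5G logfile=exp_full.log &
    EXP_PID=$!

    n=1
    while kill -0 "$EXP_PID" 2>/dev/null; do
        cur=$(printf 'exp_%02d.dmp' "$n")
        nxt=$(printf 'exp_%02d.dmp' $((n + 1)))
        if [ -f "$DUMP_DIR/$nxt" ]; then                 # current chunk is closed
            gzip "$DUMP_DIR/$cur" &&
                scp "$DUMP_DIR/$cur.gz" oracle@staging:/stage/dumps/ &
            n=$((n + 1))
        fi
        sleep 60
    done
    wait                                                 # let background gzip/scp jobs finish

    # compress and copy any chunks that were still open when the loop ended
    for f in "$DUMP_DIR"/exp_*.dmp; do
        [ -f "$f" ] && gzip "$f" && scp "$f.gz" oracle@staging:/stage/dumps/
    done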
Why 'do not support parallel'?
I understand you don't want to use a database link; I had this problem here too (I used expdp).
This is what I've done: a script that does
a full logical backup using expdp,
a bzip2 compression of the dump file,
and a transfer to the destination machine (roughly as in the sketch below).
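Stripped down, the script looks roughly like this (credentials, directory object, paths and the destination host below are placeholders, not my real values):

    #!/bin/sh
    # Simplified sketch of the script -- all names and paths are placeholders.

    STAMP=$(date +%Y%m%d)
    DUMP=exp_full_${STAMP}.dmp

    # 1) full logical backup with expdp (I used PARALLEL -- that needs Enterprise Edition)
    expdp system/xxxx full=y directory=DATA_PUMP_DIR \
          dumpfile=${DUMP} logfile=exp_${STAMP}.log parallel=4

    # 2) compress -- bzip2 gives a smaller file than gzip, but is slower
    bzip2 -9 /u01/dpdump/${DUMP}

    # 3) transfer to the destination machine
    scp /u01/dpdump/${DUMP}.bz2 oracle@destserver:/stage/dumps/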
It would have been far easier if I could have used a database link, but I couldn't.
However, I did use the parallel option in the expdp command.
Hope you find a good solution.
Thanks for your reply.
We use Oracle Standard Edition, and it doesn't support the parallel option in expdp. It doesn't support compression either.
Is there any way I can compress in parallel while the file is being exported?
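For example, would something like pigz (a parallel, multi-threaded implementation of gzip) be a reasonable way to at least cut the 80-minute compression step, assuming we are allowed to install it on the production server? Just a sketch with a placeholder path:

    pigz -p 8 /u01/dpdump/exp_full.dmp    # drop-in gzip replacement; -p sets the number of threads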