As usual, the answer is "it depends".
It depends on your I/O throughput, CPU, the degree of parallelism you set for the job, some config options you can set on the job (whether it samples 100% or 20% of the segments, for instance), and even how the data is stored in the segments.
I really don't see how you could provide a correct estimate for them. This is one of those things that is really hard to predict time-wise, imo.
The only way to get an estimate is to actually run it, and monitor it through v$session and v$session_longops.
You can monitor using v$session_longops - try this
select b.job_name
, x.owner_name, z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p on (p.sql_id = y.sql_id)
where y.module = 'Data Pump Worker'
and p.time_remaining > 0;
You can't estimate the duration without actually running gather_database_stats. You can, however, monitor it while it is running (e.g. count the tables that have got updated statistics, based on DBA_TABLES).
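As a sketch of that monitoring idea: since DBA_TABLES records when each table was last analyzed, you can count how many tables have been gathered since the run began. The schema name and the :run_start bind variable below are hypothetical placeholders, not from the original post.

```sql
-- Hedged sketch: progress of a stats run over one imported schema.
-- :run_start is a hypothetical bind variable holding the timestamp
-- at which gather_database_stats was started.
select count(case when last_analyzed >= :run_start then 1 end) as tables_done,
       count(*)                                               as tables_total
from   dba_tables
where  owner = 'MY_SCHEMA';   -- replace with the imported schema name
```

Comparing tables_done against tables_total on each poll gives a rough completion ratio, though table sizes vary, so it is only an approximation.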
And if you run gather_database_stats, the default behaviour might well be to exclude tables which aren't "stale", while in a newly imported database the expectation is to gather stats on all tables.
(Similarly, the behaviour for gathering column histograms (method_opt) may differ between an existing database and a newly imported one.)
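To make that difference concrete, here is a hedged sketch of the two DBMS_STATS invocations; the method_opt value shown is illustrative, not a recommendation:

```sql
-- 'GATHER STALE' skips tables whose statistics are not flagged stale,
-- which may leave freshly imported tables untouched.
begin
  dbms_stats.gather_database_stats(
    options    => 'GATHER STALE',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');  -- histogram decision left to Oracle
end;
/

-- After an import you may instead want to gather on every object,
-- stale or not:
begin
  dbms_stats.gather_database_stats(
    options    => 'GATHER',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
end;
/
```

The choice between the two also changes how long the run takes, which is one more reason a time estimate made in advance is unreliable.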
Hemant K Chitale
Edited by: Hemant K Chitale on May 14, 2013 10:13 AM