I just want to verify my thinking and see if others agree.
When I used the old exp and imp I never had any trouble with sequences after import, even if the data was changing while the export ran.
With datapump I often get primary key violations caused by sequences allocating already-used numbers, after importing from an export taken while data was changing.
Since sequences do not operate within transactions, they will not be consistent with the data being exported. This was less of a problem with the old export because sequences were exported after the objects that use them, so if they had incremented in the meantime their value would be greater than any key in the exported data. With datapump the sequences are exported before the data, and that is the sequence start value that will be imported, even though the sequence may have been incremented after that point.
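One way to confirm this theory after an import is to compare each sequence's high-water mark against the maximum key in the table it feeds. A minimal sketch, assuming a hypothetical table HOLIDAYS with key HOLIDAY_ID fed by SEQ_HOLIDAYS_PK (substitute your own names; LAST_NUMBER in USER_SEQUENCES is the next value the dictionary would hand out past the cache):

```sql
-- If seq_next <= max_id, the imported sequence is behind the data
-- and will hand out duplicate keys.
select s.last_number as seq_next, t.max_id
  from (select last_number from user_sequences
         where sequence_name = 'SEQ_HOLIDAYS_PK') s,
       (select max(holiday_id) as max_id from holidays) t;
```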
Does this sound fair and reasonable? I just want to be sure that it is only my sequences that are out of sync and that my data is consistent.
Also, the old consistent=y parameter, which made the entire export consistent (not just the individual objects), is not used in datapump because datapump exports are always consistent. Is this correct?
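For reference, Data Pump does have an equivalent knob: the FLASHBACK_TIME (or FLASHBACK_SCN) parameter pins the whole export to a single point in time, much like the old consistent=y. A sketch of a hypothetical parameter file (directory, dump file name, schema and timestamp are all illustrative):

```
# expdp parfile (illustrative): export the whole schema as of one timestamp
directory=DATA_PUMP_DIR
dumpfile=scott_consistent.dmp
schemas=SCOTT
flashback_time="to_timestamp('2008-06-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
```

Without one of the FLASHBACK_* parameters, each object is only consistent with itself as of the moment it is unloaded, which is exactly why the sequences can drift from the table data.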
I was more comparing the point at which sequences are exported in exp and expdp. exp exports sequences last, so the nextval will be greater than the max value of the PK using the sequence; expdp exports the sequences before the data, so if there is insert activity on a PK using the sequence, the exported sequence nextval may be lower than the max value of the PK.
This has certainly been my experience; I came up with the above cause as an explanation for the observed behaviour and wanted to see if my assumptions were correct. And if so, would it be considered a feature enhancement (or bug fix) for expdp to export sequences after the data?
I am facing the same issue, and I have been thinking along the same lines as you. We always have to run the datapump export while the database is open and transactions are running. Did you find any solution to this, or do we have to exclude the sequences and then generate a separate script for them that can be executed in the target schema to keep the sequences in sync? We certainly have the same issue on the PK columns.
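That workaround can be sketched as follows: exclude sequences from the export (Data Pump supports EXCLUDE=SEQUENCE) and, after the data has finished unloading, spool a creation script from the source dictionary. The schema name is illustrative, and this assumes simple ascending, increment-by-1 sequences:

```sql
-- Run on the SOURCE after the data export completes, so the start
-- value is at least as high as any key already handed out.
-- LAST_NUMBER + CACHE_SIZE safely clears the current cache window.
select 'create sequence '||sequence_owner||'.'||sequence_name||
       ' start with '||(last_number + cache_size)||
       ' cache '||cache_size||';' as ddl
  from dba_sequences
 where sequence_owner = 'APP';
```

Running the spooled output on the target after the import recreates the sequences ahead of the imported data.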
I was testing expdp/impdp to get a good understanding of foreign key constraints, triggers, sequences, etc. The production sequence was at 25 but the test sequence was at 41. Nobody was using the database (well, I was, but not doing anything).
CREATE SEQUENCE seq_holidays_pk;
From the 10G SQL Reference: If you omit both CACHE and NOCACHE, the database caches 20 sequence numbers by default.
So it started at 1 and cached 20 (LAST_NUMBER now 21), used those 20 and cached 20 more (LAST_NUMBER now 41), then used 4 more (NEXTVAL at 25).
Well, there is the magic number 41 that was on our test database after the refresh.
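You can see this cache high-water mark directly in the dictionary; it is the value the export picks up, not the last value actually handed out. A quick check (sequence name from the example above):

```sql
-- LAST_NUMBER is the first value past the current cache window,
-- i.e. what ends up in the dump, not the session's last NEXTVAL.
select sequence_name, cache_size, last_number
  from user_sequences
 where sequence_name = 'SEQ_HOLIDAYS_PK';
```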
Other than the ALTER SEQUENCE mentioned above I do not know of a way around the "cached" value that gets into the dump.
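For completeness, the ALTER SEQUENCE fix-up looks like this. The gap of 16 is illustrative (a sequence imported at 25 while the table's max key is 41); the trick is to jump the increment once, consume one value, then restore it:

```sql
-- Hypothetical: imported sequence is 16 behind the table's max key.
alter sequence seq_holidays_pk increment by 16;
select seq_holidays_pk.nextval from dual;   -- consumes the whole gap
alter sequence seq_holidays_pk increment by 1;
```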
-- Fetch NEXTVAL of a named sequence via dynamic SQL
-- (the original hardcoded the APP schema; use the owner parameter instead)
create or replace function f_seq (v_seq_name varchar2, v_owner varchar2)
return number is
  v_result_set sys_refcursor;  v_out number;
begin
  open v_result_set for
    'select '||v_owner||'.'||v_seq_name||'.nextval from dual';
  fetch v_result_set into v_out;
  close v_result_set;
  return v_out;
end;
/
-- Call NEXTVAL until each sequence catches up with its LAST_NUMBER
declare
  cursor cur_seq is
    select rownum rn, sequence_owner, sequence_name, last_number
      from dba_sequences where sequence_owner = 'APP';
  v_next_val number;
begin
  for pk in cur_seq loop
    v_next_val := f_seq(pk.sequence_name, pk.sequence_owner);
    while v_next_val < pk.last_number loop
      v_next_val := f_seq(pk.sequence_name, pk.sequence_owner);
    end loop;
    dbms_output.put_line(pk.rn||'. '||pk.sequence_name||
      ' v_next_val := '||v_next_val||' v_last_val := '||pk.last_number);
  end loop;
end;
/