Thanks, but not really.
I was really comparing the point at which sequences are exported in exp versus expdp. exp exports sequences last, so the exported NEXTVAL will be greater than the max value of the PK populated from the sequence. expdp, on the other hand, exports sequences before the data, so if there is insert activity into a PK using the sequence during the export, the exported sequence NEXTVAL may be lower than the max value of the PK.
This has certainly been my experience; I came up with the above as an explanation for the observed behaviour and wanted to see whether my assumptions were correct. If so, would changing expdp to export sequences after the data be considered a feature enhancement (bug fix)?
I am facing the same issue, and I have been thinking along the same lines as you. We always have to run the Data Pump export while the database is open and transactions are running. Did you ever find a solution, or do we have to exclude the sequences from the export and generate a separate script for them to run in the target schema, which might keep the sequences in sync? We certainly see the same issue on the PK columns.
Thanks for your time.
I can think of 2 options to resolve this issue; they may be crude, but they will help:
1) Take consistent export dumps using the FLASHBACK_TIME parameter. A consistent dump will not have this mismatch between PK column values and the sequences.
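A consistent export along these lines might look like the following parameter file (a sketch; the directory, dump file, and schema names are examples). FLASHBACK_TIME=SYSTIMESTAMP is accepted in newer releases; on older versions you may need an explicit TO_TIMESTAMP expression instead.

```
# expdp parameter file (names are examples)
directory=DATA_PUMP_DIR
dumpfile=app_consistent.dmp
logfile=app_consistent.log
schemas=APP
# export all table data as of the same point in time
flashback_time=systimestamp
```

Invoke it with something like: expdp system parfile=app_consistent.par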
2) Once the import completes successfully, and before anyone starts using the schema, generate a dynamic script for the sequences with a command like:

SELECT 'ALTER SEQUENCE ' || sequence_name ||
       ' START WITH ' || (last_number + 10000) ||  -- the offset can be any big value
       ' INCREMENT BY ' || increment_by || ';'
  FROM user_sequences;
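One caveat: on older Oracle releases ALTER SEQUENCE does not accept a START WITH clause (it raises ORA-02283; altering the start value only became supported with the RESTART clause in 18c), so the generated statements may fail. A portable alternative is the increment trick, sketched below for a single sequence (the sequence name and offset are examples):

```sql
-- Temporarily widen the increment, consume one value to jump past the
-- stale range, then restore the normal increment of 1.
ALTER SEQUENCE seq_holidays_pk INCREMENT BY 10000;
SELECT seq_holidays_pk.NEXTVAL FROM dual;
ALTER SEQUENCE seq_holidays_pk INCREMENT BY 1;
```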
I was testing expdp/impdp to get a good understanding of foreign key constraints, triggers, sequences, etc. The production sequence was at 25 but the test sequence was at 41. Nobody was using the database (well, I was, but not doing anything).
CREATE SEQUENCE seq_holidays_pk;
From the 10G SQL Reference: "If you omit both CACHE and NOCACHE, the database caches 20 sequence numbers by default."
So it started at 1 and cached 20 (LAST_NUMBER now at 21), used those 20, cached 20 more (LAST_NUMBER now at 41), then used 4 more (NEXTVAL at 25).
Well, there is the magic number 41 that was on our test database after the refresh.
Other than the ALTER SEQUENCE mentioned above I do not know of a way around the "cached" value that gets into the dump.
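The 41 can be seen directly in the data dictionary: for a cached sequence, LAST_NUMBER in USER_SEQUENCES reflects the high end of the cache written to disk, not the last value actually handed out by NEXTVAL, and it is that dictionary value that ends up in the dump. A quick check, using the sequence name from the example above:

```sql
-- For a cached sequence, LAST_NUMBER runs ahead of the values
-- actually used (here 41 vs. a NEXTVAL of 25).
SELECT sequence_name, last_number, cache_size
  FROM user_sequences
 WHERE sequence_name = 'SEQ_HOLIDAYS_PK';
```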
I too got the same problem, but fixed it with a script:
declare
  cursor cur_seq is
    select rownum as rn, k.sequence_owner, k.sequence_name, k.last_number
    from dba_sequences k where sequence_owner = 'APP';
  v_result_set sys_refcursor;
  v_seq_name dba_sequences.sequence_name%type;
  v_next_val number;
  v_last_val number;
  -- advance the given sequence once and return its NEXTVAL
  function f_seq (v_seq_name varchar2, v_owner varchar2) return number is
    v_out number;
  begin
    open v_result_set for
      'select '||v_owner||'.'||v_seq_name||'.nextval from dual';
    fetch v_result_set into v_out;
    close v_result_set;
    return v_out;
  end;
begin
  for pk in cur_seq loop
    v_seq_name := pk.sequence_name;
    v_next_val := f_seq(pk.sequence_name, pk.sequence_owner);
    v_last_val := pk.last_number;
    -- bump the sequence until NEXTVAL catches up with LAST_NUMBER
    if v_next_val < v_last_val then
      for th in 1 .. (v_last_val - v_next_val) loop
        v_next_val := f_seq(pk.sequence_name, pk.sequence_owner);
        dbms_output.put_line(pk.rn||'. v_seq_name := '||v_seq_name||
          ' v_next_val := '||v_next_val||' v_last_val := '||v_last_val);
      end loop;
    end if;
  end loop;
end;
/