Is this a public sequence? If yes, I believe Data Pump does not export public objects, so this means you are using the same sequence that already existed in your target database under a different schema. For example, if the current value is 100 in the exported schema and 97 in the target schema, and you keep using the already existing sequence, NEXTVAL will return 98, which is less than 100, and you will get this error.
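To illustrate the failure mode described above, here is a hypothetical reproduction (the table and sequence names are made up for the sketch):

```sql
-- Imported rows already use IDs up to 100, but the pre-existing
-- target sequence lags behind at 97 (next value 98).
CREATE TABLE clis_log (id NUMBER PRIMARY KEY);
INSERT INTO clis_log VALUES (100);           -- row that came in with the import

CREATE SEQUENCE clis_log_seq START WITH 98;  -- lagging sequence

INSERT INTO clis_log VALUES (clis_log_seq.NEXTVAL);  -- 98, OK
INSERT INTO clis_log VALUES (clis_log_seq.NEXTVAL);  -- 99, OK
INSERT INTO clis_log VALUES (clis_log_seq.NEXTVAL);  -- 100 -> ORA-00001: unique constraint violated
```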
For re-creating public objects after the import, you can generate the DDL with:
SELECT 'CREATE PUBLIC SYNONYM ' || synonym_name || ' FOR '
       || table_owner || '.' || table_name || ';' cmd
FROM dba_synonyms
WHERE table_owner IN ([list of schemas]) AND owner = 'PUBLIC';
I think it is not a public object.
Here's the DDL for one of the sequences:
CREATE SEQUENCE "CLIS_LOG_SEQ" MINVALUE 1 MAXVALUE 999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE ORDER NOCYCLE ;
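A common workaround in this situation (my suggestion, not something from the thread: the schema names follow the thread, but the start value below is an assumption you must read from the source first) is to re-create the sequence in the target schema so it starts past the highest imported value:

```sql
-- Find where the source sequence currently is.
SELECT last_number
  FROM dba_sequences
 WHERE sequence_owner = 'PROD'
   AND sequence_name  = 'CLIS_LOG_SEQ';

-- Re-create the target copy so NEXTVAL can never collide
-- with the imported rows (101 assumes last_number was 100).
DROP SEQUENCE dev.clis_log_seq;
CREATE SEQUENCE dev.clis_log_seq
  MINVALUE 1 MAXVALUE 999999999999999999999999999
  INCREMENT BY 1 START WITH 101
  NOCACHE ORDER NOCYCLE;
```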
How can I deal with this behavior in Data Pump? I wonder why it does not happen with the original exp.
Tks a lot
Run this on both databases and paste the output:
select object_name,object_type,owner,status from dba_objects where object_name = 'CLIS_LOG_SEQ';
It lists me these sequences. I'm working on the same database.
What I want to do here is duplicate a schema called PROD into a schema called DEV on the same instance, but I need to remap the tablespaces to organize the data, because the schema is huge.
OBJECT_NAME OBJECT_TYPE OWNER STATUS
------------------------------ ------------------- ------------------------------ -------
CLIS_LOG_SEQ SEQUENCE PROD VALID
CLIS_LOG_SEQ SEQUENCE DEV VALID
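The schema-plus-tablespace duplication described above can be expressed in a single impdp call (a sketch only; the dump file, directory, and tablespace names are placeholders I have invented):

```sql
-- impdp parameter sketch (run from the OS command line, not SQL*Plus):
-- impdp user/pass dumpfile=prod.dmp logfile=imp.log directory=DP_DIR
--   remap_schema=PROD:DEV
--   remap_tablespace=PROD_DATA:DEV_DATA,PROD_IDX:DEV_IDX
```

REMAP_TABLESPACE can be repeated (or comma-separated) once per tablespace pair, which avoids pre-creating the objects by hand.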
I've just done some quick tests and I can't reproduce that issue (even with a schema remap): the nextval of the sequence is always higher than on the source system (I even tried it with cache in case that was causing some kind of problem).
Are you extracting the original data with flashback_time=systimestamp to make sure the data does not further change during the extract?
Are you somehow referencing the wrong sequence in your selects - one that's in a different schema from the one you are remapping to?
I'm not using flashback_time to export. The command is simple:
expdp user/pass dumpfile= logfile= directory=
It is true that I export during the day, while users are connected and changing data. But doesn't Data Pump guarantee consistency? Is this parameter like CONSISTENT in the original exp?
I think the selects are right, because they are issued by the application. It calls the local sequence, i.e. the sequence of the schema we are connected to.
Do you think the parameter flashback_time=systimestamp would enforce consistency?
You should definitely make the export consistent - it is not by default in Data Pump (though in earlier versions you might have thought it was, due to misleading informational messages it used to write).
You can either use flashback_time=systimestamp, or flashback_scn=xxxxx (where you have to work out which SCN to use), or, as you are on 11.2, you can even use consistent=y, as Oracle reintroduced it to make upgrades from exp easier for people.
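A consistent export along those lines might look like the following (a sketch; the schema, file, and directory names are placeholders):

```sql
-- expdp parameter sketch (run from the OS command line, not SQL*Plus):
-- expdp user/pass schemas=PROD dumpfile=prod.dmp logfile=prod.log
--   directory=DP_DIR flashback_time=systimestamp

-- If you prefer flashback_scn, you can read the current SCN
-- from the source database first and pass that value instead:
SELECT current_scn FROM v$database;
```

With flashback_time=systimestamp, all table data is exported as of the moment the job starts, regardless of changes made while it runs.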
This might fix the issue, however if the number is being reset to 1 in some cases then it may be some other problem.
I've tried what you just said.
Adding the parameter consistent=y makes Oracle show a message like this at the beginning of the export:
"flashback_time=TO_TIMESTAMP('2013-09-03 12:18:12', 'YYYY-MM-DD HH24:MI:SS')"
It warns: Legacy Parameter CONSISTENT=TRUE, and replaces it with flashback_time.
Really, I did not know about this behavior of this "old" parameter. I really appreciate your help.
I was almost convinced it was a Data Pump bug or something.
Thanks a lot, Richard. I'll now update my scripts and run lots of tests.
If you have more advice on using this parameter, please share it with us.