
Sequence behavior after importing via DataPump

KeenOnOracle · Sep 3 2013 — edited Sep 3 2013

Hi Friends,

I'm running Oracle DB 11.2.0.3 on Windows 2008 R2 SP1 servers, and I ran into strange sequence behavior after importing a schema via Data Pump.

The export is done this way:

EXPDP userid/password dumpfile= logfile= directory= reuse_dumpfiles=y (nothing unusual)

The import is done this way:

IMPDP userid/password dumpfile= logfile= directory= remap_schema=(old_one:new_one) remap_tablespace=(old_ones:new_ones, so on...)

The import works fine. There are no errors, and the sequences are imported with no warnings.

The strange behavior is that the sequences seem to "reset". When we call a sequence, NEXTVAL returns a value lower than values already stored in the database, and we get ORA-00001 (unique constraint violated) a lot. The sequence should be past that value. I don't have this problem when using exp/imp, only via Data Pump.

For example, when we create an order that should receive the value 100 (because we already have 99 orders on the system), Oracle suggests a value lower than 99, or even 1.

We then wrote a script to check the CURRVAL of each sequence in the base schema and recreate the sequences in the new imported schema using that value as the starting value.

Has anyone faced this problem before?
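As a sketch of that workaround (the schema names PROD and DEV are taken from later in the thread; treat the exact DDL options as placeholders for whatever each sequence actually uses), a generated-DDL approach might look like this:

```sql
-- Hedged sketch: generate DDL to rebuild each DEV sequence so that it
-- starts past the corresponding PROD sequence's last used value.
-- For a NOCACHE sequence, DBA_SEQUENCES.LAST_NUMBER is the next value
-- that will be handed out, so it is safe as a START WITH value.
SELECT 'DROP SEQUENCE dev.' || sequence_name || ';' AS cmd
  FROM dba_sequences
 WHERE sequence_owner = 'PROD'
UNION ALL
SELECT 'CREATE SEQUENCE dev.' || sequence_name ||
       ' START WITH ' || last_number ||
       ' INCREMENT BY ' || increment_by ||
       ' NOCACHE ORDER NOCYCLE;' AS cmd
  FROM dba_sequences
 WHERE sequence_owner = 'PROD';
```

Spool the output and run it while connected with a privileged account; for cached sequences, LAST_NUMBER can be ahead of the last value actually used, which only makes the restart point more conservative.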

Any suggestions?

Tks a lot

This post has been answered by Richard Harrison . on Sep 3 2013

Comments

TSharma-0racle

Is this a public sequence? If yes, I believe Data Pump does not export public objects, so this means you are using the same sequence (one that already existed in your target database under a different schema). For example, if the current value is 100 in the exported schema and 97 in the imported schema, and you use the already existing sequence, you will get 98, which is less than 100, and you will get this error.

For importing public objects:

SELECT 'CREATE PUBLIC SYNONYM ' || synonym_name || ' FOR '
       || table_owner || '.' || table_name || ';' cmd
  FROM dba_synonyms
 WHERE table_owner IN ([list of schemas]) AND owner = 'PUBLIC';

KeenOnOracle

TSharma,

I think it is not a public object.

Here's the DDL for one of the sequences:

CREATE SEQUENCE  "CLIS_LOG_SEQ"  MINVALUE 1 MAXVALUE 999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE  ORDER  NOCYCLE ;

How can I deal with this behavior in Data Pump? I wonder why it does not happen with the traditional exp.

Tks a lot

TSharma-0racle

Run this on both databases and paste the output:

select object_name,object_type,owner,status from dba_objects where object_name = 'CLIS_LOG_SEQ';

KeenOnOracle

TSharma.

It lists these sequences. I'm working on the same database.

I want to duplicate a schema called PROD into a schema called DEV on the same instance, but I need to change the tablespaces to organize the data, because the schema is huge.

See:

OBJECT_NAME                    OBJECT_TYPE         OWNER                          STATUS
------------------------------ ------------------- ------------------------------ -------
CLIS_LOG_SEQ                   SEQUENCE            PROD                           VALID
CLIS_LOG_SEQ                   SEQUENCE            DEV                            VALID

Richard Harrison .

Hi,

I've just run some quick tests and I can't reproduce that issue (even with a schema remap): the nextval of the sequence is always higher than on the source system (I even tried it with CACHE in case that was causing some kind of problem).

Are you extracting the original data with flashback_time=systimestamp to make sure the data does not change further during the extract?

Are you somehow referencing the wrong sequence in your selects - one that's in a different schema to the one you are remapping to?

Regards,

Harry

KeenOnOracle

Richard,

I'm not using flashback_time on the export; the command is simply:

expdp user/pass dumpfile= logfile= directory=

It's true that I export during the day, while users are connected and changing data. But doesn't Data Pump guarantee consistency by default? Is this parameter like CONSISTENT in the traditional exp?

I think the selects are right, because they are done by the application. It calls the local sequence, I mean, the sequence of the schema we are connected to.

Do you think the parameter flashback_time=systimestamp would enforce consistency?

Richard Harrison .
Answer

Hi,

You should definitely make the export consistent - it's not the default in Data Pump (though in earlier versions you might have thought it was, due to the misleading informational messages it used to write).

You can use flashback_time=systimestamp, or flashback_scn=xxxxx (where you have to work out which SCN to use), or, as you are on 11.2, you can even use consistent=y, since Oracle reintroduced it to make upgrades from exp easier.

This might fix the issue; however, if the number is being reset to 1 in some cases, then it may be some other problem.
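A minimal sketch of a consistent export command along those lines (the schema, directory, and file names here are placeholders, not from the thread):

```shell
# Hedged sketch: export the PROD schema as of a single point in time,
# so table data and sequence definitions are captured consistently.
expdp system/password \
  schemas=PROD \
  directory=DP_DIR \
  dumpfile=prod_consistent.dmp \
  logfile=prod_consistent.log \
  flashback_time=systimestamp
```

Without flashback_time (or flashback_scn), each object is exported as of the time its own unload starts, so sequences dumped early can lag behind rows inserted later in the run.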

Cheers,

Harry

Marked as Answer by KeenOnOracle · Sep 27 2020
KeenOnOracle

Richard

I've tried what you just said.

Adding the parameter consistent=y makes Oracle show a message like this at the beginning of the export:

"flashback_time=TO_TIMESTAMP('2013-09-03 12:18:12', 'YYYY-MM-DD HH24:MI:SS')"

It warns: Legacy Parameter CONSISTENT=TRUE, and replaces it with flashback_time.

Really, I did not know about this behavior with this "old" parameter. I really appreciate your help.

I was almost starting to think it was a Data Pump bug or something.

Thanks a lot, Richard. I'll now update my scripts and run lots of tests.

If you have more advice on using this parameter, please share it with us.

Cheers


Post Details: Locked on Oct 1 2013 · Added on Sep 3 2013 · 8 comments · 5,442 views