SQL & PL/SQL

How to copy the entire schema inside DB

$a! · Jan 30 2019 (edited Feb 1 2019)

Hi All,

I have a schema that has multiple DB objects, such as tables, sequences, triggers and even scheduled jobs.

Now I want to copy the entire schema (including the scheduled jobs and all) into another empty schema I created in the same DB.

Is it possible to copy the entire schema in one go?

If so, please help me with how I can do it.

Please suggest.

Regards,

Sail

Comments

Saubhik

What is your database version? You can try with datapump.

Cookiemonster76

Export with expdp

Import into new schema with impdp
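
For the same-database, new-schema case in this thread, the import step also needs REMAP_SCHEMA. A rough sketch only; the credentials, directory object, and schema names are placeholders, not anything from this thread:

  expdp system/password schemas=source_schema directory=DATA_PUMP_DIR dumpfile=source_schema.dmp logfile=expdp_source_schema.log

  impdp system/password remap_schema=source_schema:new_schema directory=DATA_PUMP_DIR dumpfile=source_schema.dmp logfile=impdp_source_schema.log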

Echezona Obiora

Hello,

Export the schema using either the expdp or exp command. When importing it, change the schema name parameter (REMAP_SCHEMA for impdp, FROMUSER/TOUSER for imp) to the name of the new schema that you want to import it into. That will do.

If you like or know how to use the Oracle SQL Developer tool, you can create a connection for each of the two schemas and then use Database Copy in the Tools menu to copy the data from one schema to the other.

Thanks,

Sergei Krasnoslobodtsev

Hi.

1) data pump: on-line/off-line
2) expdp/impdp: on-line/off-line
3) Oracle SQL Dev -> Export/Import or Database Copy

P.S. The specific solution depends on the task

Manik

What is the version of Oracle, how big is the schema in terms of space, and roughly how many objects are we talking about?

Cheers,

Manik.

Mike Kutz

Variation on the "expdp/impdp":

  • Create DB link to self.
  • Use impdp over the DB link with the "change schema" task (a rough sketch follows).
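
A sketch of that variation; loopback_link, the TNS alias orcl, and the schema names are placeholders:

  -- inside the database, as a user with the CREATE DATABASE LINK privilege:
  CREATE DATABASE LINK loopback_link
    CONNECT TO source_schema IDENTIFIED BY password
    USING 'orcl';

Then from the OS, with no dump file (a directory object is still needed for the log file):

  impdp system/password network_link=LOOPBACK_LINK schemas=source_schema remap_schema=source_schema:new_schema directory=DATA_PUMP_DIR logfile=impdp_net.log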

MK

Saubhik

Sergey Krasnoslobodtsev wrote:

Hi.

1) data pump: on-line/off-line
2) expdp/impdp: on-line/off-line
3) Oracle SQL Dev -> Export/Import or Database Copy

P.S. The specific solution depends on the task

What is the difference between points 1 and 2?

Sergei Krasnoslobodtsev

There is no difference in the mechanism used.

The only difference is whether you use the tools or write your own code block with the dbms_datapump package.

Option 1 also occurs in practice.
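
For illustration, a minimal sketch of option 1: a schema export written directly against dbms_datapump, run as a suitably privileged user. DATA_PUMP_DIR is the default server directory object; SOURCE_SCHEMA and the file names are placeholders:

  DECLARE
    h         NUMBER;
    job_state VARCHAR2(30);
  BEGIN
    h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
    -- dump file and log file go to a server-side directory object
    DBMS_DATAPUMP.ADD_FILE(h, 'source_schema.dmp', 'DATA_PUMP_DIR',
                           filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    DBMS_DATAPUMP.ADD_FILE(h, 'source_schema.log', 'DATA_PUMP_DIR',
                           filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    -- restrict the job to the schema being copied
    DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', q'[= 'SOURCE_SCHEMA']');
    DBMS_DATAPUMP.START_JOB(h);
    DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
    DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
  END;
  /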

$a!

I tried to use the SQL Developer Database Copy mechanism, but had to stop in the middle since it was taking hours and hours.

Is there a better way to do it within SQL Developer?

Sergei Krasnoslobodtsev

This was already asked above: what is the volume of your schema, and how many objects?

$a!

Hi Sergey,

It is 1.6 GB.

For one table containing 0.5 million rows, the script ran for 3 hours and only 40,000 rows had been copied by then, so I had to stop the process in the middle.

$a!

520 objects. Some of the tables have around half a million rows.

Cookiemonster76

Datapump may be faster than whatever approach SQL Developer is using (assuming it isn't using datapump under the hood).

But at the end of the day it takes as long as it takes; breaking off part way through just means you have to wait longer until the job is done. Is there some limit on the amount of time it can take? If so, what is it and why?

If not, I suggest you just set it going and leave it to it (assuming you've made sure there's enough free disk space for an extra 1.6 GB on the server).

BEDE

SQL Developer does not use datapump: it's a client tool that can export whatever the user can see, and it does not require privileges on dbms_datapump. Datapump writes to directories accessible by the Oracle server, while SQL Developer writes to the client machine. It definitely performs much worse than datapump.

Sergei Krasnoslobodtsev

Hi.

1.6 GB is a small schema by modern standards.

With SQL Developer, the costs can come from the network, the client machine's disk, and the data being written slowly into the receiving database.

You can use expdp/impdp (or exp/imp), roughly like this:

1)

  expdp user/pwd schemas=<your_schema> directory=<your_directory_on_server> dumpfile=<your_schema>.dmp logfile=expdp_<your_schema>.log

  impdp user/pwd schemas=<your_schema> directory=<your_directory_on_server> dumpfile=<your_schema>.dmp logfile=impdp_<your_schema>.log

2) if you do not have access to a "directory" object on the server and there are no objects in encrypted tablespaces (ENCRYPTION), you can use

  exp userid=your_schema/passwd recordlength=65535 file=your_schema.dmp log=your_schema.log

  imp userid=your_schema/passwd recordlength=65535 file=your_schema.dmp log=imp_your_schema.log

3) other client tools...

Mike Kutz

Sergey Krasnoslobodtsev wrote:

Hi.

1.6 GB is a small schema by modern standards.

With SQL Developer, the costs can come from the network, the client machine's disk, and the data being written slowly into the receiving database.

You can use expdp/impdp (or exp/imp), roughly like this:

1)

  1. expdp user/pwd schemas=<your_schema> directory=<your_directory_on_server> dumpfile=<your_schema>.dmp logfile=expdp_<your_schema>.log
  2. impdp user/pwd schemas=<your_schema> directory=<your_directory_on_server> dumpfile=<your_schema>.dmp logfile=impdp_<your_schema>.log

2) if you do not have access to a "directory" object on the server and there are no objects in encrypted tablespaces (ENCRYPTION), you can use

  1. exp userid=your_schema/passwd recordlength=65535 file=your_schema.dmp log=your_schema.log
  2. imp userid=your_schema/passwd recordlength=65535 file=your_schema.dmp log=imp_your_schema.log

3) other client tools...

I thought they stopped delivering "exp" with the client/server software.

Also, you left out the NETWORK_LINK option, which bypasses the need to create a dump file in the first place.

MK

Mark D Powell

$a!, half a million rows is nothing and should export pretty quickly. It is not really the row count that matters; it is the row count times the average row length, i.e. the total size in MB or GB, that matters to performance, along with the presence of LOB columns, which mean LOB segments that also have to be exported. Datapump cannot export LOB segments in parallel, so LOB columns force single-threading of the base table export.

- -

I have seen cases where exporting to a file and importing is faster than using the NETWORK_LINK option.  It depends on the quantity of data, presence of LOB segments, network capacity, etc....

- -

Since you are creating the new user/objects in the same database, one option is to generate CTAS statements to create the new tables and populate them. Use DBMS_METADATA or queries against the RDBMS dictionary views to create the constraints, indexes, grants, packages, etc. You can create the tables and indexes using NOLOGGING, then back up the database when you are done.
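
A rough sketch of that approach, with new_schema, source_schema, my_table and my_table_ix as placeholder names:

  -- copy one table's data with CTAS (repeat per table):
  CREATE TABLE new_schema.my_table NOLOGGING AS
    SELECT * FROM source_schema.my_table;

  -- pull dependent-object DDL from the dictionary, e.g. for an index:
  SELECT DBMS_METADATA.GET_DDL('INDEX', 'MY_TABLE_IX', 'SOURCE_SCHEMA')
    FROM dual;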

- -

Something to consider: sometimes speed is not critical. That is, it does not make a lot of difference whether the task completes in 2 hours or 4 hours; the important point is how much DBA effort is required to support the task. So an export/import into a new schema that requires no DBA effort other than starting the export, checking for completion, launching the import and verifying completion may be better than a methodology that runs in half the time but requires you to actively perform the tasks.

- -

HTH -- Mark D Powell --

User_S2Z77

Hello Mike, very nice variation. impdp expects a dumpfile; how do I change it to use the DB link so that it doesn't require a dumpfile?
EDIT: I found the parameter:

NETWORK_LINK=source_database_link

Mike Kutz

🧟‍♂️☕️ cough fee

Please Read The Fine Manual for how to use the optional NETWORK_LINK parameter.

User_H3J7U

dbms_datapump(..., remote_link => sys.database_name)
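
Expanding that shorthand, a rough sketch of a same-database copy through dbms_datapump over a loopback DB link, with no dump file; loopback_link, SOURCE_SCHEMA and NEW_SCHEMA are assumed names:

  DECLARE
    h         NUMBER;
    job_state VARCHAR2(30);
  BEGIN
    h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                            job_mode    => 'SCHEMA',
                            remote_link => 'LOOPBACK_LINK');
    DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', q'[= 'SOURCE_SCHEMA']');
    -- land the objects in the new schema instead of the original one
    DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'SOURCE_SCHEMA', 'NEW_SCHEMA');
    DBMS_DATAPUMP.START_JOB(h);
    DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
    DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
  END;
  /
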
tsangsir

Since you are copying to a different user in the same DB, you will need some conversion wherever DB objects reference the original owner explicitly, e.g. in stored procedures and synonyms, or you may mix up the two schemas. Consider the following in the source schema...

create procedure clear_table as
begin
	delete from source_schema.my_table;
end;
/

create synonym my_synonym for source_schema.my_table;

create view my_view as
select * from source_schema.my_table;

create public synonym pub_synonym for source_schema.my_table;

...
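
After the copy, a query along these lines can surface leftover compiled references to the source schema (a sketch only, run as a privileged user; NEW_SCHEMA and SOURCE_SCHEMA are placeholders, and synonyms and view texts need separate checks against dba_synonyms and dba_views):

  SELECT name, type
    FROM dba_dependencies
   WHERE owner = 'NEW_SCHEMA'
     AND referenced_owner = 'SOURCE_SCHEMA';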
