
Oracle Database Discussions


Simple insert into table hanging

AParm - Nov 10 2016 (edited Nov 11 2016)

Hi there,

I have a query that joins two tables, one with 2 million records and the other with 1 million. Both tables are accessed across a database link, and the query returns one record in less than a second. Despite that, a simple insert using the same query just hangs, and I don't understand why. The table is not locked; I have even tried dropping the table before the insert and recreating it, to make sure there are no locks.

If I look at the wait events, all I can see is this:

SQL*Net message from dblink

I am running 12c on Windows, and the database link points to a database in another pluggable database in the same container.
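
For illustration, a statement of the shape being described might look like this (table, column, and link names here are only placeholders, not taken from the actual system):

INSERT INTO local_results
  SELECT t1.*
    FROM big_table_one@other_pdb_link t1
    JOIN big_table_two@other_pdb_link t2
      ON t2.join_key = t1.join_key
   WHERE t1.some_column = :value;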

This post has been answered by AParm on Nov 11 2016

Comments

OrionNet
Hello Steve,

Check this link out for examples http://www.devx.com/dbzone/Article/30355/0/page/3*

Regards
stevepence
Much appreciated. That gives me two additional examples, for a grand total (with what's in Utilities) of five.

A couple of points really have me confused.

First is the mysterious parameter "object_path." One of the examples referenced is the following:

dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_EXPR', value => 'LIKE ''ALL_BILL%''', object_type => 'TABLE');

That's pretty clear, but my version (10.2) doesn't have a parameter called object_type. Mine has something called object_path, helpfully defined in the Packages Reference as "The object path to which the filter applies." Gosh, now I know what it means. :( If I look in the view TABLE_EXPORT_OBJECTS I see things that look kind of like objects, not paths, but no entry explicitly called 'TABLE'. So how would I duplicate this example using my version?

Another little mystery is the syntax of the NAME_LIST value. Might it be a single quoted string of comma-separated, unquoted entries? I suppose it might, but that's only a guess.

There are lots of other mysteries - the filter is obviously very powerful - but those two issues, understanding object_path and the NAME_LIST syntax, will get me a good way down the road.

Thanks,
Steve
665135
Hi Steve,

Let me see if I can clarify some of this. First, object_type vs. object_path. (I understand the confusion; we’re not always perfect about distinguishing the two.)

The object_path parameter was added to dbms_datapump.metadata_filter in 10.2. The object_type parameter was retained from 10.1 for backward compatibility. The routine accepts either one.

Metadata objects in the database belong to different types, e.g., TABLE, INDEX, etc. The set of object types supported by the data pump metadata API (dbms_metadata) is listed in PL/SQL Packages and Types Reference (10.2), Table 57-11. (dbms_datapump calls dbms_metadata internally.)

The set of object types in an export job are arranged in a tree. Each node in the tree has one or more path names. A node’s path name is the node’s name, optionally prefixed by the names of its ancestors, separated by slashes. If you run the expdp utility, the path names of the exported object types are displayed in the log, e.g.,

Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
etc.

Any path name denotes the named node and all of its descendants. In the example above, the path name TABLE (or SCHEMA_EXPORT/TABLE) would denote all four of the above object types, i.e., the schema’s tables and all of the tables’ dependent objects.
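
(If you want to see the full set of valid path names, the view Steve mentioned is a good place to look; each export mode has its own view, e.g., TABLE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, DATABASE_EXPORT_OBJECTS. A quick look, for example:

-- List the table-related object paths available to a schema-mode export
SELECT object_path, comments
  FROM schema_export_objects
 WHERE object_path LIKE '%TABLE%'
 ORDER BY object_path;)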

In practice, you can typically use an object type name as the object_path parameter and get the expected result. Thus, if you wanted to export SCOTT’s schema but omit SCOTT.EMP and its dependent indexes, grants, etc., you could write

dbms_datapump.metadata_filter(handle => my_handle, name => 'NAME_EXPR', value => '!=''EMP''', object_path => 'TABLE');

Second, the NAME_LIST syntax. You were close: it’s a quoted string of comma-separated quoted entries, e.g.,

dbms_datapump.metadata_filter(my_handle, 'SCHEMA_LIST','''SCOTT'', ''HR''');

(Note that you have to use '' inside the string in order to escape the quotes properly.)

You initially asked for examples, and I can try to provide some, but it would help if you gave me an idea of what sorts of things you want to do.

Regards,
Lee
stevepence
Dear Lee,

What an incredibly helpful post. Thanks for taking the time to do a good explanation of the object path. Huge. Have you thought of writing up a metalink note with this info? I bet it would be much appreciated.

I have several schemas we use for in-house development that I need to frequently "move" back and forth between development and QA, plus moving data from production to those other tiers. There are a number of different situations that come up. Typically I drop several schemas and import them intact, but then I have to monkey with running scripts to fix things up depending on the development situation (testing, migration, etc.). I'm seeing a lot of flexibility here in the Data Pump API to include and exclude different groups of, say, tables and packages and grants, with potentially different include/exclude criteria for each schema included in the import.

If I saw some fairly fancy, real-world examples (well commented), I could mentally put together the pieces of how the API works.

Thanks again for your great help.
Steve
665135
Hi Steve,

I’ve been delayed by the holidays and a cold. Here’s some more information for you plus a few examples.

(First, though, I’m wondering why you don’t simply use the expdp and impdp utilities to move the schemas back and forth, using the “exclude” parameter to filter out the objects you don’t want? That might be simpler than using the API.)
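
(By way of a sketch, a parameter file keeps the quoting simple; the schema, directory, file, and table names below are only placeholders:

SCHEMAS=SCOTT
DIRECTORY=DMPDIR
DUMPFILE=scott.dmp
EXCLUDE=TABLE:"IN ('EMP')"

and then run expdp system/password PARFILE=scott_exp.par. The impdp side accepts the same EXCLUDE syntax.)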

You wanted examples of ways to include/exclude different groups of objects. Assume this is the basic program:

DECLARE
h1 NUMBER;
BEGIN
h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE1','LATEST');
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');

You can use either SCHEMA_EXPR or SCHEMA_LIST to specify the schemas to be exported, either

DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''HR'')');

or

DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_LIST','''HR''');

For including/excluding individual objects and whole object types, there are two separate mechanisms:

1. To include or exclude individual named objects (e.g., tables, procedures, etc.) use the NAME_EXPR or NAME_LIST filter along with the object_path parameter:

DBMS_DATAPUMP.METADATA_FILTER(h1,'NAME_EXPR','LIKE ''PAYROLL%''','VIEW');

Result: the only views exported in the job are those with names that begin with 'PAYROLL'. (As you might imagine, the filter value 'LIKE ''PAYROLL%''' is used in a SQL expression in the query used to fetch the view definitions.) As I mentioned in the earlier post, the object_path parameter 'VIEW' will cause the correct dependent objects (grants, etc.) to be included/excluded along with the views.

2. To include or exclude an entire object type, use INCLUDE_PATH_EXPR, EXCLUDE_PATH_EXPR, INCLUDE_PATH_LIST, EXCLUDE_PATH_LIST. For example, if you want to exclude all functions, procedures and packages from the job:

DBMS_DATAPUMP.METADATA_FILTER(h1,'EXCLUDE_PATH_EXPR','IN (''FUNCTION'', ''PROCEDURE'',''PACKAGE'')');

(If you wanted to exclude a particular named procedure you would use the NAME_EXPR filter with the ‘PROCEDURE’ object path.)

One last point: be clear about the difference between INCLUDE and EXCLUDE. If you use an EXCLUDE_PATH_* filter, everything except the object types specified by the filter value will be in the job. If you use an INCLUDE_PATH_* filter, only the object types specified by the filter value will be in the job.
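
(As a quick sketch against the same handle h1, the inclusive counterpart of the earlier example would be:

DBMS_DATAPUMP.METADATA_FILTER(h1,'INCLUDE_PATH_EXPR','IN (''TABLE'',''VIEW'')');

which limits the job to tables and views plus their dependent objects.)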

Is this enough to get you started? If there are specific things you want to do and can’t figure out, feel free to post them.

Regards,
Lee
stevepence
Thanks, Lee. That helps quite a bit. I appreciate your time on this.

Steve
682429
I have been digging around for anything useful on DBMS_DATAPUMP and have to say this is the best set of examples/explanation that I have found.

I have run into one problem and wanted to see if anyone can answer this. We are running 10.2.0.4 and found that, due to our parameter settings for optimizer_features_enable, optimizer_index_cost_adj, and optimizer_index_caching, whenever we do a Data Pump export (command-line version) of many tables (say over 100), it just hangs. The workaround is to adjust these parameters and rerun it, but this is not a very good option, since many applications that run against the db depend on these settings and run very poorly if we change them. So I'm attempting to write an imitation of impdp/expdp as a database package/procedure (dbms_datapump calls) that adjusts these parameters at the session level, so we don't experience the hang and don't interfere with application processing.

The question I have is: is there a way to implement encryption_password? I am not seeing anything in the DBMS_DATAPUMP documentation for Oracle 10g; it appears this is only supported in 11g.

Any comments are greatly appreciated.

Thanks,

- Tim

Edited by: user2968083 on Jan 30, 2009 2:47 PM
99223
I'm having problems similar to the original poster's.

For this example, I'm connected as system.

[U2017067:]/cygdrive/c/dev/Insight/Manager/Database/dbsrc/dbo/packages>sqlplus system/betelgeuse@polecat

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Mar 30 12:12:01 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select count(*) from ods.product;

COUNT(*)
----------
30

If I write the following:

DBMS_DATAPUMP.METADATA_FILTER (handle=>dp_handle,
name=>'NAME_LIST',
value=>'(''PRODUCT'')',
object_path=>'TABLE');

I get the following result on the log:

Starting "ODS"."MARY":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ODS"."PRODUCT" 24.21 KB 30 rows
ORA-39165: Schema ('ODS') was not found.
ORA-39166: Object ('PRODUCT') was not found.
Master table "ODS"."MARY" successfully loaded/unloaded
******************************************************************************
Dump file set for ODS.MARY is:
C:\ORACLE\PRODUCT\ADMIN\POLECAT\DPDUMP\BRANDY.DMP
Job "ODS"."MARY" completed with 2 error(s) at 13:00:28


Did anything get exported or not? It says that 30 rows were exported, but then it tells me that the object was not found and the schema was not found. What gives? I just can't believe that this should be hard.

And what is the syntax for exporting more than one table using name_list? Can someone shed some light on name_list vs. name_expr? It seems like it ought to be obvious, yet any syntax I've tried for constructing a name_list of more than one item meets with run-time malformed expression errors.

All I want to do is pass an explicit list of tables whose names cannot be described using wild-card characters. I've searched everywhere for this, and as the OP said, the examples given are usually very trivial.
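
(Going by Lee's earlier quoting rule for SCHEMA_LIST, a two-table NAME_LIST would presumably look like this; PRODUCT and CUSTOMER are just example names:

DBMS_DATAPUMP.METADATA_FILTER (handle      => dp_handle,
                               name        => 'NAME_LIST',
                               value       => '''PRODUCT'',''CUSTOMER''',
                               object_path => 'TABLE');

That is only a guess from the pattern above, though.)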
OrionNet
Hello,

Make sure the ODS schema exists and that you can connect to it. Then use this example, replacing the schema name, table names, and directory (EXPORT_DIR in this example) with your own.
DECLARE
   h1   NUMBER;
BEGIN
   BEGIN
      h1 :=
         DBMS_DATAPUMP.open (operation   => 'EXPORT',
                             job_mode    => 'TABLE',
                             job_name    => 'MYTABLEJOB',
                             version     => 'COMPATIBLE');
   END;

   BEGIN
      DBMS_DATAPUMP.set_parallel (handle => h1, degree => 1);
   END;

   BEGIN
      DBMS_DATAPUMP.add_file (handle      => h1,
                              filename    => 'EXPDAT.LOG',
                              directory   => 'EXPORT_DIR',
                              filetype    => 3);
   END;

   BEGIN
      DBMS_DATAPUMP.set_parameter (handle   => h1,
                                   name     => 'KEEP_MASTER',
                                   VALUE    => 0);
   END;

   BEGIN
      DBMS_DATAPUMP.metadata_filter (handle   => h1,
                                     name     => 'SCHEMA_EXPR',
                                     VALUE    => 'IN(''MYSCHEMA'')');
   END;

   BEGIN
      DBMS_DATAPUMP.metadata_filter (
         handle   => h1,
         name     => 'NAME_EXPR',
         VALUE    => 'IN(''CUSTOMER'',''WORKERS'')'
      );
   END;

   BEGIN
      DBMS_DATAPUMP.add_file (handle      => h1,
                              filename    => 'mytable.dmp',
                              directory   => 'EXPORT_DIR',
                              filetype    => 1);
   END;

   BEGIN
      DBMS_DATAPUMP.set_parameter (handle   => h1,
                                   name     => 'INCLUDE_METADATA',
                                   VALUE    => 1);
   END;

   BEGIN
      DBMS_DATAPUMP.set_parameter (handle   => h1,
                                   name     => 'DATA_ACCESS_METHOD',
                                   VALUE    => 'AUTOMATIC');
   END;

   BEGIN
      DBMS_DATAPUMP.set_parameter (handle   => h1,
                                   name     => 'ESTIMATE',
                                   VALUE    => 'BLOCKS');
   END;

   BEGIN
      DBMS_DATAPUMP.start_job (handle => h1, skip_current => 0, abort_step => 0);
   END;

   BEGIN
      DBMS_DATAPUMP.detach (handle => h1);
   END;
END;
/
Regards
99223
Thanks. I ran your example, and it worked fine. Then I substituted your expression syntax in my code and it worked. Looks like my mistake was actually occurring because I was building the expression programmatically and forgot that doubled quotes are reduced to one when the string is displayed (for debugging purposes). I mistakenly thought that the doubled quotes needed to actually appear when I viewed the contents of the string.
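
(For anyone else building these filter strings in code, a minimal sketch of the quoting; the table names are only examples:

DECLARE
   TYPE name_tab IS TABLE OF VARCHAR2(30);
   tabs      name_tab := name_tab('PRODUCT', 'CUSTOMER');
   name_expr VARCHAR2(4000);
BEGIN
   FOR i IN 1 .. tabs.COUNT LOOP
      -- wrap each name in single quotes; '''' is how a literal quote is written
      name_expr := name_expr || CASE WHEN i > 1 THEN ',' END
                             || '''' || tabs(i) || '''';
   END LOOP;
   name_expr := 'IN (' || name_expr || ')';
   DBMS_OUTPUT.PUT_LINE(name_expr);  -- prints IN ('PRODUCT','CUSTOMER')
END;
/

The variable ends up holding IN ('PRODUCT','CUSTOMER') - single quotes around each name - which is exactly what a hand-written literal such as 'IN (''PRODUCT'',''CUSTOMER'')' produces.)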

Here are a couple of other questions:

1) Your example uses two set_parameter values that I don't see in the documentation for either Oracle 10 or Oracle 11 (I'm using 10): KEEP_MASTER and DATA_ACCESS_METHOD (AUTOMATIC). Are these documented anywhere? Any others that one might want to know about?

2) It appears that I must remove the dump file myself if one already exists by that name. Is that correct, or is there some setting on some parameter that will automatically overwrite any existing dump file?

3) Do I also need to delete the master table when I start up a new job?

4) Is there a maximum length that the lists and expressions can be? I'm running into "badly formed" errors when the string is very long. I haven't yet determined the exact longest length that works. When the string exceeds 4000 bytes, I get these errors:

39001: invalid argument value
39071: Value for NAME_EXPR is badly formed.
01756: quoted string not properly terminated

5) I have no problems running the command-line client expdp for the entire database. Yet when I try to export just a fraction of the tables using the API, I get out-of-process-memory errors. What am I doing wrong?


Thanks for your help.

Edited by: mstamper on Mar 30, 2009 2:36 PM

Edited by: mstamper on Mar 30, 2009 2:54 PM
Dean Gagne-Oracle
Hi,

To answer your first post: yes, your table with 30 rows was exported. Since you didn't list your complete PL/SQL block, it is difficult to say why the last two errors were reported. These errors are generally reported when Data Pump determines that filters that were specified did not match anything in the final job; without seeing the complete PL/SQL block, I can't tell why the job didn't think those objects were being exported.

To answer your other questions:

1) Your example uses two set_parameter values that I don't see in the documentation for either Oracle 10 or Oracle 11 (I'm using 10): KEEP_MASTER and DATA_ACCESS_METHOD (AUTOMATIC). Are these documented anywhere? Any others that one might want to know about?

Neither of these parameters is needed. The first tells the Data Pump job to keep the master table around, mostly for debugging purposes; it is exported in the dump file, so it can always be obtained. This is not something that you want to keep using: it will fill your schema with tables that won't be any use to you and will eat up valuable space. Also, if you name your job, you will only be able to run it once; after that you will either have to rename the job (because the table will already exist when the Data Pump job starts) or manually drop the master table before you run your job again. If you don't name your job, Data Pump creates a master table name for you using SYS_EXPORT_mode_nn, where mode is TABLE, FULL, SCHEMA, or TABLESPACE and nn is 01 through 99. If you keep the master table (KEEP_MASTER), then once you have run 99 jobs Data Pump can no longer create this table, because all of the names from 01 to 99 will be used, and your Data Pump job cannot be started until you start deleting these master tables. SO - IT IS RECOMMENDED THAT YOU DO NOT USE KEEP_MASTER FOR NORMAL USE.

As for DATA_ACCESS_METHOD, AUTOMATIC is the default, so it is not needed.

2) It appears that I must remove the dump file myself if one already exists by that name. Is that correct, or is there some setting on some parameter that will automatically overwrite any existing dump file?

This is correct. Starting in 11.1, there is a new parameter, REUSE_DUMPFILES=Y, that will do what you want, but in 10 there is nothing like that.
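
(On 11.1 and later, the command-line form is simply something like

expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp REUSE_DUMPFILES=YES

with the directory and file names being whatever you already use.)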

3) Do I also need to delete the master table when I start up a new job?

Answered in 1. Basically, if you name your job (JOB_NAME=my_datapump_job), then you will have to drop the master table before the next export/import job.
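
(A quick way to see what is lying around, and to clean up - the schema and job names below are only examples:

SELECT owner_name, job_name, state FROM dba_datapump_jobs;

DROP TABLE scott.MY_DATAPUMP_JOB;

The first query shows which jobs, and therefore which master tables, still exist; the DROP removes the leftover table for a job that is no longer running.)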

4) Is there a maximum length that the lists and expressions can be? I'm running into "badly formed" errors when the string is very long. I haven't yet determined the exact longest length that works. When the string exceeds 4000 bytes, I get these errors:

39001: invalid argument value
39071: Value for NAME_EXPR is badly formed.
01756: quoted string not properly terminated

Yes, the max is 4000 minus 6 or so, I think; the string you provide is wrapped in something like 'IN (STRING)'. This was fixed in the latest patch set for 10 and is fixed in 11. If you don't have the latest fix, you can always create a table, insert your names into it, and then use a NAME_EXPR filter of the form 'IN (SELECT name FROM my_table)' instead of listing all of the tables that you want exported.
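
(Sketched out, that workaround looks roughly like this; the table name and handle are placeholders:

CREATE TABLE my_export_names (name VARCHAR2(30));

INSERT INTO my_export_names VALUES ('PRODUCT');
INSERT INTO my_export_names VALUES ('CUSTOMER');
COMMIT;

DBMS_DATAPUMP.METADATA_FILTER (handle      => h1,
                               name        => 'NAME_EXPR',
                               value       => 'IN (SELECT name FROM my_export_names)',
                               object_path => 'TABLE');

The subquery is evaluated when the job runs, so the list can be as long as you like.)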

5) I have no problems running the command-line client expdp for the entire database. Yet when I try to export just a fraction of the tables using the API, I get out-of-process-memory errors. What am I doing wrong?

Maybe you can list your api calls and the errors that you are getting.

Thanks

Dean
99223
Thank you, Dean.

Everything is working now. Seems that errors in those expression strings will create all sorts of seemingly unrelated problems.

I patched up to 10.2.0.4, but I'm still having problems with strings that exceed 4000 bytes. I've changed to use a workaround as you suggested, but I'm wondering which exact patch set you are referring to that would fix this.
Dean Gagne-Oracle
Hi,

It is part of 10.2.0.4, but your compatible setting needs to be 10.2.0.4; a compatible setting of 10.2.0.3 may also work.
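
(You can check the current setting with

SELECT value FROM v$parameter WHERE name = 'compatible';

and compare it against your patch level.)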

Dean
Norbert.Hein
Oracle 11.2.0.2

Hi,
I have a similar problem and cannot find the right way:

I have a schema prod_schema on a PROD database and a schema test_schema on a TEST database, connected with a database link, and I want to refresh test_schema by using dbms_datapump in a procedure over the db link.

That works in principle, in TABLE mode as well as in DUMP mode.

But when I use job_mode = TABLE, I cannot find a way to also refresh my SEQUENCES (as well as the PACKAGES and FUNCTIONS). In Oracle Support Note 1297557.1 I found an example, but it doesn't work. I tried:

dbms_datapump.metadata_filter (
handle => job_handle,
name => 'INCLUDE_PATH_EXPR',
value => 'IN (''TABLE'',''SEQUENCE'')');


dbms_datapump.metadata_filter (
handle => job_handle,
name => 'NAME_EXPR',
value => 'LIKE (''S_%'')',
object_type => 'SEQUENCE');

I get back an error: ORA-39001: invalid argument value.

How can I get SEQUENCES (and PACKAGES) in TABLE mode, or does that simply not work?

--------------------------------------------------------------------------------
The second way is to use SCHEMA mode, but since the target schema is in a second database, the import then overwrites everything in the target schema, including the db links, special grants, java sources, and so on.

Is there a way to refresh test_schema in SCHEMA mode and update only the tables, sequences, packages, and indexes?

-----
Update: see 9728703 and Note 1323411.1.

Edited by: astramare