
    Schema Export using DBMS_DATAPUMP is extremely slow

    Pyrocks
      Hi,

      I created a procedure that duplicates a schema within a given database by first exporting the schema to a dump file using DBMS_DATAPUMP and then importing the same file (I can't use a network link because it fails most of the time).

      My problem is that a regular schema Data Pump export takes about 1.5 minutes, whereas the export using DBMS_DATAPUMP takes about 10 times longer - somewhere in the range of 14 minutes.

      Here is the code of the procedure that duplicates the schema:
      CREATE OR REPLACE PROCEDURE MOR_DBA.copy_schema3 (
                                                source_schema in varchar2, 
                                                destination_schema in varchar2, 
                                                include_data in number default 0,
                                                new_password in varchar2 default null,
                                                new_tablespace in varchar2 default null
                                              ) as
        h   number;
        js  varchar2(9); -- COMPLETED or STOPPED
        q   varchar2(1) := chr(39);
        v_old_tablespace varchar2(30);
        v_table_name varchar2(30);
      
      BEGIN
         /* open a new schema level export job */
         h := dbms_datapump.open ('EXPORT',  'SCHEMA');
        
         /* attach a file to the operation */
         dbms_datapump.add_file (h, 'COPY_SCHEMA_EXP' || copy_schema_unique_counter.NEXTVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
      
         /* restrict to the schema we want to copy */
         dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
         
         /* apply the data filter if we don't want to copy the data */
         IF include_data = 0 THEN
            dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
         END IF;
         
         /* start the job */
         dbms_datapump.start_job(h);
      
         /* wait for the job to finish */
         dbms_datapump.wait_for_job(h, js);
      
         /* detach the job handle and free the resources */
         dbms_datapump.detach(h);
      
      
         /* open a new schema level import job */
         h := dbms_datapump.open ('IMPORT',  'SCHEMA');
        
         /* attach a file to the operation */
         dbms_datapump.add_file (h, 'COPY_SCHEMA_EXP' || copy_schema_unique_counter.CURRVAL || '.DMP', 'LOCAL_DATAPUMP_DIR');
      
         /* restrict to the schema we want to copy */
         dbms_datapump.metadata_filter (h, 'SCHEMA_LIST',q||source_schema||q);
      
         /* remap the importing schema name to the schema we want to create */      
         dbms_datapump.metadata_remap(h,'REMAP_SCHEMA',source_schema,destination_schema);
        
         /* remap the tablespace if needed */
         IF new_tablespace IS NOT NULL THEN
            select default_tablespace
            into v_old_tablespace
            from dba_users
            where username=source_schema;
            
            dbms_datapump.metadata_remap(h,'REMAP_TABLESPACE', v_old_tablespace, new_tablespace);
         END IF;
        
         /* apply the data filter if we don't want to copy the data */
         IF include_data = 0 THEN
            dbms_datapump.data_filter(h,'INCLUDE_ROWS',0);
         END IF;
      
         /* start the job */
         dbms_datapump.start_job(h);
      
         /* wait for the job to finish */
         dbms_datapump.wait_for_job(h, js);
      
         /* detach the job handle and free the resources */
         dbms_datapump.detach(h);
        
         /* change the password as the new user has the same password hash as the old user,
         which means the new user can't login! */
         execute immediate 'alter user '||destination_schema||' identified by '||NVL(new_password, destination_schema);
         
         /* finally, remove the dump file */
         utl_file.fremove('LOCAL_DATAPUMP_DIR','COPY_SCHEMA_EXP' ||copy_schema_unique_counter.CURRVAL|| '.DMP');
      
      /*EXCEPTION
         WHEN OTHERS THEN    --CLEAN UP IF SOMETHING GOES WRONG
            SELECT t.table_name
            INTO v_table_name
            FROM user_tables t, user_datapump_jobs j
            WHERE t.table_name=j.job_name
            AND j.state='NOT RUNNING';
            
            execute immediate 'DROP TABLE  ' || v_table_name || ' PURGE';
            
            RAISE;*/
      end copy_schema3;
      /
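      For reference, this is how I invoke it (the copy_schema_unique_counter sequence and the LOCAL_DATAPUMP_DIR directory object already exist in my database; the schema names below are just examples):

      begin
         mor_dba.copy_schema3 (
            source_schema      => 'SRC_SCHEMA',   -- example name
            destination_schema => 'DST_SCHEMA',   -- example name
            include_data       => 1,              -- copy the rows too
            new_password       => 'new_pwd',      -- example password
            new_tablespace     => null            -- keep the source default tablespace
         );
      end;
      /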
      The import part of the procedure takes about 2 minutes, which is the same time a regular Data Pump import takes on the same schema.
      If I disable the import part completely, the export alone still takes about 14 minutes.
      Does anyone know why the export using DBMS_DATAPUMP takes so long?

      Thanks.
        • 1. Re: Schema Export using DBMS_DATAPUMP is extremely slow
          Srini Chavali-Oracle
          Please post details of your OS and database versions. I would suggest you enable tracing for this code to see what exactly is taking up the 14 minutes.

          How To Use The New 11g Events++ Syntax For Easier SQL Tracing Of Datapump Operations? (Doc ID 813737.1)
          Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump (Doc ID 286496.1)
          Recommended Method for Obtaining 10046 trace for Tuning (Doc ID 376442.1)
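
          As a sketch, the Events++ syntax from the first note looks like this (this is my reading of the note, so treat it as a starting point; level 12 traces binds and waits):

          -- trace the Data Pump master (dm) and worker (dw) processes
          alter system set events 'sql_trace {process: pname = dw | pname = dm} level=12';

          -- reproduce the slow export, then switch tracing off
          alter system set events 'sql_trace {process: pname = dw | pname = dm} off';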

          HTH
          Srini
          • 2. Re: Schema Export using DBMS_DATAPUMP is extremely slow
            Pyrocks
            Hi,

            Thanks a lot for the reply.
            I'm working with 11.2.0.1 on Windows 2003 R2, 32-bit.

            I will take a look at these documents and see if I can find anything useful...
            • 3. Re: Schema Export using DBMS_DATAPUMP is extremely slow
              Pyrocks
              Hi,

              I've used the Events++ syntax from Doc ID 813737.1 and generated 2 trace files:
              instanceid_dm00_8004.trc = 2.5 MB
              instanceid_dw00_5112.trc = 80(!) MB

              Is there anything in particular I should look for in those trace files?
              Will I see related issues if I run an AWR report?
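
              If AWR does help, the report I would generate (with the standard script shipped with the database, assuming the Diagnostics Pack is licensed) is the one for the snapshots that bracket the export window:

              -- run from SQL*Plus as a DBA; prompts for the snapshot range and report name
              @?/rdbms/admin/awrrpt.sql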

              Thanks.
              • 4. Re: Schema Export using DBMS_DATAPUMP is extremely slow
                Pyrocks
                Hi,

                I ran tkprof on the DM trace file with a command along these lines (the report file name is my own choice):
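
                tkprof D:\Oracle\diag\rdbms\instanceid\instanceid\trace\instanceid_dm00_8004.trc dm00_report.txt sort=prsela,execpu,fchela

                This is what I found: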
                Trace file: D:\Oracle\diag\rdbms\instanceid\instanceid\trace\instanceid_dm00_8004.trc
                Sort options: prsela  execpu  fchela  
                ********************************************************************************
                count    = number of times OCI procedure was executed
                cpu      = cpu time in seconds executing 
                elapsed  = elapsed time in seconds executing
                disk     = number of physical reads of buffers from disk
                query    = number of buffers gotten for consistent read
                current  = number of buffers gotten in current mode (usually for update)
                rows     = number of rows processed by the fetch or execute call
                ********************************************************************************
                
                SQL ID: bjf05cwcj5s6p
                Plan Hash: 0
                BEGIN :1 := sys.kupc$que_int.receive(:2); END;
                
                
                call     count       cpu    elapsed       disk      query    current        rows
                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                Parse        3      0.00       0.00          0          0          0           0
                Execute    229      1.26     939.00         10       2445          0          66
                Fetch        0      0.00       0.00          0          0          0           0
                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                total      232      1.26     939.00         10       2445          0          66
                
                Misses in library cache during parse: 0
                Optimizer mode: ALL_ROWS
                Parsing user id: SYS   (recursive depth: 2)
                
                Elapsed times include waiting on following events:
                  Event waited on                             Times   Max. Wait  Total Waited
                  ----------------------------------------   Waited  ----------  ------------
                  wait for unread message on broadcast channel
                                                                949        1.01        936.39
                ********************************************************************************
                What does "wait for unread message on broadcast channel" mean, and why did it take 939 seconds (more than 15 minutes)?
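
                While the job runs I can also watch what the Data Pump sessions are currently waiting on with a query like this (joining the documented DBA_DATAPUMP_SESSIONS view to V$SESSION):

                -- show the current wait event of each session attached to a Data Pump job
                select s.sid, s.serial#, d.job_name, s.event, s.seconds_in_wait
                from   dba_datapump_sessions d, v$session s
                where  s.saddr = d.saddr;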
                • 5. Re: Schema Export using DBMS_DATAPUMP is extremely slow
                  Srini Chavali-Oracle
                  This may be a bug. See

                  Oracle Data Pump Export, EXPDP, is Slow on Partition Tables (Doc ID 1300895.1)

                  Another related MOS Doc may be

                  Expdp Hangs During Exporting Data Due to a Lock Situation (Doc ID 756834.1)
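
                  If the first note is the cause, the schema being exported should contain partitioned tables - a quick way to check (substitute the actual schema name):

                  -- list partitioned tables owned by the exported schema
                  select table_name, partitioning_type
                  from   dba_part_tables
                  where  owner = 'SOURCE_SCHEMA';   -- placeholder for the actual schema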

                  HTH
                  Srini