9 Replies · Latest reply: Apr 3, 2013 9:26 AM by 1000827

impdp network_link keeps filling the temp tablespace

936749 Newbie
Hi,
I'm doing an impdp over NETWORK_LINK and it fills up the temp tablespace.
Why does it use temp at all? I've now added a couple of datafiles, and the temp tablespace is at 160 GB.
The tables being inserted are rather large, and I'm running with PARALLEL=4 and CONTENT=DATA_ONLY.
Regards
933746
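
For reference, a minimal sketch of the kind of invocation described above; the connect string, database link, and schema name are placeholders, not taken from the post:

impdp system@target_db schemas=scott network_link=source_db_link \
    content=data_only parallel=4 table_exists_action=append \
    logfile=net_imp.log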
  • 1. Re: impdp network_link keeps filling the temp tablespace
    936749 Newbie
    anyone?
  • 2. Re: impdp network_link keeps filling the temp tablespace
    Dean Gagne Expert
    It uses the temp or undo tablespaces in case the import fails and you need to roll back the insert. I think temp is also used for sorting. Which database is using up the temp space, the source or the target?

    Dean
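
    One way to answer that question is to query V$TEMPSEG_USAGE on each side; a minimal sketch, run as a DBA on both source and target (nothing here is specific to Data Pump):

    -- Current temp consumption per user and segment type, in GB
    SELECT u.username, u.segtype,
           ROUND(SUM(u.blocks) * p.value / 1024 / 1024 / 1024, 2) AS gb_used
      FROM v$tempseg_usage u,
           (SELECT value FROM v$parameter WHERE name = 'db_block_size') p
     GROUP BY u.username, u.segtype, p.value
     ORDER BY gb_used DESC;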
  • 3. Re: impdp network_link keeps filling the temp tablespace
    936749 Newbie
    It's the target.
    Regards
  • 4. Re: impdp network_link keeps filling the temp tablespace
    Dean Gagne Expert
    My guess is it is for rollback.

    Dean
  • 5. Re: impdp network_link keeps filling the temp tablespace
    964829 Newbie
    Hi,

    We are having the same problem. Is there any documentation that describes this behavior? I also found bug 10396489, but it doesn't give any background on this "error" either. Is there any workaround?

    Thanks in advance
  • 6. Re: impdp network_link keeps filling the temp tablespace
    964829 Newbie
    No one?
  • 7. Re: impdp network_link keeps filling the temp tablespace
    966484 Newbie
    Hello:

    This is not a bug. The slave processes spawned for the PARALLEL parameter use up the TEMP tablespace, and the commit is performed at the end of the import. For further details, please refer to the problem and solution in the following note from Oracle:

    DataPump Network Mode Import Consumes Lots Of Temporary Segments In TEMP Tablespace (Doc ID 1499422.1)

    Thanks
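
    If the parallel slaves are what drives the temp usage, one mitigation consistent with that note is to lower the job's parallelism from Data Pump's interactive mode; a sketch - the job name is a placeholder (query DBA_DATAPUMP_JOBS for the real one):

    # Attach to the running import and reduce its degree of parallelism
    impdp system attach=SYS_IMPORT_SCHEMA_01
    Import> parallel=1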
  • 9. Re: impdp network_link keeps filling the temp tablespace
    1000827 Newbie
    Hopefully I can shed some light on this issue.
    OK - I have not used Data Pump before, and we are moving to new servers in a new data center while upgrading from 10g to 11g. It was suggested I use Data Pump with a network link to move the data (we don't have the disk space to do exports). The first database I tried to move was for an application we no longer use; the data is only held for audits, so we have minimized the amount of disk space allocated to this database. I started running the job and immediately got hit by errors:
    ORA-39171: Job is experiencing a resumable wait.
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    So I allocated more disk space to the TEMP tablespace and got the same errors again. I kept getting the error and kept allocating space until the TEMP tablespace was at 30 GB and I had used up all the disk space allocated for this database. I did some searches and came to this thread looking for answers, and found none. So I did some research and some trial and error, and eventually moved the data successfully with a smaller TEMP tablespace - here is how.
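
    While the job sits in a resumable wait like this, the suspended statement and the exact error are visible in the DBA_RESUMABLE view; a minimal sketch of checking it, using only that standard view:

    -- Show suspended resumable statements and the error that stopped them
    SELECT name, status, suspend_time, error_msg
      FROM dba_resumable
     WHERE status = 'SUSPENDED';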

    Some things I noticed about how Data Pump works:
    1) It appears Data Pump (at least with the network link) will copy the largest table first, smallest table last.
    2) The job will sit forever waiting for more temp space when it runs out (check the logs)
    3) The temp space is used to build indexes on the tables.

    Here is what I did to get around this issue:
    First I did an exp of the source database with ROWS=N (no data, just the schema structure). Then I did an imp of that dump into the target to build all tables and indexes. (I know exp is deprecated with 11g - but I ran it on the 10g side.)
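
    A minimal sketch of those two commands; the connect strings, dump file name, and SCOTT schema are placeholders, not taken from the original post:

    # Export the schema definitions only (ROWS=N skips the data)
    exp system@source_db file=scott_ddl.dmp owner=scott rows=n

    # Import the structure into the target, ignoring "object already exists" errors
    imp system@target_db file=scott_ddl.dmp fromuser=scott touser=scott ignore=y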
    Then I ran the following SQL to determine tables with the largest total index space used:
    SELECT owner, table_name, gigs
      FROM (SELECT owner, table_name,
                   ROUND(SUM(bytes)/107374182.4)/10 gigs  -- bytes to GB, one decimal place
              FROM (SELECT i.table_name, i.owner, s.bytes
                      FROM dba_indexes i, dba_segments s
                     WHERE s.segment_name = i.index_name
                       AND s.owner = i.owner
                       AND s.segment_type = 'INDEX'
                    UNION ALL
                    SELECT l.table_name, l.owner, s.bytes
                      FROM dba_lobs l, dba_segments s
                     WHERE s.segment_name = l.index_name
                       AND s.owner = l.owner
                       AND s.segment_type = 'LOBINDEX'
                   ) s1
             WHERE owner = UPPER('&owner')
             GROUP BY owner, table_name
           ) s2
     WHERE gigs > 2  -- no need to see less than 2 GB
     ORDER BY gigs DESC;

    OWNER  TABLE_NAME  GIGS
    -----  ----------  ----
    SCOTT  PY          24.4
    SCOTT  PD          23.5
    SCOTT  PT          12.8
    SCOTT  PH           4.3
    SCOTT  PC           3.7

    In my case I had 2 tables with total indexes over 20 gig (24.4 and 23.5) and one table at 12.8 gig. The next largest was 4.3 gig.

    So I dropped all the indexes on the top 3 tables (except for the primary keys). When I ran the import again, the largest the TEMP tablespace got was 6 GB, and the import worked. Using SQL*Plus I then recreated the indexes I had dropped (TEMP never got over 6 GB), and all is now well.
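
    If you try this, it is worth capturing the index DDL before dropping anything. A minimal sketch using DBMS_METADATA; the SCOTT schema and table names are placeholders, and skipping unique indexes is a rough way to leave the primary keys alone:

    -- Spool CREATE INDEX statements for the big tables before dropping them
    SET LONG 100000
    SET PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner)
      FROM dba_indexes
     WHERE owner = 'SCOTT'
       AND table_name IN ('PY', 'PD', 'PT')
       AND uniqueness = 'NONUNIQUE';  -- leave primary key (unique) indexes in place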

    BTW - if you want to know the order in which the tables will be imported, use the following to list them by actual table size:
    SELECT owner, table_name, gigs
      FROM (SELECT owner, table_name,
                   ROUND(SUM(bytes)/107374182.4)/10 gigs  -- bytes to GB, one decimal place
              FROM (SELECT segment_name table_name, owner, bytes
                      FROM dba_segments
                     WHERE segment_type = 'TABLE'
                    UNION ALL
                    SELECT l.table_name, l.owner, s.bytes
                      FROM dba_lobs l, dba_segments s
                     WHERE s.segment_name = l.segment_name
                       AND s.owner = l.owner
                       AND s.segment_type = 'LOBSEGMENT'
                   ) s1
             WHERE owner = UPPER('&owner')
             GROUP BY owner, table_name
           ) s2
     WHERE gigs > 1
     ORDER BY gigs DESC;

    OWNER  TABLE_NAME  GIGS
    -----  ----------  ----
    SCOTT  PD          13.1
    SCOTT  PY          12.5
    SCOTT  PT          11.3
    SCOTT  PH           2
    SCOTT  PM           1.4
    SCOTT  PC           1

    Warning: I don't "KNOW" that this is gospel, but this is what I observed.
