42 Replies. Latest reply: Aug 3, 2008 1:14 PM by riyaj
      • 30. Re: Import running Very Slow!!!
        Read about latches here:


        Otherwise, in short, from AskTom:

        A latch is a serialization mechanism. In order to gain access to a shared data
        structure, you must "latch" that structure. That will prevent others from modifying it
        while you are looking at it or modifying it yourself. It is a programming tool.

        A latch is a permanent thing in Oracle -- when the Oracle developers ship the database, the
        entire universe of latches is known. It is compiled into the database. The number of
        some latches varies -- the types of latches are fixed in the code itself.

        There is one type of latch -- they are just a serialization device. Many things are
        PROTECTED by latches -- the library cache, the buffer cache, and so on.


        As for the import: if you aren't able to stop it and run it again, then not much can be done, I think.

        The only thing you can do is let it run to completion.
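        The serialization idea described above is roughly analogous to a mutex in application code. This is only a sketch of the concept -- an Oracle latch is a low-level, lightweight primitive inside the SGA, not a Python lock:

```python
import threading

# A shared structure and a "latch" protecting it. Analogy only:
# acquiring the latch serializes access, so concurrent modifiers
# never corrupt the structure.
counter = 0
latch = threading.Lock()

def touch_shared_structure():
    global counter
    for _ in range(100_000):
        with latch:          # acquire the latch before modifying
            counter += 1     # the shared structure

threads = [threading.Thread(target=touch_shared_structure) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: every modification was serialized by the latch
```

        Without the lock, the four threads could interleave their read-modify-write cycles and lose updates; with it, the final count is always exact.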

        Amardeep Sidhu
        • 31. Re: Import running Very Slow!!!
          Thanks again....

          You mean, if I can stop it and run it again, then the import would be faster?

          Right now, I'm just waiting for its completion...


          • 32. Re: Import running Very Slow!!!

            I meant if you could break it and as Justin said, pre-create the table in some locally managed tablespace and then start it again.

            Amardeep Sidhu
            • 33. Re: Import running Very Slow!!!
              Okay... :-)

              Clear now.

              Any idea how much time it will take for 1.1 million extents to be allocated?

              A rough idea will do.


              • 34. Re: Import running Very Slow!!!
                You're looking at approximately the same amount of time to allocate 1.1 million extents as it took to deallocate them.

                Of course, since you are monitoring the load, you should be able to figure out the number of extents being allocated every minute/ hour/ etc. and use that rate to extrapolate to a more accurate time frame.
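                A sketch of that extrapolation. The allocation rate and elapsed time below are made-up illustration numbers, not measurements from this system; only the 1.1 million extent count comes from the thread:

```python
# Hypothetical monitoring observations (assumptions for illustration).
total_extents = 1_100_000      # extents the table needs (from the thread)
allocated_so_far = 33_000      # assumed: extents allocated so far
elapsed_hours = 48             # assumed: time taken to allocate them

# Extrapolate the remaining time from the observed rate.
rate_per_hour = allocated_so_far / elapsed_hours
remaining_hours = (total_extents - allocated_so_far) / rate_per_hour

print(f"rate: {rate_per_hour:.0f} extents/hour")
print(f"remaining: {remaining_hours / 24:.1f} days")
```

                Plugging in the real numbers observed every minute or hour would tighten the estimate considerably.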

                • 35. Re: Import running Very Slow!!!

                  At this speed, it'll probably take 40 days to complete the import.

                  48 hours gone and only 1.3 GB of data imported.

                  The size of the table is 44 GB... :-(

                  So we will probably cancel the import, and a new strategy will be planned accordingly.

                  For the new strategy, these are the constraints:

                  1. We cannot use a Locally Managed Tablespace. We have to import into the same Dictionary Managed Tablespace.

                  2. We have to complete the import within 4 days.

                  My command for import was as follows:
                  imp schema/passwd
                  file=/oracle/XX/97/imp/TABLENB.1.exp,/oracle/XX/98/imp/TABLENB.2.exp,/oracle/XX/99/imp/TABLENB.3.exp
                  log=/oracle/XX/97/exp/TABLENB.imp.log tables=TABLENB feedback=10000 indexes=N ignore=Y

                  Are there any changes to the parameters that you would suggest?

                  Waiting for your reply...

                  • 36. Re: Import running Very Slow!!!
                    There is very little you can do in your import command to change anything. The problem here is that the table definition is horribly inappropriate -- you'll have to change the table's storage clause. A 44 GB table spread over 1.1 million extents implies that your average extent size is roughly 42 KB, which is ridiculously undersized.
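                    The arithmetic behind that estimate, using the sizes quoted in the thread:

```python
# Figures from the thread: a 44 GB table spread over 1.1 million extents.
table_gb = 44
extents = 1_100_000

# Average extent size in KB (1 GB = 1024 * 1024 KB).
avg_extent_kb = table_gb * 1024 * 1024 / extents
print(f"average extent size: {avg_extent_kb:.1f} KB")  # ~41.9 KB
```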

                    You'll have to
                    - Create the table outside of the import. As part of this CREATE TABLE command, you'll specify appropriate storage clauses (for example, INITIAL = NEXT = 100 MB and PCTINCREASE = 0).
                    - Then you run the import with ignore=y to ignore the fact that the table already exists.

                    • 37. Re: Import running Very Slow!!!

                      Tell us what you are trying to achieve, business-wise.
                      If it is a 44 GB table, then it is probably faster to use transportable tablespaces to migrate the data from production to development. Is that something you can consider?
                      1. For the import: while it is running, can you create a statspack report and post the top wait events, say for 30 minutes or so? We can't possibly know why the import is slow without knowing where the time is spent.
                      2. Are you importing locally? Meaning, are your dump file and database on the same server? Going over the network is very costly, and the import will slow down due to excessive network calls.
                      3. Are there any LOB columns in this table? Apologies if you have posted the table structure already.
                      4. You could also do a little better with the import itself.
                      Specify buffer=104857600 commit=Y in your import. The default import buffer is probably too small and can cause unnecessary performance issues. But if you are importing remotely, this is a moot point.

                      blog: http://orainternals.wordpress.com
                      • 38. Re: Import running Very Slow!!!
                        Hi Riyaj,

                        Business-wise, we are trying to import the data of a single table from Production to Development. We first truncated the table, which had obsolete data in it, and are now trying to import data into it.
                        1. Can you create a statspack report and post the top wait events, say for 30 minutes or so?
                        No, we cannot create a statspack report. (It's a long process to get approval for such a change.)
                        2. Yes, we are importing locally.
                        3. No, there are no LOB columns in this table.
                        The table structure is as follows:

                        Name Null? Type
                        ----------------------------------------- -------- ----------------------------
                        MANDT NOT NULL VARCHAR2(3)
                        VBELV NOT NULL VARCHAR2(10)
                        POSNV NOT NULL VARCHAR2(6)
                        VBELN NOT NULL VARCHAR2(10)
                        POSNN NOT NULL VARCHAR2(6)
                        VBTYP_N NOT NULL VARCHAR2(1)
                        RFMNG NOT NULL NUMBER(15,3)
                        MEINS NOT NULL VARCHAR2(3)
                        RFWRT NOT NULL NUMBER(15,2)
                        WAERS NOT NULL VARCHAR2(5)
                        VBTYP_V NOT NULL VARCHAR2(1)
                        PLMIN NOT NULL VARCHAR2(1)
                        TAQUI NOT NULL VARCHAR2(1)
                        ERDAT NOT NULL VARCHAR2(8)
                        ERZET NOT NULL VARCHAR2(6)
                        MATNR NOT NULL VARCHAR2(18)
                        BWART NOT NULL VARCHAR2(3)
                        BDART NOT NULL VARCHAR2(2)
                        PLART NOT NULL VARCHAR2(1)
                        STUFE NOT NULL VARCHAR2(2)
                        LGNUM NOT NULL VARCHAR2(3)
                        AEDAT NOT NULL VARCHAR2(8)
                        FKTYP NOT NULL VARCHAR2(1)
                        BRGEW NOT NULL NUMBER(15,3)
                        GEWEI NOT NULL VARCHAR2(3)
                        VOLUM NOT NULL NUMBER(15,3)
                        VOLEH NOT NULL VARCHAR2(3)
                        FPLNR NOT NULL VARCHAR2(10)
                        FPLTR NOT NULL VARCHAR2(6)
                        RFMNG_FLO NOT NULL FLOAT(126)
                        RFMNG_FLT NOT NULL FLOAT(126)
                        CMETH VARCHAR2(1)
                        ABGES FLOAT(126)
                        KZBEF VARCHAR2(1)
                        NTGEW NUMBER(13,3)
                        SOBKZ VARCHAR2(1)
                        SONUM VARCHAR2(16)
                        VRKME VARCHAR2(3)
                        4. As per Metalink:
                        COMMIT=Y doesn't help to speed up the import, because it means that for tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=Y, import commits these tables after each row.

                        Justin, please have a look:

                        My Question:

                        Right now, the database is in ARCHIVELOG mode.

                        Will stopping the import now and putting the database in NOARCHIVELOG mode help speed up the import?

                        • 39. Re: Import running Very Slow!!!
                          Hi Nick

                          Where did you hear that DATE columns are not imported with array inserts? AFAIK, only LOB columns suffer from single-row processing with the import tool.

                          I just looked at the table structure. You should really test with a 100 MB buffer and check performance. You don't need to kill the current import; measure it in another database and see whether it is worthwhile to kill this one and restart. Also, if you don't specify commit=Y, chances are you may eventually run out of undo space, since the whole transaction stays active, right?
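                          To get a feel for what a 100 MB buffer means for array inserts: imp sizes its insert array roughly as buffer divided by row length. The average row size below is a rough guess from the column list in this thread, not a measured value:

```python
# buffer=104857600 is the 100 MB value suggested earlier in the thread.
buffer_bytes = 104_857_600
avg_row_bytes = 200            # assumed: rough average row size for this table

# Approximate number of rows imp could batch into one array insert.
rows_per_array_insert = buffer_bytes // avg_row_bytes
print(f"rows per array insert: ~{rows_per_array_insert}")
```

                          Even if the real row size is a few times larger, the batch is still hundreds of thousands of rows, versus far smaller batches with the default buffer.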

                          BTW, installing statspack isn't such a huge deal. But I can understand if there is strict change control in place. Instead, use the snapper tool from www.tanelpoder.com and check the importing session. snapper does not create any objects; it prints session-level differences between two snapshots.

                          blog: http://orainternals.wordpress.com
                          • 40. Re: Import running Very Slow!!!
                            Hi Riyaj,
                            Where did you hear that DATE columns are not imported with array inserts?
                            According to Oracle Metalink, COMMIT=Y doesn't help to speed up the import because it means that for tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done.

                            We have sufficient free space in Undo tablespace, so that's not an issue.


                            • 41. Re: Import running Very Slow!!!
                              Running in ARCHIVELOG mode probably has very little to do with the performance you're seeing. If your system is totally I/O bound, there may be some benefit from eliminating the I/O of the archiver process, but running in ARCHIVELOG mode is generally done for substantial reasons (e.g. the ability to recover, or to replicate data via technologies like Streams). I'd strongly advise you not to disable ARCHIVELOG mode.

                              As I've said before, you need to change the storage attributes of the table. Trying to create 1.1 million extents in a DMT so that you can store 44 GB of data is just not reasonable.

                              • 42. Re: Import running Very Slow!!!

                                I tested the buffer parameter for tables with a DATE column and blogged about it: http://orainternals.wordpress.com/2008/08/03/import-performance-does-import-of-date-column-resorts-to-single-row-inserts-like-lob-columns/. Tables with DATE columns use array inserts, so performance could be improved by specifying the buffer parameter for your import as well.

                                As we know, LOB columns are imported with single row insert statements.

                                blog: http://orainternals.wordpress.com