15 Replies Latest reply on Jun 16, 2011 10:37 AM by SomeoneElse

    Deleting large number of records

    454379
      Hello all,

      The problem of deleting a large amount of data from a database has come up several times in this forum, but none of those threads reflects my problem.

      I have a table to which ca. 1,700,000 records are appended every day. Every day the oldest records, also ca. 1,700,000 of them, should be deleted, so the total number of records in the table stays roughly constant at ca. 0.6 billion. There is no dependency between this table and any other Oracle object, and the only operations performed on it are adding new records, removing the oldest ones, and selecting from it. One column of the table is indexed, so adding or removing data takes some time to update the index. But that is not the issue.

      The problem is deleting the oldest records. When I simply issue a DELETE command I get a message that the undo tablespace is too small, transaction rollback will not be possible and therefore the records cannot be deleted. This is a normal and expected effect, because Oracle's general philosophy is full data recovery. But in my particular case this recovery policy is not needed. I simply want to irrecoverably delete a large amount of data. If something goes wrong during the transaction it is not a problem, because such a failed operation can easily be corrected at the application (not Oracle) level.

      I couldn't find on the web how to "switch off" rollback, undo, or whatever it is called. Oracle gurus told me that it is probably not possible and that the only solution is to increase the undo tablespace. But I can't believe there is no mechanism in Oracle that allows ultimate and irrecoverable data deletion. Deleting records in smaller sets, of 100,000 for example, takes too long because the indexes must be updated after every commit. In the ideal case I would like the oldest records to be deleted, then the new ones added, and finally the index updated and the whole operation committed.

      Do you have any idea how to achieve this? Or should I organize my data in a different way?

      Any hints will be greatly appreciated.

      Greg
        • 1. Re: Deleting large number of records
          Nicolas.Gasparotto
          Hi,

          You need to use a partitioned table.
          Create the table with partitions by day (or by week or by month, matching the period you want to delete/drop), and instead of deleting rows, drop the old partition you no longer want to keep. Alternatively, you can take its tablespace offline; that way you keep the data without any access to it.
          One more thing: you need to anticipate the next period by creating its partition in advance.

          Nicolas.

          PS1: you can see some samples here: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_73a.htm
          PS2: to use partitioning you need Enterprise Edition with an additional paid option.

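          A minimal sketch of the approach (table, column, and partition names are illustrative, not from the thread): a range-partitioned table with one partition per day, where the daily purge becomes a DROP PARTITION instead of a DELETE.

```sql
-- Illustrative: a table range-partitioned by day
CREATE TABLE log_data (
  log_date DATE NOT NULL,
  payload  VARCHAR2(200)
)
PARTITION BY RANGE (log_date) (
  PARTITION p20070101 VALUES LESS THAN (TO_DATE('2007-01-02','YYYY-MM-DD')),
  PARTITION p20070102 VALUES LESS THAN (TO_DATE('2007-01-03','YYYY-MM-DD'))
);

-- Each day: create the next day's partition in advance ...
ALTER TABLE log_data ADD PARTITION p20070103
  VALUES LESS THAN (TO_DATE('2007-01-04','YYYY-MM-DD'));

-- ... and drop the oldest partition instead of deleting its rows
ALTER TABLE log_data DROP PARTITION p20070101;
```

          DROP PARTITION is DDL, so it generates almost no undo regardless of how many rows the partition holds. Note that if the indexed column is not the partition key, a global index would be invalidated by the drop unless you add UPDATE GLOBAL INDEXES to the statement.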
          • 2. Re: Deleting large number of records
            341279
            Hi, I agree with Nicolas on using partitioning.

            But first, simply ask your preferred DBA to size the UNDO tablespace according to your business needs.
            You can't switch off rollback: undo is needed to satisfy the ACID requirements.
            • 3. Re: Deleting large number of records
              ScottZheng
              Using partitioning is a good way.

              If you don't want to use partitioning, and the amount of data to delete is much larger than the amount to keep (for example, 10,000,000 rows in the table of which 8,000,000 need to be deleted), you can do the following:
              1. Create a new table:
              create table <newTable> as select * from <oldTable> where ...
              2. Drop the old table.
              3. Rename <newTable> to <oldTable>.
              4. Recreate the indexes.

              This way uses only a little of the UNDO tablespace.
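              The four steps above might look like this (placeholder names; note that the WHERE clause selects the rows you want to retain, not the ones you want to delete):

```sql
-- 1. Copy the rows to keep
CREATE TABLE my_table_new AS
  SELECT * FROM my_table
  WHERE  created_at >= SYSDATE - 1;   -- keep-predicate, illustrative

-- 2. and 3. Swap the tables
DROP TABLE my_table;
RENAME my_table_new TO my_table;

-- 4. Recreate the index (CTAS copies data only, so indexes,
--    constraints, grants and triggers must be recreated by hand)
CREATE INDEX my_table_ix ON my_table (created_at);
```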
              • 4. Re: Deleting large number of records
                164995
                There are two ways to solve this problem:
                CASE 1:
                Create a new table, as user477610 mentioned.

                CASE 2:
                DELETE FROM myfata$$ WHERE rownum < 100000; -- Start with a small number and gradually increase it to see how many rows you can delete in one shot without hitting the UNDO problem.

                The most efficient in your case is CASE 1: determine a value the table can grow up to, then take a maintenance window and knock the extra rows off.
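                CASE 2 can be scripted as a loop that deletes one batch, commits to release the undo, and repeats until nothing is left (a sketch; the age predicate and batch size are illustrative):

```sql
BEGIN
  LOOP
    DELETE FROM myfata$$
    WHERE  created_at < SYSDATE - 1   -- illustrative age predicate
    AND    ROWNUM <= 100000;          -- batch size: tune against your undo
    EXIT WHEN SQL%ROWCOUNT = 0;       -- nothing left to delete
    COMMIT;                           -- releases the undo for this batch
  END LOOP;
  COMMIT;
END;
/
```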
                • 5. Re: Deleting large number of records
                  454379
                  Hello again,

                  Thank you all for your replies. It seems that deleting a large amount of data in Oracle is a challenge :).

                  The idea of partitioned table seems to be the best solution. I will implement it.

                  Thanks again,

                  Greg
                  • 6. Re: Deleting large number of records
                    JayMunshi
                    Hello,

                    As everyone above says, partitioning is the feature for you. You can simply truncate your partition. A truncate does not generate rollback, so there are no issues with increasing your undo retention etc.

                    regards,
                    Jay Munshi
                    http://jaydba.tripod.com
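                    Assuming the daily-partitioned layout suggested earlier in the thread (names illustrative), the purge is a single DDL statement:

```sql
-- Empties one partition almost instantly, with minimal undo
ALTER TABLE log_data TRUNCATE PARTITION p20070101;
```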
                    • 7. Re: Deleting large number of records
                      JayMunshi
                      And I think you don't have a problem with Oracle but with your design. 0.6 billion rows and no partitions???? :-)


                      regards,
                      Jay Munshi
                      http://jaydba.tripod.com
                      • 8. Re: Deleting large number of records
                        571795
                        Regarding the undo tablespace:

                        You can switch off automatic undo management by commenting out the line UNDO_MANAGEMENT=AUTO (changing it to #UNDO_MANAGEMENT=AUTO) in init.ora or in your pfile.

                        After changing this parameter the database must be restarted.

                        You can delete a large amount of data from a table with the TRUNCATE command, which does not require any rollback segment or undo tablespace, because it auto-commits.

                        If you are not comfortable with the TRUNCATE command, you can write a PL/SQL procedure that deletes one record at a time and commits, with the help of a cursor.

                        I think that solves your problem.
                        • 9. Re: Deleting large number of records
                          622219
                          I've been working on different means to delete large amounts of data.

                          The undo/rollback problem is easily worked around by using a PL/SQL block that opens a cursor and deletes a number of rows at a time: just create a loop that deletes one row at a time and commits every n rows.

                          The bigger challenge is deleting a significant amount of data without disrupting transactions. So far I'm using a shell script that loops a PL/SQL block which deletes n rows and exits, waiting a couple of seconds between iterations to let waiting transactions get through.
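                          A sketch of that shell loop (the connection string, table name, batch size, and sleep interval are all illustrative; each pass deletes one batch, prints the row count, then pauses so waiting transactions can get through):

```shell
#!/bin/sh
while :; do
  deleted=$(sqlplus -s scott/tiger <<'EOF'
SET SERVEROUTPUT ON FEEDBACK OFF HEADING OFF
BEGIN
  DELETE FROM my_table
  WHERE  created_at < SYSDATE - 1 AND ROWNUM <= 10000;
  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT);
  COMMIT;
END;
/
EOF
)
  deleted=$(echo "$deleted" | tr -d '[:space:]')
  [ "$deleted" = "0" ] && break   # nothing left to delete
  sleep 2                         # let waiting transactions through
done
```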
                          • 10. Re: Deleting large number of records
                            Tanel Poder
                            "The problem is in deleting oldest records. When I simply issue a DELETE command I get a message that undo tablespace is too small, transaction rollback will not be possible and therefore records cannot be deleted."
                            What is the actual error message text? (the ORA- code?)
                            "Deleting records in smaller sets, by 100 000 for example, takes too long because indexes must be updated after every commit."
                            This is a myth. The indexes are updated during DML execution anyway, regardless of the commit. Deleting 100,000 rows at a time would probably not make your delete much slower at all (the delete's CPU and physical I/O time will be far more significant than the 17 commits).

                            --
                            Tanel Poder
                            http://blog.tanelpoder.com
                            • 11. Re: Deleting large number of records
                              Tanel Poder
                              Note that you need to buy partitioning licenses for Oracle if you don't already have them.
                              • 12. Re: Deleting large number of records
                                26741
                                Re the statement
                                "you can omit undo management facility"
                                Undo still has to be managed. If not with AUTO, manually with ROLLBACK SEGMENTs
                                created by the DBA in [preferred architecture] a dedicated Tablespace.

                                Re the statement
                                "delete large number of data from table with the help of TRUNCATE command"
                                The TRUNCATE command doesn't delete a "large number" of records. It deletes ALL records,
                                every single record in the table. There is no rollback for a truncate. Use TRUNCATE
                                only when you are sure that you really want to empty the table.

                                Re the statement
                                "PLSQL procedure syntax and delete one record at a time and commit with the help of cursor."
                                With Undo_Retention, undo space requirements can still be slightly high.
                                However, the major issues with this method are :
                                a. Increased "log file sync" waits -- every commit waits on the LGWR
                                b. Very likely "fetch across commit" issues resulting in ORA-1555 errors unless you
                                really know how to write this
                                c. Increased redo generation, because every commit generates its own redo record.
                                This recommendation will SLOW SLOW SLOW the deletes.
                                • 13. Re: Deleting large number of records
                                  Wilson Shen
                                  Hi All,

                                  I have another Idea .

                                  1. create global temporary table <newTable> on commit preserve rows as select * from <oldTable> where ... -- ON COMMIT PRESERVE ROWS is needed, or the copied rows vanish at the first commit
                                  2. truncate table <oldTable>
                                  3. alter table <oldTable> nologging -- if it is in logging mode!
                                  4. insert /*+ APPEND */ into <oldTable> select * from <newTable>
                                  5. drop table <newTable>
                                  6. alter table <oldTable> logging -- if <oldTable> was in logging mode!

                                  Regards
                                  Wilson
                                  http://www.wilsonshen.idv.tw
                                  • 14. Re: Deleting large number of records
                                    869321
                                    I need to perform this delete activity. Kindly help me.

                                    Actual data in table: 6,186,702 rows
                                    Need to delete:       1,340,224 rows
                                    Data to keep:         4,846,478 rows

                                    Delete query:
                                    delete from table_name where field1 IS NULL and field2 IS NULL and field3 between TO_DATE('30-JAN-2000','DD-MON-YYYY') and TO_DATE('30-DEC-2010','DD-MON-YYYY')

                                    How can I do this delete using the CTAS method?
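                                    A CTAS sketch for that query would keep the logical complement of the DELETE predicate (the new table name is illustrative; note the extra field3 IS NULL branch, which retains rows the DELETE would not have touched):

```sql
CREATE TABLE table_name_new AS
  SELECT *
  FROM   table_name
  WHERE  field1 IS NOT NULL
  OR     field2 IS NOT NULL
  OR     field3 IS NULL
  OR     field3 NOT BETWEEN TO_DATE('30-JAN-2000','DD-MON-YYYY')
                        AND TO_DATE('30-DEC-2010','DD-MON-YYYY');

DROP TABLE table_name;
RENAME table_name_new TO table_name;
-- then recreate indexes, constraints, grants and triggers
```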