15 Replies · Latest reply on Jul 7, 2006 5:18 AM by Billy~Verreynne
      • 15. Re: CURSORs - good or bad
        Billy~Verreynne
        > One is it seems two of you are contradictory and that
        might just be attributable to personal preference vs
        one way actually being better than another.

        Simply put: a cursor itself (the construct in the db engine) is not a bad thing. All SQLs fired off against the database wind up as cursors internally.

        The "problem" is how explicit cursors (the client explicitly coding the opening, fetching and closing of the cursor) are used. They are often used for row-by-row processing, which is very inefficient.

        Simple example: open a cursor for 'SELECT * FROM dept'. Inside that loop, code a nested cursor loop that does a 'SELECT * FROM emp WHERE deptid = :0'.

        A SQL join is many times faster and superior.
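        A rough sketch of the contrast (the EMP/DEPT table and column names here, like deptid, are just taken from the example above - adjust to your actual schema):

        ```sql
        -- Row-by-row (slow): nested explicit cursor loops,
        -- one SQL call per department
        BEGIN
          FOR d IN (SELECT * FROM dept) LOOP
            FOR e IN (SELECT * FROM emp WHERE deptid = d.deptid) LOOP
              NULL;  -- process a single emp row at a time
            END LOOP;
          END LOOP;
        END;
        /

        -- Set-based (fast): one SQL join does the same work in a single pass
        SELECT d.deptid, e.*
        FROM   dept d, emp e
        WHERE  e.deptid = d.deptid;
        ```

        The nested loop version makes the database re-execute the inner SELECT once per outer row; the join lets the optimizer pick the best access path for the whole set.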

        Which is why I said maximise SQL and minimise PL/SQL. Some (many?) developers still think of rows as "file based records" - instead of using SQL to perform the data processing, they reduce SQL to a primitive I/O call to fetch and write data, and write the rest of the code in whatever programming language they use.

        > Personally, I tend toward pushing for a few large
        rollbacks and doing everything in a single update
        statement.

        Excellent. You are treating the business transaction as a business transaction in the database. Changing the business transaction into multiple commits can and does lead to data inconsistencies in the database.

        > Unfortunately that has gotten me in hot
        water when it brings a client's db to it's knees. The
        recent one started giving me ORA-01555 errors.

        As mentioned, this is usually caused by incorrect committing - like committing inside a loop. The problem is often fetching rows across commits. E.g. fetching rows from table EMP (read-only) and updating them - thus very quickly "aging" your read-only view of EMP, with the result that Oracle can no longer provide the consistent snapshot (version) of what EMP was when you opened your read-only EMP cursor.

        In this example the solution is to lock EMP for updating - as that is exactly what the cursor loop is doing.
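        A minimal sketch of that fix (the sal column and the 10% raise are just made-up details for illustration):

        ```sql
        -- Declare the intent to update up front: FOR UPDATE takes the
        -- row locks when the cursor is opened, instead of pretending
        -- the cursor is read-only and then updating behind its back.
        DECLARE
          CURSOR c_emp IS
            SELECT * FROM emp FOR UPDATE;
        BEGIN
          FOR r IN c_emp LOOP
            UPDATE emp
            SET    sal = sal * 1.1
            WHERE  CURRENT OF c_emp;
          END LOOP;
          COMMIT;  -- one commit for the whole business transaction
        END;
        /
        ```

        Note the single COMMIT after the loop - committing inside the loop is exactly what releases the locks and invites the ORA-01555.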

        > It got
        to where single statements were taking an hour or
        more so I was forced to embed commit statements.

        Fetching across commits is actually a violation of the ANSI SQL standard. And this "commit often" approach can cause exactly that - and in turn more snapshot-too-old errors.

        Also, Oracle is very different from other RDBMS products as far as locking goes. A commit of a transaction that changed a million rows is as fast as a commit of a transaction that changed a single row. The overhead of a million row locks is the same as that of a single row lock - none. Other RDBMS products approach locking very differently: locks are expensive and a scarce resource. Because locks are a scarce resource, they are escalated to reduce their number (e.g. row locks turn into page locks turn into an exclusive table lock). None of these are issues in Oracle.

        All that Oracle asks for is properly sized rollback segments. :-)

        > Second is I'm not sure I understand the fine line
        between what is processed as SQL and what is PL/SQL.

        Have a look at PL/SQL vs SQL

        > I have heard of Tom's book and it is on my short list
        of books to get. I will probably go ahead and order
        them today or tomorrow.

        Tom's one of the few Oracle authors that... well, I should better not say more as some other Oracle authors may just get upset. :-)