This discussion is archived
9 Replies Latest reply: Dec 24, 2008 11:49 AM by CharlesHooper

enqueue and logfile parallel write

591309 Explorer
Hi all,
9.0.1.
I am encountering enqueue and log file parallel write waits.
How should I go about troubleshooting this? Below is a small snapshot of the Statspack report.
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:       688M      Std Block Size:         8K
           Shared Pool Size:       512M          Log Buffer:     3,072K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            668,119.72          1,246,573.61
              Logical reads:              4,287.71              7,999.98
              Block changes:              5,197.72              9,697.87
             Physical reads:                643.39              1,200.44
            Physical writes:                 41.31                 77.07
                 User calls:                 26.69                 49.79
                     Parses:                  4.44                  8.28
                Hard parses:                  0.02                  0.05
                      Sorts:                  0.90                  1.67
                     Logons:                  0.04                  0.08
                    Executes:                 11.90                 22.20
               Transactions:                  0.54

  % Blocks changed per Read:  121.22    Recursive Call %:    22.37
 Rollback per transaction %:   27.33       Rows per Sort:    23.12

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   85.00    In-memory Sort %:  100.00
            Library Hit   %:   99.84        Soft Parse %:   99.45
         Execute to Parse %:   62.71         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   78.26     % Non-Parse CPU:  100.00

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:    9.43    9.50
    % SQL with executions>1:   57.75   57.36
  % Memory for SQL w/exec>1:   57.82   57.34

Top 5 Wait Events
~~~~~~~~~~~~~~~~~                                            Wait     % Total
Event                                               Waits  Time (s)   Wt Time
-------------------------------------------- ------------ ----------- -------
enqueue                                                83         208   68.33
resmgr:waiting in end wait                            265          25    8.18
log file parallel write                             1,939          22    7.25
resmgr:waiting in check                                 8          17    5.54
db file sequential read                            15,185          16    5.18
          -------------------------------------------------------------
Wait Events for DB: ORCL  Instance: orcl  Snaps: 6 -7
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts  Time (s)    (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
enqueue                                83         58        208   2509      0.1
resmgr:waiting in end wait            265          0         25     94      0.4
log file parallel write             1,939          0         22     11      3.2
resmgr:waiting in check                 8          0         17   2112      0.0
db file sequential read            15,185          0         16      1     24.9
db file scattered read            103,312          0          7      0    169.1
log file sync                         430          0          3      6      0.7
control file parallel write           456          0          3      6      0.7
log file sequential read              714          0          2      3      1.2
process startup                        50          0          1     19      0.1
log buffer space                        3          0          1    235      0.0
latch free                             28         28          0     11      0.0
log file switch completion              8          0          0     23      0.0
single-task message                    50          0          0      2      0.1
control file sequential read          428          0          0      0      0.7
undo segment extension                912        911          0      0      1.5
log file single write                  14          0          0      1      0.0
buffer busy waits                      97          0          0      0      0.2
SQL*Net more data to client            29          0          0      0      0.0
resmgr:waiting in check2                2          0          0      1      0.0
db file parallel write                462        462          0      0      0.8
SQL*Net break/reset to clien            8          0          0      0      0.0
async disk IO                         196          0          0      0      0.3
direct path read                       84          0          0      0      0.1
direct path write                      84          0          0      0      0.1
LGWR wait for redo copy                 1          0          0      0      0.0
virtual circuit status             27,450         50      3,742    136     44.9
SQL*Net message from client        29,376          0      1,570     53     48.1
SQL*Net more data from clien            8          0          0     39      0.0
SQL*Net message to client          29,417          0          0      0     48.1
          -------------------------------------------------------------
Background Wait Events for DB: ORCL  Instance: orcl  Snaps: 6 -7
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts  Time (s)    (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file parallel write             1,939          0         22     11      3.2
control file parallel write           456          0          3      6      0.7
log file sequential read              714          0          2      3      1.2
process startup                        50          0          1     19      0.1
rdbms ipc reply                       127          0          0      0      0.2
control file sequential read          392          0          0      0      0.6
latch free                              2          2          0     11      0.0
log file single write                  14          0          0      1      0.0
db file scattered read                  5          0          0      4      0.0
db file sequential read                 7          0          0      2      0.0
buffer busy waits                      28          0          0      0      0.0
db file parallel write                462        462          0      0      0.8
async disk IO                         196          0          0      0      0.3
direct path read                       84          0          0      0      0.1
direct path write                      84          0          0      0      0.1
LGWR wait for redo copy                 1          0          0      0      0.0
rdbms ipc message                   2,931      1,082      4,306   1469      4.8
pmon timer                            416        410      1,109   2666      0.7
smon timer                             43          0      1,105  25693      0.1
          -------------------------------------------------------------
SQL ordered by Gets for DB: ORCL  Instance: orcl  Snaps: 6 -7
-> End Buffer Gets Threshold:   10000
-> Note that resources reported for PL/SQL includes the resources used by
   all SQL statements called within the PL/SQL code.  As individual SQL
   statements are also reported, it is possible and valid for the summed
   total % to exceed 100

                                                     CPU      Elapsd
  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
      3,301,423           91       36,279.4   67.5     0.01      0.01 4086952936
UPDATE IPAO.EMP_MAST SET OSCAL_CODE=:b1 WHERE :b1 < 600

        902,040           30       30,068.0   18.5     0.00      0.00 2333288512
UPDATE IPAO.MON_ACT SET CHEQUE_NO=:b1,CHEQUE_DT=:b2,CHEQUE_AMT=:
b3 WHERE V_NO = :b4  AND V_DATE = :b5

        251,558            1      251,558.0    5.1     0.63      0.64 2852308902
SELECT P.EMP_CODE,BASIC,DA_ADMISS,PAY_COMM,PER_PAY,SPE_PAY,ADD_P
AY,STA_PAY,PEN_PAY,NP_ALLW,SAL_ARR,OTH_ARR,EX_PAY_DED,GPF,ATTEND
,SUS_CODE,SUS_DAYS,STOP_PAY,SOC_DED,LICRD_DED,DP,E.DELETE_FLAG,E
.OP_CODE,E.PC_STATUS,E.SCAL_CODE   FROM IPAO.MON_PAY P,IPAO.EMP_
MAST E  WHERE P.EMP_CODE = E.EMP_CODE  AND P.TREA_CODE = :b1  AN

         48,064           32        1,502.0    1.0     0.60      0.60 1363997853
SELECT CHE_NO   FROM CHEQ  WHERE CHE_NO = :b1

         37,123            1       37,123.0    0.8     0.00      0.00 1842892218
DECLARE    outfile UTL_FILE.FILE_TYPE;    m_op         char(2) :
= '01';    m_month      char(3) := 'DEC';    m_year       char(4
) := '2008';    m_select     char(1) := 'P';    m_ddo        cha
r(4) := '2536';    cursor c1 is                 select emp_code,
Instance Activity Stats for DB: ORCL  Instance: orcl  Snaps: 6 -7

Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session                      11,258            9.9         18.4
CPU used when call started                    11,258            9.9         18.4
CR blocks created                              5,762            5.1          9.4
DBWR checkpoint buffers written               47,006           41.2         76.9
DBWR checkpoints                                   7            0.0          0.0
DBWR transaction table writes                     77            0.1          0.1
DBWR undo block writes                        40,408           35.5         66.1
SQL*Net roundtrips to/from client             58,503           51.3         95.8
background checkpoints completed                   7            0.0          0.0
background checkpoints started                     7            0.0          0.0
background timeouts                            1,103            1.0          1.8
buffer is not pinned count                 1,423,293        1,248.5      2,329.5
buffer is pinned count                       441,038          386.9        721.8
bytes received via SQL*Net from c          3,714,390        3,258.2      6,079.2
bytes sent via SQL*Net to client      10,470,217,793    9,184,401.6 ############
calls to get snapshot scn: kcmgss             14,308           12.6         23.4
calls to kcmgas                               68,754           60.3        112.5
calls to kcmgcs                                  455            0.4          0.7
change write time                              1,480            1.3          2.4
cleanouts and rollbacks - consist                121            0.1          0.2
cleanouts only - consistent read               1,865            1.6          3.1
cluster key scan block gets                       41            0.0          0.1
cluster key scans                                  9            0.0          0.0
commit cleanout failures: buffer                 479            0.4          0.8
commit cleanout failures: callbac                 23            0.0          0.0
commit cleanout failures: cannot                   2            0.0          0.0
commit cleanouts                              68,320           59.9        111.8
commit cleanouts successfully com             67,816           59.5        111.0
consistent changes                           236,143          207.1        386.5
consistent gets                            1,839,676        1,613.8      3,010.9
consistent gets - examination                532,289          466.9        871.2
cursor authentications                            11            0.0          0.0
data blocks consistent reads - un            236,143          207.1        386.5
db block changes                           5,925,401        5,197.7      9,697.9
db block gets                              3,048,309        2,674.0      4,989.1
deferred (CURRENT) block cleanout             66,947           58.7        109.6
enqueue conversions                              163            0.1          0.3
enqueue releases                           2,943,720        2,582.2      4,817.9
enqueue requests                           2,943,714        2,582.2      4,817.9
enqueue timeouts                                   0            0.0          0.0
enqueue waits                                     30            0.0          0.1
execute count                                 13,565           11.9         22.2
free buffer inspected                             25            0.0          0.0
free buffer requested                        826,774          725.2      1,353.2
hot buffers moved to head of LRU                  17            0.0          0.0
immediate (CR) block cleanout app              1,986            1.7          3.3
immediate (CURRENT) block cleanou                424            0.4          0.7
leaf node splits                                  15            0.0          0.0
logons cumulative                                 47            0.0          0.1
messages received                              2,296            2.0          3.8
messages sent                                  2,296            2.0          3.8
no buffer to keep pinned count                92,414           81.1        151.3
no work - consistent read gets             1,288,209        1,130.0      2,108.4
opened cursors cumulative                      4,639            4.1          7.6
parse count (failures)                             1            0.0          0.0
parse count (hard)                                28            0.0          0.1
Tablespace IO Stats for DB: ORCL  Instance: orcl  Snaps: 6 -7
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TREASURY
       117,677     103    0.1     6.2           83        0          0    0.0
UNDOTBS
             7       0    0.0     1.0       40,943       36         99    0.3
USERS
           811       1   12.0     1.8        6,264        5          0    0.0
TOOLS
             7       0    0.0     1.0          109        0          0    0.0
TREASINDEX
            20       0    5.5     1.0           74        0          0    0.0
SYSTEM
            24       0    2.5     1.6           16        0          0    0.0
INDX
Enqueue activity for DB: ORCL  Instance: orcl  Snaps: 6 -7
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by waits desc, requests desc

                                                       Avg Wt        Wait
Eq     Requests    Succ Gets Failed Gets       Waits  Time (ms)    Time (s)
-- ------------ ------------ ----------- ----------- ----------- ------------
TX          569          568           0          30    7,356.07          221
          -------------------------------------------------------------
  • 1. Re: enqueue and logfile parallel write
    647213 Newbie
    Going off that report, your 'db file scattered read' count sticks out like a sore thumb. This may indicate waits relating to full table scans, so check that any problematic SQL statements are using an index (presuming an index is more suitable than a full table scan, of course!).
  • 2. Re: enqueue and logfile parallel write
    PavanKumar Guru
    Hi,

    Try querying V$ENQUEUE_STAT so that we can dig further into which enqueue type the waits correspond to.

    For log file parallel write, the LGWR writes take place due to sync writes (21.87% in the report).
    Can you post further results from the report with respect to the redo statistics? That will help in analyzing the root cause of these waits.
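    As a starting point, a minimal sketch of such a query (column names as documented for the V$ENQUEUE_STAT view; requires SELECT privilege on the V$ views):

    ```sql
    -- Enqueue types that actually waited, worst first
    -- (on 9i, CUM_WAIT_TIME is reported in milliseconds)
    SELECT eq_type, total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time
    FROM   v$enqueue_stat
    WHERE  total_wait# > 0
    ORDER  BY cum_wait_time DESC;
    ```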

    - Pavan Kumar N
  • 3. Re: enqueue and logfile parallel write
    591309 Explorer
    Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
    -------------------------- -------------- ----------- ----------- ------------
    cache buffers lru chain         1,912,235         467           2 465/2/0/0/0
    enqueues                       11,772,298         460           1 459/1/0/0/0
    cache buffers chains           19,554,549         142           3 0/0/0/0/0
    enqueue hash chains             5,887,626         126           2 124/2/0/0/0
    virtual circuit buffers           265,444          51          16 35/16/0/0/0
    redo allocation                 2,948,395          48           1 47/1/0/0/0
    virtual circuit queues            117,984          29           1 28/1/0/0/0
    checkpoint queue latch            218,844          11           2 9/2/0/0/0
              -------------------------------------------------------------
  • 4. Re: enqueue and logfile parallel write
    591309 Explorer
    SQL> /
       INST_ID EQ TOTAL_REQ# TOTAL_WAIT#  SUCC_REQ# FAILED_REQ# CUM_WAIT_TIME       
    ---------- -- ---------- ----------- ---------- ----------- -------------       
             1 CF      11363           0      11363           0             0       
             1 CI       4874           0       4874           0             0       
             1 CU       1050           0       1050           0             0       
             1 DL         31           0         31           0             0       
             1 DR         15           0         15           0             0       
             1 DV        138           0        138           0             0       
             1 HW       3345           0       3345           0             0       
             1 IA          1           0          1           0             0       
             1 IS         93           0         93           0             0       
             1 KM   34619512           0   34619512           0             0       
             1 KT          1           0          1           0             0       
             1 LD          1           0          1           0             0       
             1 MR        561           0        561           0             0       
             1 PE        607           0        607           0             0       
             1 PF          1           0          1           0             0       
             1 PR        796           0        796           0             0       
             1 RT          1           0          1           0             0       
       INST_ID EQ TOTAL_REQ# TOTAL_WAIT#  SUCC_REQ# FAILED_REQ# CUM_WAIT_TIME       
    ---------- -- ---------- ----------- ---------- ----------- -------------       
             1 SQ         42           0         42           0             0       
             1 SR          1           0          1           0             0       
             1 SS          1           0          1           0             0       
             1 SW          3           0          3           0             0       
             1 TA        526           0        526           0             0       
             1 TD         77           0         77           0             0       
             1 TM      14085           0      14085           0             0       
             1 TS          3           0          3           0             0       
             1 TT       3476           0       3476           0             0       
             1 TX       9493         490       9339         154      10097567       
             1 US       6876           0       6876           0             0       
             1 WL         88           0         88           0             0       
  • 5. Re: enqueue and logfile parallel write
    PavanKumar Guru
    Hi,

    select
    distinct event,
    p1,
    mod(p1,16) as "mode"
    from v$active_session_history
    where event like 'enq:%';

    What is the output of the above query?

    - Pavan Kumar N
  • 6. Re: enqueue and logfile parallel write
    591309 Explorer
    Hi Pavan,
    I am on 9i, so the v$active_session_history view does not exist.
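    On 9i, the enqueue type and requested mode can instead be decoded from the P1 value of the 'enqueue' wait event in V$SESSION_WAIT. A sketch using the commonly published decode (run it while the waits are occurring):

    ```sql
    -- P1 of the 9i 'enqueue' event packs the two-character lock name
    -- in the high bytes and the requested mode in the low two bytes
    SELECT sid,
           chr(bitand(p1, -16777216)/16777215) ||
           chr(bitand(p1,  16711680)/65535)     AS enqueue_name,
           bitand(p1, 65535)                    AS requested_mode
    FROM   v$session_wait
    WHERE  event = 'enqueue';
    ```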
  • 7. Re: enqueue and logfile parallel write
    CharlesHooper Expert
    MYH wrote:
    Hi all,
    9.0.1.
    I am encountering enqueue and log file parallel write waits.
    How should I go about troubleshooting this? Below is a small snapshot of the Statspack report.
    MYH,

    Based on a calculation it appears that the Statspack duration is about 20 minutes (this is likely reasonable). I did not spend a lot of time analyzing the information that you posted, so a quick summary of what I see that looks interesting:
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts  Time (s)    (ms)     /txn
    ---------------------------- ------------ ---------- ---------- ------ --------
    enqueue                                83         58        208   2509      0.1
    resmgr:waiting in end wait            265          0         25     94      0.4
    log file parallel write             1,939          0         22     11      3.2
    resmgr:waiting in check                 8          0         17   2112      0.0
    In the above, notice that there are 58 time outs for the enqueue. The timeout period for an enqueue is 3 seconds, which means that you potentially had a single wait that lasted 58*3 (+ up to 2.99) seconds, which totals 174 (+ up to 2.99) seconds. To make the calculation simple, assume that this was a single wait lasting exactly 174 seconds. The remaining 47 seconds of the wait divided by 25 (83-58) means that the remaining average wait time is 1.88 seconds. With Oracle 9.0.1 it is more difficult to see how much time was spent in each of the different types of enqueues compared to Oracle 10g and above.

    Looking at the enqueue information that you posted:
                                                           Avg Wt        Wait
    Eq     Requests    Succ Gets Failed Gets       Waits  Time (ms)    Time (s)
    -- ------------ ------------ ----------- ----------- ----------- ------------
    TX          569          568           0          30    7,356.07          221
    It appears that all of the enqueue activity is of the type TX, which are row-level enqueues - usually meaning that someone updated a row, and there was a pause between the time that the update was performed and a commit or rollback was executed. There are multiple causes for TX enqueues which may not be found with a Statspack report. From my notes (possibly based on the Oracle Performance Tuning Guide):
    * TX enqueues are acquired exclusive when a transaction initiates its first change and held until the transaction COMMITs or ROLLBACKs.
    * Waits for TX in mode 6 occur when a session is waiting for a row level lock that is already held by another session. The wait event for mode 6 TX enqueues is 'enq: TX - row lock contention' on 10g and above.
    * Waits for TX in mode 4 can occur when a session wants to lock a row in the block, but one or more other sessions have rows locked in the same block and there is no free interested transaction list (ITL) slot in the block. Oracle will usually dynamically add another ITL slot if there is sufficient space in the block. To fix this problem, increase INITRANS or MAXTRANS for the table. This type of TX enqueue corresponds to the wait event 'enq: TX - allocate ITL' on 10g and above.
    * Waits for TX in mode 4 can also occur if the session is waiting for potential duplicates in a UNIQUE index, which would occur if a second session inserts a value before the first tries to insert the same value, and the second session has not yet committed. This type of enqueue corresponds to the wait event 'enq: TX - row lock contention' on 10g and above.
    * Waits for TX in mode 4 can also occur when a transaction is inserting a row into an index and must wait for the end of an index block split being done by another transaction. This type of TX enqueue wait corresponds to the wait event 'enq: TX - index' on 10g and above.

    If you are able to notice when the enqueue happens, a query against V$LOCK and V$SESSION will help you determine the specific type of enqueue, and what session is the blocker. See this article for the process:
    http://jonathanlewis.wordpress.com/2008/02/06/trouble-shooting-2/
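    For reference, a minimal sketch of such a check, using a common V$LOCK self-join pattern (not taken from the linked article; column names as in the V$LOCK view):

    ```sql
    -- Sessions waiting on a lock (REQUEST > 0) and the sessions holding it
    SELECT w.sid     AS waiting_sid,
           h.sid     AS holding_sid,
           w.type    AS lock_type,
           w.request AS requested_mode,
           h.lmode   AS held_mode
    FROM   v$lock w, v$lock h
    WHERE  w.request > 0
    AND    h.lmode   > 0
    AND    w.type = h.type
    AND    w.id1  = h.id1
    AND    w.id2  = h.id2;
    ```

    Joining the SIDs back to V$SESSION then identifies the blocking user and program.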

    The second highest wait event, "resmgr:waiting in end wait", may play a part in the enqueue wait. In your previous thread "Statspack Report", Jonathan Lewis described this event appearing in your wait events as follows:
    "... you get 'resource manager' waits when the CPU is overloaded and you have a resource management plan to protect some users. "

    You might want to look at the SQL statements sorted by CPU utilization based on the above description.

    Taking a look at a couple SQL statements that you posted:
                                                         CPU      Elapsd
      Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
    --------------- ------------ -------------- ------ -------- --------- ----------                                                           
          3,301,423           91       36,279.4   67.5     0.01      0.01 4086952936
    UPDATE IPAO.EMP_MAST SET OSCAL_CODE=:b1 WHERE :b1 < 600
      
            902,040           30       30,068.0   18.5     0.00      0.00 2333288512
    UPDATE IPAO.MON_ACT SET CHEQUE_NO=:b1,CHEQUE_DT=:b2,CHEQUE_AMT=:b3 WHERE V_NO = :b4  AND V_DATE = :b5
    The first of the two looks interesting. I am a little surprised by the small number for the CPU utilization considering the number of buffer gets. The WHERE clause "WHERE :b1 < 600" seems a little unusual.

    The log file parallel write is a background wait where the redo buffer is flushed to disk - it may or may not be of primary concern to performance.

    A quick look at the statistics shows a lot of data is being sent to the client/application computers (enough to saturate a 100Mb connection), which might indicate a network bottleneck is contributing to the problem:
    Statistic                                      Total     per Second    per Trans
    --------------------------------- ------------------ -------------- ------------
    bytes received via SQL*Net from c          3,714,390        3,258.2      6,079.2
    bytes sent via SQL*Net to client      10,470,217,793    9,184,401.6 ############
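    As a rough check of that claim (simple arithmetic; the per-second figure is taken from the statistics above):

    ```python
    # Convert "bytes sent via SQL*Net to client" per second into megabits per second
    bytes_per_sec = 9_184_401.6
    mbits_per_sec = bytes_per_sec * 8 / 1_000_000
    print(round(mbits_per_sec, 1))  # ~73.5 Mb/s, near the practical ceiling of a 100 Mb link
    ```
    
    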
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
  • 8. Re: enqueue and logfile parallel write
    591309 Explorer
    Hi Charles,
    Thanks for your analysis.
    There is a new module added in the application. Since then the database has become really slow, and the enqueue waits are a result of that module. The module updates a single row in three tables every time it is run. There are three separate update statements, and a commit is done only once, after the last update statement.
    Also, the archive generation interval is 2-4 minutes. The redo log files are 100MB each. It is a peak load time. Would moving the redo log files to a different disk help in reducing contention? Once, via the TOP command, I noticed the ARC process was consuming 70% of the CPU.

    The first SQL listed in Statspack is part of that module. It is taking 67% of the CPU time.
    Jonathan's solution was to gather statistics. Poor me, I am in a situation where I can try everything except gathering statistics. I am on the RBO, and the CBO just does not use indexes on existing queries, making the condition worse [I would like to know why?].
    What is unusual about that SQL statement? Kindly guide me; I may try to change the code.
  • 9. Re: enqueue and logfile parallel write
    CharlesHooper Expert
    MYH wrote:
    Hi Charles,
    Thanks for your analysis.
    There is a new module added in the application. Since then the database has become really slow, and the enqueue waits are a result of that module. The module updates a single row in three tables every time it is run. There are three separate update statements, and a commit is done only once, after the last update statement.
    Also, the archive generation interval is 2-4 minutes. The redo log files are 100MB each. It is a peak load time. Would moving the redo log files to a different disk help in reducing contention? Once, via the TOP command, I noticed the ARC process was consuming 70% of the CPU.
    The application may be executing in this manner to intentionally serialize access to the various tables involved, should another session attempt to perform the same action. If that is the case, the application is likely working as designed and the enqueue waits should be mostly ignored. However, your goal then becomes, assuming that the design of the application is correct, to minimize the time required for the updates to complete (I think that you have already reached the same conclusion).

    The typical recommendation is that redo logs switch once every 15 to 20 minutes, so you may need much larger redo logs (possibly 512MB). It is possible that moving the redo logs to a separate set of disks will help, or hurt. If the redo logs are currently spread across an 8+ drive RAID 10 array with several other Oracle related files, you will likely find that the database performance will decrease by moving the redo log files to a 2 spindle RAID 1 array that is used only for redo logs.
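    To check the current switch rate, a sketch against V$LOG_HISTORY (available on 9i) might look like this:

    ```sql
    -- Redo log switches per hour; roughly 3-4 per hour matches the
    -- "one switch every 15 to 20 minutes" guideline
    SELECT trunc(first_time, 'HH24') AS hour, COUNT(*) AS switches
    FROM   v$log_history
    GROUP  BY trunc(first_time, 'HH24')
    ORDER  BY hour;
    ```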
    The first sql listed in statspack is the part of that module.It is taking 67% of CPU time.
    That is not what the statistics seemed to indicate (maybe I looked at the wrong statistics), but based on the number of logical reads, I can understand why a query like this would appear near the top of the SQL statements sorted by CPU:
    UPDATE IPAO.EMP_MAST SET OSCAL_CODE=:b1 WHERE :b1 < 600
    In the above, depending on what is passed in for the bind variable b1, either all rows will be updated in the table (even if OSCAL_CODE was already equal to the bind variable b1), or none of the rows will be updated. It might be interesting to gather a 10046 trace at level 12 for a session executing the above SQL statement to gain a better understanding of what the application is trying to accomplish.
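    A level 12 10046 trace (which captures bind variable values and wait events) can be enabled for the current session as follows; to trace another session, a packaged procedure such as DBMS_SYSTEM.SET_EV would be needed instead:

    ```sql
    -- Enable extended SQL trace with binds and waits for this session
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

    -- ... execute the UPDATE statement of interest here ...

    -- Disable the trace; the trace file is written to USER_DUMP_DEST
    ALTER SESSION SET EVENTS '10046 trace name context off';
    ```

    The BINDS sections in the resulting trace file will show what value the application actually passes for :b1, which should make clear whether the statement updates every row or none.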
    Jonathan's solution was to gather statistics. Poor me, I am in a situation where I can try everything except gathering statistics. I am on the RBO, and the CBO just does not use the indexes for the existing queries, making the situation worse [I would like to know why?]
    What is unusual about that SQL statement? Kindly guide; I may try to change the code.
    I tried to explain what I found a bit unusual about the SQL statement in the above section of my reply (answered above).

    As I recall, Jonathan suggested (later in the other thread) that you should either have no statistics so that in effect the RULE based optimizer will continue to be used, or to upgrade to a more recent release of Oracle as Oracle 10.2.0.1 did not seem to have the same issues as 9.0.1 when statistics were gathered (I believe that was one of his suggestions), or to leverage the features in the cost based optimizer to correct the performance problems - I believe that I suggested the use of stored outlines, and I believe that Jonathan also suggested that as a possible approach to stabilize the execution plans to known good plans.

    You probably did not see my last reply in the Statspack Report before that thread was removed from the forum. This is a repost of that response:
    -------------------------------------------------------------------
    MYH wrote:
    Here is the explain plan of the same query on 10.2.0.1
    (snip)
    All the options for the optimizer are set to the default. So in 10g it uses the index.

    MYH,

    Thanks for posting the explain plans for Oracle 9.0.1 and 10.2.0.1. One of the problems with explain plans is that explain plans do not always show the actual execution plan, and that becomes a more significant problem when bind variables are included in the SQL statements. You will often have better luck extracting the actual execution plan from a 10046 trace.

    Reposting what you have included for quick reference, starting first with what I believe is the query that was used to generate the plan:
    CPU      Elapsd
       Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
          1,038,323           53       19,591.0   12.8   174.09    197.94 4147401526
    SELECT NVL(SUM(MON_DED),0)   FROM IPAO.MON_DED  WHERE DED_CODE =
     :b1  AND TO_CHAR(UPTO_DATE,'MON-YYYY') = :b2  AND EMP_CODE IN (
    SELECT EMP_CODE   FROM IPAO.MON_ACT  WHERE PROCESS_ID = :b3  AND
     OFF_CODE = :b4  AND BILL_CODE = :b5 )
      
    9.0.1
    Plan Table
    ---------------------------------------------------------
    Operation Name                         Rows  Bytes Cost  
    ---------------------------------------------------------
    | SELECT STATEMENT                    |   1 | 64 | 2891 |  
    |  SORT AGGREGATE                     |   1 | 64 |      |
    |   HASH JOIN SEMI                    |   1 | 64 | 2891 | 
    |    TABLE ACCESS FULL |MON_DED       | 817 | 23K| 2881 |
    |    TABLE ACCESS BY INDEX  R|MON_ACT |   1 | 34 |   10 |
    |     INDEX RANGE SCAN  |IDXMONACTPROCID|  1 |  3 |      |
       
    10.2.0.1
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------------------
    | Id  | Operation                       | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                |                 |     1 |    64 |    35   (3)| 00:00:01 |
    |   1 |  SORT AGGREGATE                 |                 |     1 |    64 |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID   | MON_DED         |     1 |    30 |     3   (0)| 00:00:01 |
    |   3 |    NESTED LOOPS                 |                 |     1 |    64 |    35   (3)| 00:00:01 |
    |   4 |     SORT UNIQUE                 |                 |     1 |    34 |    31   (0)| 00:00:01 |
    |*  5 |      TABLE ACCESS BY INDEX ROWID| MON_ACT         |     1 |    34 |    31   (0)| 00:00:01 |
    |*  6 |       INDEX RANGE SCAN          | IDXMONACTPROCID |   422 |       |     4   (0)| 00:00:01 |
    |*  7 |     INDEX RANGE SCAN            | SYS_C0076919    |     1 |       |     2   (0)| 00:00:01 |
    From the explain plans, it appears that Oracle 9.0.1 did not make much of an attempt to rewrite the SQL statement. If I am reading the plan correctly, Oracle expects 817 rows to make it out of the predicates applied to the full tablescan of MON_DED, and then for each one of those rows the combination of the MON_ACT table and the IDXMONACTPROCID index is probed to determine if a row returned from the MON_DED table should be returned to the client. A hash join was selected.

    Oracle 10.2.0.1 rewrote the SQL statement into a more efficient form, essentially retrieving a distinct list of EMP_CODE from the MON_ACT table, which was joined to the primary key column on the MON_DED table. If you are able to manually rewrite the SQL statement in Oracle 9.0.1, you might achieve similar performance. Jonathan, or one of the other contributors to this thread, might be able to tell you if the transformation is related to a specific optimizer parameter which needs to be adjusted in 9.0.1 to achieve a similar result without having to rewrite the SQL statements manually. A 10053 trace from both versions of Oracle for a hard parse of this SQL statement might be helpful.
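    The 10053 trace mentioned above can be enabled like this (standard event syntax; note that a 10053 trace is only written when the statement is hard parsed, so a changed comment or literal may be needed to force a new parse):

    ```sql
    -- Capture the optimizer's cost calculations during a hard parse
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

    -- ... hard parse the SQL statement of interest here ...

    -- Disable the trace; output is written to USER_DUMP_DEST
    ALTER SESSION SET EVENTS '10053 trace name context off';
    ```

    Comparing the query transformation sections of the 9.0.1 and 10.2.0.1 trace files should show where the two optimizers diverge.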

    Oracle 10.2.0.1 essentially rewrote the SQL statement to look something like this:
    SELECT /*+ ORDERED */
      NVL(SUM(MON_DED.MON_DED),0)
    FROM
      (SELECT DISTINCT
        EMP_CODE
      FROM
        IPAO.MON_ACT
      WHERE
        PROCESS_ID = :b3
        AND OFF_CODE = :b4
        AND BILL_CODE = :b5) EC,
      IPAO.MON_DED
    WHERE
      EC.EMP_CODE = MON_DED.EMP_CODE;
    -------------------------------------------------------------------

    One of the options that you have is to create stored outlines for the queries while they are running quickly with the rule based optimizer (I believe that is an option even when the rule based optimizer is in use, but you should probably verify that this is the case). Then, when you start gathering good statistics, the execution plans that you have now should remain.
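    As a sketch, outline capture can be driven with the CREATE_STORED_OUTLINES session parameter (the category name RBO_PLANS below is just an example; verify the behavior on 9.0.1 with the rule based optimizer before relying on it):

    ```sql
    -- Capture stored outlines into a named category while plans are good
    ALTER SESSION SET CREATE_STORED_OUTLINES = RBO_PLANS;

    -- ... run the application's SQL statements here so outlines are captured ...

    ALTER SESSION SET CREATE_STORED_OUTLINES = FALSE;

    -- Later, after statistics are gathered, reuse the captured plans
    ALTER SESSION SET USE_STORED_OUTLINES = RBO_PLANS;
    ```

    The captured outlines can be reviewed through the USER_OUTLINES and USER_OUTLINE_HINTS views before they are put to use.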

    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
