This discussion is archived
46 Replies. Latest reply: Jan 6, 2013 7:35 AM by yxes2013
  • 30. Re: Redo Logs Sizing
    sb92075 Guru
    Please learn how to correctly use [code] tags.
    
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    
    [code]
    top - 18:19:07 up 7 days,  8:08,  3 users,  load average: 0.02, 0.05, 0.01
    Tasks: 163 total,   3 running, 160 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.7%us,  1.0%sy,  0.0%ni, 98.0%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:    909120k total,   555808k used,   353312k free,    64672k buffers
    Swap:  1735012k total,   399056k used,  1335956k free,   361496k cached
    
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
     2526 root      15   0 33172 8468 3436 R  0.7  0.9  33:17.24 Xorg               
     2229 oracle    16   0  602m 9728 9368 S  0.3  1.1  11:48.19 oracle             
     2857 oracle    15   0 47824 2580 1912 S  0.3  0.3   7:20.54 gnome-power-man    
    17238 oracle    15   0  2424 1060  800 R  0.3  0.1   0:00.03 top                
        1 root      15   0  2156  292  268 S  0.0  0.0   0:11.40 init           
    [/code]
    
    post results from top as above & vmstat as below
    
    [code]
    [oracle@localhost ~]$ vmstat 6 5
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0 399044 349612  64760 361516    4    3    10    31   54   47  1  2 97  0  0
     0  0 399044 349496  64776 361544    0    0     0    21 1016  985  1  1 98  0  0
     0  0 399044 353316  64784 361544    0    0     0    13 1001 1027  1  2 97  0  0
     0  0 399044 349744  64824 361552    0    0     1    54 1004 1056  2  4 94  0  0
     0  0 399044 349488  64848 361552    0    0     0    38 1003 1057  1  1 98  0  0
    [oracle@localhost ~]$ 
    [/code]
  • 31. Re: Redo Logs Sizing
    yxes2013 Newbie
    Oh, there you are :) so I run vmstat, and then what? :(

    Actually, after two days of running the programs, the backlog finished, but the damage has been done. Management just does not want this problem to happen again.

    Supposing the issue repeats, and my boss asks me to check the system for what is causing the performance problem.

    Can you list some steps to take, aside from panicking?

    0. Panic and chill.
    1. Run top (I see 12 cores at 99-100%), so what?
    2. Run vmstat (how do I see the thing causing the issue?)
    3. Query the v$ dynamic performance views? What do I check here?
    4. Kill programs with high resource usage.
    5. Restart the database and server.




    [oraprod@oel5 ~]$ vmstat 6 5
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     2  0  84856  60708 111664 11060748    0    0   375    76    1    9 12  1 84  2  0
     2  0  84856  67776 111680 11060804    0    0     0   270  386  232 13  4 83  0  0
     2  0  84856  65116 111712 11061220    0    0    37   185  531  334 12  5 83  0  0
     2  0  84856  65116 111720 11061228    0    0     0    22  357  143 12  5 83  0  0
     2  0  84856  65488 111776 11061224    0    0     0    71  388  162 12  5 83  0  0
    [oraprod@oel5 ~]$
    
    top - 11:21:44 up 2 days,  4:03,  1 user,  load average: 1.21, 0.63, 0.59
    Tasks: 365 total,   3 running, 362 sleeping,   0 stopped,   0 zombie
    Cpu(s):  7.9%us,  0.5%sy,  0.0%ni, 91.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:  12079104k total, 11994044k used,    85060k free,   111456k buffers
    Swap:  6104692k total,    84884k used,  6019808k free, 11079208k cached
    
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
     5033 oraprod   25   0 1691m 184m 175m R 100.0  1.6   1:13.23 oracle
      458 oraprod   15   0 1685m 134m 132m S  0.7  1.1   0:16.33 oracle
        1 root      15   0  2184  684  580 S  0.0  0.0   0:00.70 init
        2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.02 migration/0
        3 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
        4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
        5 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/1
        6 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/1
        7 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/1
    Edited by: yxes2013 on 4.1.2013 18:46
  • 32. Re: Redo Logs Sizing
    sb92075 Guru
    Can't tune an idle system.

    I have no idea if the bottleneck is CPU, RAM, I/O, or network.
    Unless and until you know which resource is being oversubscribed, you are simply shooting in the dark at possible solutions and hoping to get lucky.
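    As a rough way to put numbers behind "which resource is oversubscribed," the vmstat columns already posted in this thread can be classified mechanically. This is only a sketch: the thresholds below are arbitrary illustrations, not tuning advice, and the column positions assume the exact layout shown in the output above.

    ```shell
    #!/bin/sh
    # Rough triage of one vmstat interval.  Column layout, as in the posted output:
    #  r b swpd free buff cache si so bi bo in cs us sy id wa st
    # Thresholds are illustrative only -- adjust to your own system.
    classify() {
      echo "$1" | awk '{
        si = $7; so = $8; id = $15; wa = $16
        if (si + so > 100)  print "swapping: likely short of RAM"
        else if (wa > 20)   print "high iowait: likely I/O bound"
        else if (id < 10)   print "CPU saturated: likely CPU bound"
        else                print "no obvious OS-level bottleneck"
      }'
    }
    # one interval from the output posted earlier in the thread
    classify " 2  0  84856  65116 111712 11061220    0    0    37   185  531  334 12  5 83  0  0"
    ```

    Run against the intervals posted above, every line falls into the last branch, which is exactly sb92075's point: you cannot tune an idle system; capture the numbers while it is slow.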
  • 33. Re: Redo Logs Sizing
    yxes2013 Newbie
    Thanks.
    sb92075 wrote: I have no idea if the bottleneck is CPU, RAM, I/O, or network.
    So I can see the bottleneck by using vmstat, even though it cannot be seen in the statspack report?
    Can you tell me how, please.
  • 34. Re: Redo Logs Sizing
    yxes2013 Newbie
    Hi All,

    Can you tell me what you do (your SOP) when there is a performance issue (all users complaining about slow response time) and your boss asks you to check the database and server systems?


    Thanks a lot
  • 35. Re: Redo Logs Sizing
    sb92075 Guru
    Ask the Unix system administrator for assistance in identifying which system resource is the bottleneck when the system is under load and slowing down.

    A post mortem can be done by examining previously collected sar data, if you know what to look for.
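    The sar post mortem can be sketched the same way: scan the saved intervals for the ones where the CPU was saturated or stuck on I/O. The sample lines below are invented for illustration, and the `/var/log/sa/sa04` path in the comment is the conventional sysstat location, not something confirmed for this host.

    ```shell
    #!/bin/sh
    # Post-mortem sketch over saved sar output.  On the real host you would feed
    # something like  LC_ALL=C sar -u -f /var/log/sa/sa04  into flag_sar.
    flag_sar() {
      # sar -u columns: time AM/PM CPU %user %nice %system %iowait %steal %idle
      awk '{
        iowait = $7; idle = $9
        if (idle < 10)        print $1, $2, "-> CPU saturated (idle " idle "%)"
        else if (iowait > 20) print $1, $2, "-> I/O bound (iowait " iowait "%)"
      }'
    }
    # invented sample data: a quiet interval, a CPU-bound one, an I/O-bound one
    sample='12:00:01 AM all 11.52 0.00 4.90 1.20 0.00 82.38
    12:10:01 AM all 97.10 0.00 2.10 0.30 0.00 0.50
    12:20:01 AM all 8.00 0.00 3.00 41.00 0.00 47.00'
    echo "$sample" | flag_sar
    ```

    The point, as above, is knowing what to look for: a flagged interval tells you which resource to investigate, not yet which session or SQL caused it.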
  • 36. Re: Redo Logs Sizing
    yxes2013 Newbie
    But I know this can be done inside Oracle too :(
  • 37. Re: Redo Logs Sizing
    sb92075 Guru
    yxes2013 wrote:
    But I know this can be done inside Oracle too :(
    Nobody here prevents you from doing so.
  • 38. Re: Redo Logs Sizing
    JohnWatson Guru
    Hello, yxes. You have had a few people commenting on this - hope you don't mind me butting in.
    First, your online logfiles are 10M. If your EBS instance is configured like most, you have log_buffer=10M too. So potentially, every LGWR write will force a log switch. This is awful. So even though you aren't actually generating much redo, I would use 500M log files. Several people have recommended this. It may solve a lot of problems.
    Second, your system is almost certainly memory bound. You say that your SGA is restricted to a maximum of 1.6G because of the 32-bit environment; that isn't correct. You can adjust the load address of the Oracle processes to make about 2.7G available (awkward), and/or you can use indirect buffers to increase the buffer cache further (easier). I haven't had to do this for years, but google around and look on MOS and you'll find details.
    Third, your ora-1555 errors, sure you need to raise the undo_retention, but that may not help unless you also increase the size of your undo tablespace. But to have any chance of fixing ora-1555 errors, you really need the 9.2.0.8 patchset applied.
    Fourth, you say the problem is with AutoInvoice. There may be a lot you can do to tune AutoInvoice, but first do the above, and the AutoInvoice issues may go away.
    You can get all this done before next month end, no problem. Do the redo and undo changes right now, apply the patchset tomorrow, research memory in the week and make the changes next weekend.
    That's the plan I would follow.
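    The redo change in the plan above is the most mechanical step. This sketch only prints the SQL involved rather than executing it; the group numbers and the /u01/oradata/PROD path are hypothetical placeholders, so substitute your own layout, and drop an old group only once it shows INACTIVE in V$LOG.

    ```shell
    #!/bin/sh
    # Sketch: emit the SQL for replacing the 10M online redo logs with 500M ones.
    # Group numbers and file paths below are made up -- match your own layout.
    gen_redo_resize() {
      for g in 4 5 6; do
        printf "ALTER DATABASE ADD LOGFILE GROUP %s ('/u01/oradata/PROD/redo0%sa.log') SIZE 500M;\n" "$g" "$g"
      done
      # then switch logfiles until each old group shows INACTIVE in V$LOG, and:
      printf "ALTER DATABASE DROP LOGFILE GROUP 1;\n"
    }
    gen_redo_resize
    ```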
  • 39. Re: Redo Logs Sizing
    Jonathan Lewis Oracle ACE Director
    yxes2013 wrote:
    And each of the account managers is asking to prioritize his or her account for the delivery deadline. Do I need to cache the entire table in memory? Or partition the table by account and put it
    on separate disks, so that each has its own I/O? Can you check whether creating additional indexes will improve performance? How do I know which columns need more indexes based on the statspack? Do I need to display each program's EXPLAIN PLAN? My instinct tells me that I must create an index to improve the performance.
    You do NOT have an I/O problem - almost all your I/O is coming from the file system cache, fiddling with discs won't make a difference.
    Since you didn't look at the SQL ordered by physical reads I did. The top statement is not the same as the top by buffer gets, so you have TWO statements to optimise; however the Top by physical reads (281M blocks read in 2 hours) appears to be calling the next three statements (94M each blocks read in 2 hours) - so either they need to be fixed, or the plan of the top statement needs to be controlled to call the other statements far less frequently - they appear to be called from a PL/SQL package, so perhaps you have a pl/sql function call in the main query that is being called too frequently because of a change in execution plan.
    These kinds of batch jobs cannot afford to be delayed, since they are a critical need for product delivery to lots of clients. The problem which caused the delay yesterday resulted in cancelled orders worth P5 million, says management, plus the penalty for not being able to meet the service level agreement.
    This may be why joel garry suggested getting in some expertise - I've already told you what you need to do, but you haven't followed up my suggestions. If your company is losing P5million (I can't guess which currency the P stands for) then perhaps they should spend a few Pthousand getting someone to sort out their problem quickly.



    Regards
    Jonathan Lewis
  • 40. Re: Redo Logs Sizing
    yxes2013 Newbie
    Thanks, JohnWatson & Jonathan Lewis,

    But my instinct tells me that the problem is a lack of indexes, because that has always been the cause of slow-performing SQL.

    I tried to get the EXPLAIN PLAN of the top SQL resource consumer, using its hash value from the statspack report, to see which column searches it used, but it does not show in my level 6 statspack.
    The docs say that I have to set statspack to a certain level to get the explain plan, but it is not showing in my report. How do I check my current statspack level?
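    The level question can be answered from the PERFSTAT schema. The statements below come from the standard statspack package (plans are captured at snapshot level 6 or above, and only for SQL that crossed the thresholds at snapshot time); they are printed here rather than executed, since they need a live instance.

    ```shell
    #!/bin/sh
    # Sketch: statements you would run in SQL*Plus as PERFSTAT to check and
    # raise the statspack snapshot level.
    statspack_level_sql() {
    cat <<'SQL'
    -- current default snapshot level
    SELECT snap_level FROM stats$statspack_parameter;
    -- take one snapshot at level 6 (level >= 6 captures execution plans)
    EXEC statspack.snap(i_snap_level => 6);
    -- make level 6 the default for future snapshots
    EXEC statspack.modify_statspack_parameter(i_snap_level => 6);
    SQL
    }
    statspack_level_sql
    ```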
    SQL ordered by Gets for DB: DEV  Instance: DEV  Snaps: 247 -248
    -> End Buffer Gets Threshold:     10000
    -> Note that resources reported for PL/SQL includes the resources used by
       all SQL statements called within the PL/SQL code.  As individual SQL
       statements are also reported, it is possible and valid for the summed
       total % to exceed 100
    
                                                         CPU      Elapsd
      Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
    --------------- ------------ -------------- ------ -------- --------- ----------
          3,809,892            5      761,978.4   60.9     6.73     23.80 2076298416
    Module: FNDWFBG
    BEGIN WF_ENGINE.BACKGROUNDCONCURRENT( :errbuf, :rc,:A0,:A1,:A2,:
    A3,:A4,:A5); END;
    
          3,619,193            5      723,838.6   57.8     4.75     16.11 2881611387
    Module: FNDWFBG
    SELECT /*+ ORDERED USE_NL (WIASP WI WPAP WAP)                 IN
    DEX (WIASP WF_ITEM_ACTIVITY_STATUSES_N1) */ WIASP.ROWID ROW_ID F
    ROM WF_ITEM_ACTIVITY_STATUSES WIASP, WF_ITEMS WI, WF_PROCESS_ACT
    IVITIES WPAP, WF_ACTIVITIES WAP WHERE WIASP.PROCESS_ACTIVITY = W
    PAP.INSTANCE_ID AND WPAP.ACTIVITY_ITEM_TYPE = WAP.ITEM_TYPE AND
    
          1,548,084            3      516,028.0   24.7     1.68      1.64  229079020
    Module: FNDOAMCOL
    begin fnd_oam_collection.refresh_exceptions_summary; end;
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    ------------ ----------- ------------ -------- ----------- ------- ------------
    DEV           3856510945 DEV                 1 9.2.0.6.0   NO      oel5.
                                                                       e.local
    
     Start Id     Start Time         End Id      End Time       Duration(mins)
    --------- ------------------- --------- ------------------- --------------
          247 05-Jan-13 21:01:23        248 05-Jan-13 21:27:26           26.05
    
    SQL Statistics
    ~~~~~~~~~~~~~~
    -> CPU and Elapsed Time are in seconds (s) for Statement Total and in
       milliseconds (ms) for Per Execute
                                                           % Snap
                         Statement Total      Per Execute   Total
                         ---------------  ---------------  ------
            Buffer Gets:       3,809,892        761,978.4   60.88
             Disk Reads:         173,005         34,601.0   72.42
         Rows processed:               5              1.0
         CPU Time(s/ms):               7          1,346.5
     Elapsed Time(s/ms):              24          4,759.3
                  Sorts:               0               .0
            Parse Calls:               5              1.0
          Invalidations:               0
          Version count:               1
        Sharable Mem(K):              12
             Executions:               5
    
    SQL Text
    ~~~~~~~~
    BEGIN WF_ENGINE.BACKGROUNDCONCURRENT( :errbuf, :rc,:A0,:A1,:A2,:
    A3,:A4,:A5); END;
    
    Plans in shared pool between Begin and End Snap Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Shows the Execution Plans found in the shared pool between the begin and end
    snapshots specified.  The values for Rows, Bytes and Cost shown below are those
    which existed at the time the first-ever snapshot captured this plan - these
    values often change over time, and so may not be indicative of current values
    -> Rows indicates Cardinality, PHV is Plan Hash Value
    -> ordered by Plan Hash Value
    
    --------------------------------------------------------------------------------
    | Operation                      | PHV/Object Name     |  Rows | Bytes|   Cost |
    --------------------------------------------------------------------------------
    --------------------------------------------------------------------------------
    
    End of Report
    Edited by: yxes2013 on 5.1.2013 19:48
  • 41. Re: Redo Logs Sizing
    yxes2013 Newbie
    My apologies, the hash value I got was incorrect.

    This is the correct output now. Can you tell me if I need to add an index on some columns, or is the performance OK even if the SQL uses high disk resources?
    Is tracing better than statspack, like setting sql_trace=true in the init.ora?
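    On the tracing question: rather than an instance-wide init.ora setting, a single session can be traced with event 10046 (level 12 includes waits and binds). A sketch of the statements, printed rather than run; the 'wf_test' identifier is a made-up example, and the tkprof sort options in the comment are one common choice.

    ```shell
    #!/bin/sh
    # Sketch: session-level 10046 tracing instead of an instance-wide parameter.
    trace_session_sql() {
    cat <<'SQL'
    ALTER SESSION SET tracefile_identifier = 'wf_test';
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    -- run the suspect statement here, then:
    ALTER SESSION SET events '10046 trace name context off';
    SQL
    }
    trace_session_sql
    # the trace lands in user_dump_dest; format it with something like:
    #   tkprof <trace file> out.prf sort=prsela,exeela,fchela
    ```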
    STATSPACK SQL report for Hash Value: 2881611387  Module: FNDWFBG
    
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    ------------ ----------- ------------ -------- ----------- ------- ------------
    DEV           3856510945 DEV                 1 9.2.0.6.0   NO      oel5.
                                                                       e.local
    
     Start Id     Start Time         End Id      End Time       Duration(mins)
    --------- ------------------- --------- ------------------- --------------
          247 05-Jan-13 21:01:23        248 05-Jan-13 21:27:26           26.05
    
    SQL Statistics
    ~~~~~~~~~~~~~~
    -> CPU and Elapsed Time are in seconds (s) for Statement Total and in
       milliseconds (ms) for Per Execute
                                                           % Snap
                         Statement Total      Per Execute   Total
                         ---------------  ---------------  ------
            Buffer Gets:       3,619,193        723,838.6   57.84
             Disk Reads:           1,456            291.2     .61
         Rows processed:               0              0.0
         CPU Time(s/ms):               5            950.5
     Elapsed Time(s/ms):              16          3,222.5
                  Sorts:               0               .0
            Parse Calls:               5              1.0
          Invalidations:               0
          Version count:               1
        Sharable Mem(K):              39
             Executions:               5
    
    SQL Text
    ~~~~~~~~
    SELECT /*+ ORDERED USE_NL (WIASP WI WPAP WAP)                 IN
    DEX (WIASP WF_ITEM_ACTIVITY_STATUSES_N1) */ WIASP.ROWID ROW_ID F
    ROM WF_ITEM_ACTIVITY_STATUSES WIASP, WF_ITEMS WI, WF_PROCESS_ACT
    IVITIES WPAP, WF_ACTIVITIES WAP WHERE WIASP.PROCESS_ACTIVITY = W
    PAP.INSTANCE_ID AND WPAP.ACTIVITY_ITEM_TYPE = WAP.ITEM_TYPE AND
    WPAP.ACTIVITY_NAME = WAP.NAME AND WIASP.ITEM_TYPE = WI.ITEM_TYPE
     AND WIASP.ITEM_KEY = WI.ITEM_KEY AND WI.BEGIN_DATE >= WAP.BEGIN
    _DATE AND WI.BEGIN_DATE < NVL(WAP.END_DATE, WI.BEGIN_DATE+1) AND
     WAP.TYPE = 'PROCESS' AND WIASP.ACTIVITY_STATUS = 'ACTIVE' AND N
    OT EXISTS (SELECT NULL FROM WF_ITEM_ACTIVITY_STATUSES WIASC, WF_
    PROCESS_ACTIVITIES WPAC WHERE WAP.ITEM_TYPE = WPAC.PROCESS_ITEM_
    TYPE AND WAP.NAME = WPAC.PROCESS_NAME AND WAP.VERSION = WPAC.PRO
    CESS_VERSION AND WPAC.INSTANCE_ID = WIASC.PROCESS_ACTIVITY AND W
    IASC.ITEM_TYPE = DECODE(WAP.DIRECTION, WAP.DIRECTION, WI.ITEM_TY
    PE, WI.ITEM_TYPE) AND WIASC.ITEM_KEY = WI.ITEM_KEY AND WIASC.ACT
    IVITY_STATUS IN ('ACTIVE', 'NOTIFIED', 'SUSPEND', 'DEFERRED', 'E
    RROR'))
    
    
    Known Optimizer Plan(s) for this Hash Value
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Shows all known Optimizer Plans for this database instance, and the Snap Id's
    they were first found in the shared pool.  A Plan Hash Value will appear
    multiple times if the cost has changed
    -> ordered by Snap Id
    
     First        First          Plan
    Snap Id     Snap Time     Hash Value        Cost
    -------- --------------- ------------ ----------
         245 05 Jan 13 21:00    419719774    132833
    
    Plans in shared pool between Begin and End Snap Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Shows the Execution Plans found in the shared pool between the begin and end
    snapshots specified.  The values for Rows, Bytes and Cost shown below are those
    which existed at the time the first-ever snapshot captured this plan - these
    values often change over time, and so may not be indicative of current values
    -> Rows indicates Cardinality, PHV is Plan Hash Value
    -> ordered by Plan Hash Value
    
    --------------------------------------------------------------------------------
    | Operation                      | PHV/Object Name     |  Rows | Bytes|   Cost |
    --------------------------------------------------------------------------------
    |SELECT STATEMENT                |----- 419719774 -----|       |      | 132833 |
    |FILTER                          |                     |       |      |        |
    | TABLE ACCESS BY INDEX ROWID    |WF_ACTIVITIES        |     1 |   57 |      3 |
    |  NESTED LOOPS                  |                     |     1 |  143 | 132827 |
    |   NESTED LOOPS                 |                     |    25K|    2M|  55727 |
    |    NESTED LOOPS                |                     |    25K|    1M|  30027 |
    |     TABLE ACCESS BY INDEX ROWID|WF_ITEM_ACTIVITY_STA |    25K|  878K|   4249 |
    |      INDEX RANGE SCAN          |WF_ITEM_ACTIVITY_STA |    25K|      |   1049 |
    |     TABLE ACCESS BY INDEX ROWID|WF_ITEMS             |     1 |   22 |      2 |
    |      INDEX RANGE SCAN          |WF_ITEMS_PK          |     1 |      |      1 |
    |    TABLE ACCESS BY INDEX ROWID |WF_PROCESS_ACTIVITIE |     1 |   29 |      1 |
    |     INDEX UNIQUE SCAN          |WF_PROCESS_ACTIVITIE |     1 |      |        |
    |   INDEX RANGE SCAN             |WF_ACTIVITIES_PK     |     1 |      |      2 |
    | NESTED LOOPS                   |                     |     1 |   65 |      6 |
    |  TABLE ACCESS BY INDEX ROWID   |WF_ITEM_ACTIVITY_STA |     1 |   28 |      5 |
    |   INDEX RANGE SCAN             |WF_ITEM_ACTIVITY_STA |     2 |      |      4 |
    |  TABLE ACCESS BY INDEX ROWID   |WF_PROCESS_ACTIVITIE |     1 |   37 |      1 |
    |   INDEX UNIQUE SCAN            |WF_PROCESS_ACTIVITIE |     1 |      |        |
    --------------------------------------------------------------------------------
    
    End of Report
    Edited by: yxes2013 on 5.1.2013 19:44
  • 42. Re: Redo Logs Sizing
    sb92075 Guru
    yxes2013 wrote:
    My apologies, the hash value I got was incorrect.

    This is the correct output now. Can you tell me if I need to add an index on some columns, or is the performance OK even if the SQL uses high disk resources?
    Is tracing better than statspack, like setting sql_trace=true in the init.ora?
    STATSPACK SQL report for Hash Value: 2881611387  Module: FNDWFBG
    
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    ------------ ----------- ------------ -------- ----------- ------- ------------
    DEV           3856510945 DEV                 1 9.2.0.6.0   NO      oel5.
                                                                       e.local
    
     Start Id     Start Time         End Id      End Time       Duration(mins)
    --------- ------------------- --------- ------------------- --------------
          247 05-Jan-13 21:01:23        248 05-Jan-13 21:27:26           26.05
    
    SQL Statistics
    ~~~~~~~~~~~~~~
    -> CPU and Elapsed Time are in seconds (s) for Statement Total and in
       milliseconds (ms) for Per Execute
                                                           % Snap
                         Statement Total      Per Execute   Total
                         ---------------  ---------------  ------
            Buffer Gets:       3,619,193        723,838.6   57.84
             Disk Reads:           1,456            291.2     .61
         Rows processed:               0              0.0
         CPU Time(s/ms):               5            950.5
     Elapsed Time(s/ms):              16          3,222.5
                  Sorts:               0               .0
            Parse Calls:               5              1.0
          Invalidations:               0
          Version count:               1
        Sharable Mem(K):              39
             Executions:               5
    ZERO Rows Processed by posted SQL for a TOTAL Elapsed Time of about 16 seconds!
    I am not convinced that this SQL is the root cause of the reported system constipation.
  • 43. Re: Redo Logs Sizing
    JohnWatson Guru
    I've already told you exactly, with an action plan, what I would do.

    I do not think that anything can be gained by my commenting on some statement that you ran 5 times in 26 minutes with an average elapsed time of 3.2 seconds.
    Perhaps someone else will help.
  • 44. Re: Redo Logs Sizing
    Jonathan Lewis Oracle ACE Director
    yxes2013 wrote:
    My apologies, the hash value I got was incorrect.

    This is the correct output now. Can you tell me if I need to add an index on some columns, or is the performance OK even if the SQL uses high disk resources?
    Is tracing better than statspack, like setting sql_trace=true in the init.ora?
    STATSPACK SQL report for Hash Value: 2881611387  Module: FNDWFBG
    Looking at your earlier output and the full trace file you posted on the file-sharing service, the top consumer for resources was 835260576 by a long way, not the example you've just posted. The example you've posted looks as if it could do with testing with the hints removed, and for this case, tracing would be the next step (unless you know the data very well) so that you can see if the predictions match reality and, in particular, how much work is done collecting data BEFORE the "not exists" subquery, and how many times the subquery runs.

    I'm going to try to find a little time to write a few notes about your full trace this evening, or maybe tomorrow evening; I'll post a note here when I've published it.
    To answer your question about indexes - the report shows three statements that are probably using full tablescans when they could probably be using indexed access paths. This may mean you need to add an index or two, but it may mean the current statistics are misleading or, for other reasons, Oracle is not using an index that it should be using. We can't tell until you extract the bad plans (and see if there were previously some good plans in use).



    Regards
    Jonathan Lewis
