This discussion is archived
41 Replies. Latest reply: Oct 18, 2012 1:39 PM by 439902
  • 30. Re: Hard Parse Time - too long...
    Mark Malakanov (user11181920) Expert
    SQL was formatted by me for ease of reading.
    you are so nice! :)
We ARE using bind variables in the application. I just plugged in the real values for readability/data-type info. Otherwise, every literal that the app sends to the database goes in as a bind variable. We do not embed literals in the SQL itself.
    CURSOR_SHARING is currently set to EXACT.
    that is correct
    So, parse times are still there, regardless of dynamic sampling.
    Well, you cannot completely avoid parsing when you execute a different SQL statement each time.
    The CBO still has to parse the statement, look up the tables and indexes, their stats, column histograms, access rights ...
    You may reduce the CBO's effort by using hints like LEADING, ORDERED etc.
    But for this your app should be smart enough to generate the appropriate SQL.
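    For illustration only (the table and column names below are made up, not from this app), a LEADING hint pins the start of the join order, so the CBO has far fewer join permutations to cost:

    ```sql
    -- Hypothetical example: fix the leading tables of the join order so the
    -- optimizer does not have to cost every permutation of a many-table join.
    SELECT /*+ LEADING(o l) USE_NL(l) */
           o.order_id, l.line_no, l.amount
    FROM   orders o
    JOIN   order_lines l ON l.order_id = o.order_id
    WHERE  o.customer_id = :cust_id;
    ```

    The app would have to know enough about the data to pick a sensible leading table, of course - a bad LEADING hint locks in a bad plan.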
  • 31. Re: Hard Parse Time - too long...
    jgarry Guru
    How many tables do you have? Are you dynamically creating and dropping tables? How about synonym and privilege generation - is garbage from long-gone objects hanging around?

    Going back to basics: The parser has to do a semantic check to be sure the objects exist. So my wild speculation is, tab$ is full of a lot of empty space, or some latching is preventing quick access to it. This could be worsened by stats on the system confused by lots of tables coming and going. Lots of synonyms or related privilege checking could also slow down access.

    This kind of thing was a problem decades ago, perhaps your strange app has brought it back.
    select count(*) from tab$;
    select count(*) from dba_synonyms;
    Does either of those take a long time or look wacky?
  • 32. Re: Hard Parse Time - too long...
    439902 Newbie
    Core application schema is fixed. We do not create any tables on the fly - all of the querying is performed against a well-known object set, which consists of 74 tables and approximately 160 indexes. No stored procedures / packages are used.
    select count(*) from tab$; 
    -- 3511, 11 ms to return
    
    select count(*) from dba_synonyms;
    -- 28112, 901 ms to return
    App is connecting as the schema owner (in Oracle).

    Edited by: al**** on Oct 16, 2012 12:02 PM
  • 33. Re: Hard Parse Time - too long...
    Jonathan Lewis Oracle ACE Director
    al**** wrote:
    I turned off dynamic sampling (set to 0), and optimizer did not do the LIKE portion of the query 5 times as before.
    Instead, the full statement itself produced the following:
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1    120.88     121.28          0          0          0           0
    Execute      1      0.02     133.13          0          5          0           0
    Fetch        2      0.04       3.98          0        206          0           1
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4    120.96     258.40          0        211          0           1
    So, parse times are still there, regardless of dynamic sampling.
    I don't often see parse times going over a few seconds, so I am sufficiently curious that, if you want to send me the original trace file with the 10053 trace enabled, I will take a look at it. It's about 4.5 million lines long, but if you zip it it will probably be relatively small. (Use a common zip program - there are so many around that I can't handle all of them.)

    My email address is jonathan@jlcomp.demon.co.uk

    (The trace - above - from the second machine is very odd, too. How do you spend 133 seconds of elapsed time in execution when you use virtually no CPU and do no disk I/O? But don't send me that trace file.)

    Regards
    Jonathan Lewis

    Edited by: Jonathan Lewis on Oct 16, 2012 8:12 PM
  • 34. Re: Hard Parse Time - too long...
    Mark Malakanov (user11181920) Expert
    The columns you compare with HEXTORAW, like val_000002.fieldfk - what data type are they: RAW, LONG RAW or BLOB?
    Do you use HEXTORAW in the application, or bind variables of the corresponding type?
  • 35. Re: Hard Parse Time - too long...
    439902 Newbie
    Bind variables are of the correct type - as provided by the app.

    In the database, these are defined as RAW(16).
  • 36. Re: Hard Parse Time - too long...
    871018 Explorer
    What OS?
  • 37. Re: Hard Parse Time - too long...
    439902 Newbie
    Linux, running on 64 bit

    Though I believe the same is true on Windows 7 x64.
  • 38. Re: Hard Parse Time - too long...
    Jonathan Lewis Oracle ACE Director
    I've spent an hour or so looking at the trace file you sent me. The total size was about 200MB, and the elapsed time (rounded) was about 65 seconds of which about 20 seconds were spent in the recursive dynamic sample query which executed 5 times, and 45 seconds was the rest of the optimisation phase.

    The problem really is in the optimisation, and the most dramatic bit is related to predicate pushing. The optimizer processed the statement five times (which is why we saw the dynamic sampling appear 5 times). The number of join orders (of a 14 table join) examined on each pass were (shown by grep)
    Number of join permutations tried: 374
    Number of join permutations tried: 673
    Number of join permutations tried: 674
    Number of join permutations tried: 571
    Number of join permutations tried: 581
    Every join order was examined AT LEAST twice due to recosting (shown by grep)
        452 ****** Recost for ORDER BY (using index) ************
       2896 ****** Recost for ORDER BY (using join row order) *******
    The killer was then
    JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
    JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
    JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
    JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
    The last iteration ran the join permutation code twice (the 571, and 581).
    Checking the number of times that "Now joining" appeared, it looked as if a very large fraction of the join orders had been examined almost to the last table.

    Basically the optimizer did a lot of work trying to find a good path.
    Unfortunately the first path costed (with no predicate pushing) came out with a massive cost, and that left the optimizer room to spend a huge amount of time searching for a better path.

    I would pass this one to Oracle - but a key point is that the repeated efforts on JPPD (join predicate pushdown) accounted for a huge escalation in time. As I said earlier, if you can make this happen unconditionally you might get a much smaller optimisation time. A possible check would be to disable cost-based push down, as this might result
    in predicate pushing taking place on the first pass automatically. For test purposes only, try setting _optimizer_push_pred_cost_based = false
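    A session-level sketch of that test (hidden parameters need the double quotes; this is for test systems only, not a production fix):

    ```sql
    -- For testing only: disable cost-based join predicate pushdown so
    -- pushing (where legal) happens heuristically on the first pass.
    ALTER SESSION SET "_optimizer_push_pred_cost_based" = FALSE;

    -- ... run the problem query and compare the parse time ...

    ALTER SESSION SET "_optimizer_push_pred_cost_based" = TRUE;
    ```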

    Regards
    Jonathan Lewis
  • 39. Re: Hard Parse Time - too long...
    439902 Newbie
    Jonathan,

    THANK YOU for going through the logs.
    As you suggested, I proceeded with running the query with _optimizer_push_pred_cost_based = false.

    It resulted in the following stats. Parsing took no time at all. Execution, on the other hand, was terrible. I killed it after it ran for 3 hours :)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        4      0.00       0.00          0          0          0           0
    Execute      5      6.82       6.84          0          7          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        9      6.83       6.85          0          7          0           1
    
    Misses in library cache during parse: 2
    Misses in library cache during execute: 2
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       3        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      db file sequential read                   47594140        0.92       8036.18
      asynch descriptor resize                        3        0.00          0.00
    
    
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        9      0.00       0.00          0          0          0           0
    Execute      9      0.01       0.01          0          0          0           0
    Fetch        8      0.09       0.31        193        476          0           7
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       26      0.11       0.33        193        476          0           7
    
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
  • 40. Re: Hard Parse Time - too long...
    Jonathan Lewis Oracle ACE Director
    al**** wrote:
    Jonathan,

    As you suggested, I proceeded with running the query with _optimizer_push_pred_cost_based = false.

    It resulted in the following stats. Parsing took no time at all. Execution, on the other hand, was terrible. I killed it after it ran for 3 hours :)
    Pity, I had hoped that the predicate pushing would occur by default. Have you tried putting the /*+ push_pred */ hint into the three inline views where you can see the "VIEW PUSHED PREDICATE" operations in the plan?

    You know you can use the SQL monitor code (if you're licensed) to see where the time is going as the query runs. Alternatively, if you run the query with statistics_level set to all (or the hint /*+ gather_plan_statistics */) then you don't have to wait for the query to complete before you execute a call to dbms_xplan.display_cursor() with the 'allstats last' option to see where the work went. This will also show you the actual plan, so it would be interesting to see if any predicate pushing had occurred.
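    A sketch of that check (the SELECT is a stand-in for the real query):

    ```sql
    -- Collect rowsource execution statistics for this run:
    SELECT /*+ gather_plan_statistics */ ...
    -- (the problem query goes here)

    -- Then, in the same session - after the query finishes, or after you
    -- cancel it - report actual rows and time for each plan operation:
    SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
    ```

    The E-Rows vs A-Rows comparison in that output is usually the quickest way to see which step the optimizer got wrong.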

    There are other hidden parameters relating to pushing predicates that might be relevant and worth fiddling with.

    The other interesting thing about the 10053 is that it shows your MBRC (from system statistics) is set to 1 - this is a little unusual, and may explain why Oracle keeps finding expensive plans, and therefore keeps looking for cheaper ones. An MBRC of 16 might be more realistic. How did your system stats get set?

    Have you set the dynamic sampling level back to 4, by the way? From your previous tests it looked as if Oracle needed to sample one of the tables to make sure it got the right combined stats. Bear in mind that for multiple equality predicates on a single table you can add extended stats - the "column group" type - to help the optimizer calculate better cardinalities. http://structureddata.org/2007/10/31/oracle-11g-extended-statistics/
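    A minimal sketch of a column-group extended statistic (the table and column names here are illustrative, not taken from the thread):

    ```sql
    -- Create a column group so the optimizer can estimate the combined
    -- selectivity of the two predicates instead of multiplying the
    -- individual selectivities together.
    SELECT dbms_stats.create_extended_stats(
             ownname   => user,
             tabname   => 'VAL_TABLE',          -- hypothetical table
             extension => '(FIELDFK, RECFK)'    -- hypothetical column pair
           )
    FROM   dual;

    -- Regather table stats so the new column group gets its statistics.
    EXEC dbms_stats.gather_table_stats(user, 'VAL_TABLE', method_opt => 'FOR ALL COLUMNS SIZE AUTO');
    ```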

    Regards
    Jonathan Lewis
  • 41. Re: Hard Parse Time - too long...
    439902 Newbie
    Jonathan,

    To answer your questions:
    1. Dynamic sampling is back to 4
    2. I tried the push predicate hint and that did not affect parse times much
    3. MBRC was probably set when I ran dbms_stats.gather_system_stats with a workload. I have since deleted it and reset the stats to default (noworkload).
    4. I experimented with _spin_count, setting it to 6000 (empirically), in the hope that it might help with latching.

    Interestingly enough, the same (long parse) behavior is seen on another Linux machine that had no tweaks of this sort and on a Windows machine.

    I cleaned up all of the params, so now these are the only ones that are non-default:
    name     value
    _spin_count     6000
    audit_file_dest     <some value>
    audit_trail     DB
    compatible     11.2.0.0.0
    control_files     <some value>
    core_dump_dest     <some value>
    db_block_size     8192
    db_name     <some value>
    db_recovery_file_dest     <some value>
    db_recovery_file_dest_size      1,825,361,100,800 
    db_writer_processes     4
    diagnostic_dest     /u01/app/oracle
    dispatchers     (PROTOCOL=TCP) (SERVICE=<some value>)
    fast_start_parallel_rollback     FALSE
    filesystemio_options     SETALL
    job_queue_processes     1000
    log_buffer     11665408
    open_cursors     1200
    optimizer_dynamic_sampling     4
    parallel_degree_limit     CPU
    parallel_degree_policy     LIMITED
    parallel_max_servers     264
    parallel_servers_target     128
    pga_aggregate_target     41457549312
    processes     500
    remote_login_passwordfile     EXCLUSIVE
    result_cache_max_size     32768
    session_cached_cursors     50
    sessions     784
    undo_tablespace     UNDOTBS1
    user_dump_dest     <some value>
    This way, there are no hidden ones/unknowns...

    I can manually set MBRC to see if it makes a dent.
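    A sketch of setting MBRC by hand (the value 16 is just the figure suggested above, not a recommendation for this system):

    ```sql
    -- Set a more realistic multiblock read count in the system statistics.
    EXEC dbms_stats.set_system_stats('MBRC', 16);

    -- Verify what the optimizer will now see:
    SELECT pname, pval1
    FROM   sys.aux_stats$
    WHERE  sname = 'SYSSTATS_MAIN';
    ```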

    Thank you!

    --Alex