15 Replies · Latest reply on Nov 30, 2011 10:18 PM by Randolf Geist
      • 15. Re: improve sql statement - decrease executing time
        Randolf Geist
        marco wrote:
        I've got no tkprof, I use oraSRP instead. Please find my session resource profile using link below, hope it helps.


        As it exceeds the maximum length of 30000 characters, I can't post it as plain text here.
        Please, be so kind, help me improve performance.
        Unfortunately the information you've provided is inconsistent:

        - The execution plan posted doesn't correspond to the SQL posted
        - The trace file covers several executions of similar but different statements (you would need to pass a different option to OraSRP to have them shown as separate statements)
        - None of the trace data seems to cover an execution that took more than 3,000 seconds (the one I saw took 11 seconds and did no physical I/O at all)

        Therefore it's hard to provide reasonable help.

        Ideally you should follow the provided links and read them through carefully. The most useful information you could post is a runtime profile of an execution that really takes that long: use at least the GATHER_PLAN_STATISTICS hint, or set STATISTICS_LEVEL to ALL in the session where you run the statement.

        Then call DBMS_XPLAN.DISPLAY_CURSOR with the "ALLSTATS LAST" formatting option and post the result here. If you need further instructions or advice on how to do that, read the links provided. It's all there.
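        As a sketch (the table name and predicate below are placeholders, not your actual query):

        ```sql
        -- Enable rowsource statistics for the whole session ...
        ALTER SESSION SET STATISTICS_LEVEL = ALL;

        -- ... or just for this one statement via the hint
        SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
        FROM some_table
        WHERE arcdate BETWEEN DATE '2011-01-01' AND DATE '2011-01-31';

        -- Immediately afterwards, in the same session:
        SELECT *
        FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
        ```

        The "ALLSTATS LAST" format shows the actual rows (A-Rows) and actual time per plan operation next to the estimates (E-Rows), which is exactly what's needed to see where the estimates go wrong.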

        If you happen to have an Enterprise Edition license plus the Diagnostics and Tuning Pack licenses, then you could simply get a Real-Time SQL Monitoring report, no need for further separate runtime profiling. Provided that the report is still available in the Shared Pool, you can generate it simply via DBMS_SQLTUNE.REPORT_SQL_MONITOR. Ideally you should use the "ACTIVE" report type, which contains the most detailed level of information.
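        Something along these lines in SQL*Plus (the SQL_ID is a placeholder; look yours up in V$SQL or V$SQL_MONITOR):

        ```sql
        SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 TRIMSPOOL ON
        SPOOL sql_monitor_report.html

        SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
                 sql_id => '0abc123def456',
                 type   => 'ACTIVE') AS report
        FROM dual;

        SPOOL OFF
        ```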

        You can then provide the generated HTML file - it's self-contained and can simply be shared.

        Another good thing about these methods is that you don't depend on your DBA to provide trace files or the like - provided you have the privileges on the corresponding V$ views, you can generate all these reports yourself without any further assistance.

        From looking at the information provided so far, Charles has already made some good comments.

        In particular the optimizer is certainly misled by predicates like "BETWEEN (SELECT arcdate FROM datbeg) AND (SELECT arcdate FROM datend)". If there really is a need to provide those filter predicates in such a way, then you should at least join the "datbeg" and "datend" views to the main query and use a predicate like "BETWEEN datbeg.arcdate AND datend.arcdate" instead. That way the optimizer will be able to pick up the date constants used in those "datbeg" and "datend" views. Of course, using straightforward date literals in those BETWEEN clauses would be the best option.
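        To illustrate the rewrite (the main table and column list are placeholders; "datbeg" and "datend" are the views from your query):

        ```sql
        -- Before: scalar subqueries hide the date values from the optimizer
        SELECT m.*
        FROM main_table m
        WHERE m.arcdate BETWEEN (SELECT arcdate FROM datbeg)
                            AND (SELECT arcdate FROM datend);

        -- After: join the single-row views so the optimizer sees the dates
        SELECT m.*
        FROM main_table m, datbeg, datend
        WHERE m.arcdate BETWEEN datbeg.arcdate AND datend.arcdate;
        ```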

        There are three EXISTS clauses in the SQL that corresponds to the execution plan posted (and that is shown in the trace file), but only two of them are unnested by the optimizer. The third one runs as an early filter, and that is the one that potentially causes you trouble. Let's see if more accurate cardinality estimates allow the optimizer to unnest the third one as well - if not, you'll have to find out whether there is a reason why the optimizer can't or doesn't want to unnest it.
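        One quick experiment (sketched here with placeholder tables, not your actual query) is to force the transformation with the UNNEST hint inside the subquery and check whether the plan changes:

        ```sql
        SELECT t1.*
        FROM t1
        WHERE EXISTS (SELECT /*+ UNNEST */ NULL
                      FROM t2
                      WHERE t2.id = t1.id);
        ```

        If the hint is ignored, the optimizer considers the subquery non-unnestable in that form, which is worth investigating separately.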

        But ideally this discussion should be based on a report that shows where the 3000 seconds are spent.

        Hope this helps,