      • 15. Re: Query Changing Execution path during Run time
        Martin Preiss

        just for the sake of completeness - here is the link I mentioned before: How does the CBO choose an index when the COST is the same for 2 indexes? But as Jonathan Lewis explained there, the alphabetical order is only a subsequent tie-breaking element after the number of distinct keys.

         

        Like Mohamed, I would suggest using a SQL Plan Baseline to pin the query to the fitting index and plan.
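
        For example, a minimal sketch of loading the known-good plan from the cursor cache and fixing it as a baseline (assuming the plan is still cached; &your_sql_id and &good_plan_hash_value are placeholders to fill in):

        declare
          l_loaded pls_integer;
        begin
          -- load the good plan from the cursor cache and mark it FIXED,
          -- so the optimizer keeps preferring it for this statement
          l_loaded := dbms_spm.load_plans_from_cursor_cache(
                        sql_id          => '&your_sql_id',
                        plan_hash_value => &good_plan_hash_value,
                        fixed           => 'YES');
          dbms_output.put_line(l_loaded || ' plan(s) loaded');
        end;
        /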

        • 16. Re: Query Changing Execution path during Run time
          933257

          Agreed. As I understand it, in my case the CBO is opting for the best possible plan and the stats are representing the data correctly. But the issue happens when some input values come outside the possible date range: the optimizer then thinks it can filter the data faster with another index (starting with MDT) and keeps choosing that one for many subsequent executions without the query being hard parsed. So a hard parse might put the query back on the better execution path for new binds (all those dates within range), but that is not happening, and of course we can't really force a hard parse as it executes ~1 million+ times over a few days.

          • 17. Re: Query Changing Execution path during Run time
            933257

             My thought was that in this type of scenario, adaptive cursor sharing would be triggered and would hard parse the query to bring the plan back to the good one. Not sure if my understanding is correct here.
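
             One way to check whether adaptive cursor sharing has even marked the statement is to look at the bind flags in V$SQL, something like this (the SQL_ID is a placeholder):

             select child_number, plan_hash_value, executions,
                    is_bind_sensitive, is_bind_aware, is_shareable
             from   v$sql
             where  sql_id = '&your_sql_id'
             order  by child_number;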

            • 18. Re: Query Changing Execution path during Run time
              AndrewSayer

              933257 wrote:

               

               Agreed. As I understand it, in my case the CBO is opting for the best possible plan and the stats are representing the data correctly. But the issue happens when some input values come outside the possible date range: the optimizer then thinks it can filter the data faster with another index (starting with MDT) and keeps choosing that one for many subsequent executions without the query being hard parsed. So a hard parse might put the query back on the better execution path for new binds (all those dates within range), but that is not happening, and of course we can't really force a hard parse as it executes ~1 million+ times over a few days.

               You could alternatively remove the risk of the out-of-range issue by removing the low and high values from your stats completely.

               

              Something like

               

              declare
                -- placeholder names: substitute the real table and column
                tabname VARCHAR2(30) := 'TABLE_NAME';
                colname VARCHAR2(30) := 'COLUMN_NAME';

                distcnt NUMBER;
                density NUMBER;
                nullcnt NUMBER;
                srec    dbms_stats.StatRec;
                avgclen NUMBER;
              begin
                -- read the current column statistics
                dbms_stats.get_column_stats (ownname    => NULL
                                            ,tabname    => tabname
                                            ,colname    => colname
                                            ,distcnt    => distcnt
                                            ,density    => density
                                            ,nullcnt    => nullcnt
                                            ,srec       => srec
                                            ,avgclen    => avgclen
                                            );

                -- clear only the low/high values
                srec.minval := null;
                srec.maxval := null;

                -- drop the existing column stats, then write them back without min/max
                dbms_stats.delete_column_stats(ownname  => NULL
                                              ,tabname  => tabname
                                              ,colname  => colname
                                              );
                dbms_stats.set_column_stats (ownname  => NULL
                                            ,tabname  => tabname
                                            ,colname  => colname
                                            ,distcnt  => distcnt
                                            ,density  => density
                                            ,nullcnt  => nullcnt
                                            ,srec     => srec
                                            ,avgclen  => avgclen
                                            );
              end;
              /
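
               A quick sanity check after running it (same placeholder table and column names as above) is that LOW_VALUE and HIGH_VALUE now come back empty:

              select column_name, num_distinct, low_value, high_value, last_analyzed
              from   user_tab_col_statistics
              where  table_name  = 'TABLE_NAME'
              and    column_name = 'COLUMN_NAME';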

              • 19. Re: Query Changing Execution path during Run time
                Jonathan Lewis

                 You could alternatively remove the risk of the out-of-range issue by removing the low and high values from your stats completely.

                 

                 I've always viewed that as a horrid and dangerous hack.  As far as I'm concerned it's okay to overwrite the stats with something that the database could, in principle, have reached by itself, but putting the stats into a state which could never be reached by any legal method is foolhardy in the extreme.  Given the many possible ways in which a column may be used, it's also unlikely that there will be no unexpected side effects.  (And in the simplest range-based cases, where the low/high-value issue seems to appear most frequently, the optimizer uses 5% for the cardinality estimate, which tends to be much too big for the typical use and gets bigger as time passes.)

                 

                Regards

                Jonathan Lewis

                • 20. Re: Query Changing Execution path during Run time
                  933257

                   Agreed Jonathan. The high_value and low_value must be influencing other plans in some way, or may in future be needed for the optimizer to reach a better cardinality estimate (and thereby a better execution path) for a different query using this table/column, so making those NULL may impact us negatively.

                   

                   As I mentioned before, this SQL is actually generated by GoldenGate when, on the source side, a DELETE is executed on the table (like DELETE FROM tab WHERE mdt > sysdate-200) which has no primary key. So on the target side GoldenGate converts that SQL as below, and that is what is giving us the performance issue.

                   DELETE FROM tab WHERE col1=:b1 AND col2=:b2 AND col3=:b3 AND … AND rownum=1 (… for all columns)

                   

                   So in the above scenario, having a profile with the good plan attached to the SQL at the target database should work for us. But I am worried that if the source SQL/DELETE changes in any way, the target SQL generated by GoldenGate may change again and the profile won't match any more. So I am wondering how to control this, and whether there is any other way we should follow to fix this issue?

                  • 21. Re: Query Changing Execution path during Run time
                    AndrewSayer

                    933257 wrote:

                     

                     Agreed Jonathan. The high_value and low_value must be influencing other plans in some way, or may in future be needed for the optimizer to reach a better cardinality estimate (and thereby a better execution path) for a different query using this table/column, so making those NULL may impact us negatively.

                     

                     As I mentioned before, this SQL is actually generated by GoldenGate when, on the source side, a DELETE is executed on the table (like DELETE FROM tab WHERE mdt > sysdate-200) which has no primary key. So on the target side GoldenGate converts that SQL as below, and that is what is giving us the performance issue.

                     DELETE FROM tab WHERE col1=:b1 AND col2=:b2 AND col3=:b3 AND … AND rownum=1 (… for all columns)

                     

                     So in the above scenario, having a profile with the good plan attached to the SQL at the target database should work for us. But I am worried that if the source SQL/DELETE changes in any way, the target SQL generated by GoldenGate may change again and the profile won't match any more. So I am wondering how to control this, and whether there is any other way we should follow to fix this issue?

                     Yeah, it is a hacky, risky suggestion, and I should have thought more about other consequences.

                     

                    What would be the impact of giving the source table a primary key?

                    Or..

                    If deletes from the source table are only done in a controlled fashion, could you just ignore the deletes (configuring somehow with goldengate) and then schedule the same on the target side?
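
                     If skipping them is viable, a rough Replicat parameter-file sketch could look like this (schema and table names are placeholders, and it assumes ignoring deletes is acceptable for every MAP between the two toggles):

                     -- excerpt from the target Replicat parameter file (sketch only)
                     IGNOREDELETES
                     MAP src_schema.tab, TARGET tgt_schema.tab;
                     GETDELETES
                     -- the skipped deletes would then be re-applied on the target by a scheduled job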
