You could start with a look at V$SES_OPTIMIZER_ENV for the session, to determine whether there are special optimizer settings in the session used by app1.exe on test. I would also take a look at the cost of the plan executed by app1.exe and compare it with the cost of the sqlplus plan (or whatever tool you used to generate the plan on test). That would give you an idea whether there is only a small difference in costing that comes into play with small statistics changes, or whether the costing is completely different. Perhaps you could also add both plans here.
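A minimal sketch of such a check, assuming you can identify the SID of the app1.exe session (for example from V$SESSION); the bind variable is a placeholder:

```sql
-- List optimizer parameters that have been changed from their defaults
-- in the app1.exe session. Any rows returned are session-level overrides.
SELECT name, value, isdefault
FROM   v$ses_optimizer_env
WHERE  sid = :app1_sid          -- placeholder: SID of the app1.exe session
AND    isdefault = 'NO'
ORDER BY name;
```

If this returns rows on test but not on prod (or different values), that alone can explain a plan difference.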
I doubt that your query is really on the EMP table. However, I assume that it does use a Bind Variable --- is it only a single Bind Variable, or multiple Binds?
Bind Peeking and Adaptive Cursor Sharing do play a role in determining and changing execution plans in 11g. (You should preferably be on a recent 11.2 patch set rather than the base release.)
When you do an Explain Plan, the Binds are not peeked. So the Explain Plan output may not be the real runtime Execution Plan. You should learn to use DBMS_XPLAN.DISPLAY_CURSOR to see the runtime Execution Plan.
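For example, a sketch of the DBMS_XPLAN.DISPLAY_CURSOR usage (the sql_id is a placeholder you would take from V$SQL or V$SESSION):

```sql
-- Show the actual runtime plan of the last statement run in this session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Or for a specific cursor, identified by sql_id and child number,
-- including the bind values that were peeked at parse time:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', 0, 'TYPICAL +PEEKED_BINDS'));
```

Comparing the peeked binds on test and prod often explains why the "same" statement gets different plans.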
Hemant K Chitale
One of the reasons could be a data type conversion. Does the table have the same data type on the "empid" column in both environments? For example, if the column data type is VARCHAR2 but it contains numbers, and you write SQL like this:
select * from emp where empid /* actually varchar2 column */ = 100;
it could cause a full table scan, because the implicit conversion on the column prevents the use of an index on empid.
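To illustrate the point above (a sketch; the table and column names follow the example in this thread): with a VARCHAR2 empid, Oracle silently rewrites the numeric comparison as TO_NUMBER(empid) = 100, which disables a normal index on empid. Matching the literal type avoids this:

```sql
-- empid is VARCHAR2: Oracle rewrites this predicate as
-- TO_NUMBER(empid) = 100, so an index on empid cannot be used:
SELECT * FROM emp WHERE empid = 100;

-- Comparing against a string literal keeps the index usable:
SELECT * FROM emp WHERE empid = '100';
```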
Is there data skew on the predicated column? Do you have column histograms in either environment? Is the index valid in test? Is the table partitioned?
Database vault can cause plan shifts due to additional predicates being added, but you would see that in your plan.
Please post the execution plans actually in use (on 11g, I'm guessing there is probably more than one), along with the relevant statistics from the table.
Your post is slightly confusing: you say you have the same execution plan in test and prod, and then tell us that prod is using an indexed lookup while test is using a full table scan.
I thank you all...
My boss asked the Oracle consultants to investigate the issue, and the consultants said that gather_schema_stats was not run at that time.
Actually I did not put gather_schema_stats on the checklist because it is part of the app1.exe process to run it after every data load (once a day).
I presumed that the stats were updated. Maybe some process was missed and the stats were not gathered.
BUT anyway... during the time when the batch app1.exe was encountering the full table scan issue on EMP, I issued > "analyze table EMP compute statistics;"
but to no avail. Is this command valid? Or maybe it did not work at all? Could it have helped the issue?
Generating stats after every load from a client application is not a very effective method of stats management...we can probably have a chat about more effective ways to achieve plan stability if you like. ANALYZE TABLE is old school, depending on what version you are on. Which version are you on?
If you use Oracle 10g or above, use the DBMS_STATS.GATHER_TABLE_STATS procedure to gather statistics.
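A sketch of a call for a single table (the schema and table names are examples, and the options shown are common choices, not the only valid ones):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',                        -- example schema
    tabname          => 'EMP',                          -- example table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,    -- let Oracle pick the sample
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',    -- histograms where useful
    cascade          => TRUE);                          -- also gather index stats
END;
/
```

This is far cheaper than GATHER_DATABASE_STATS when only one table's data has changed.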
Thanks dear....I am using 11g ...so my command is invalid?
I have now run > EXEC DBMS_STATS.gather_database_stats(estimate_percent => 100, cascade => TRUE, degree=>2);
And the performance seems OK, but the above command took very long to complete because of the 100% sample.
Is it acceptable if I change this to 20% or 10%? What is the minimum acceptable percentage?
In many cases AUTO_SAMPLE_SIZE should be almost as good as 100%: http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/. One of the cases where AUTO_SAMPLE_SIZE is not that good is histogram creation.
Wow, GATHER_DATABASE_STATS is a really big operation; do not run it during the day or while the system is loaded. If the command has completed, then you do not need to worry anymore. Oracle will collect statistics regularly (if the maintenance tasks are not disabled), so you do not have to run this all the time; Oracle gathers statistics when a table has changed too much. For now this is enough.
Also, your "analyze" command is not invalid. It even gathers some information that DBMS_STATS does not, but DBMS_STATS gathers much more statistics information, so use DBMS_STATS.
I thank you all...
I want to prove that 10% gathered stats are as good as 100%; I have just completed the 100% run. If I run it again with the 10% parameter, WILL IT OVERRIDE the 100%,
so that I can validate my test?
By the way, I also want to test auto_sample_size. If I run this command:
Will it overwrite the existing 100%?
the clause for the auto_sample_size would be: estimate_percent => dbms_stats.auto_sample_size. But that's also the default - if you have not changed it.
The new statistics gathering will overwrite the previous results. To check the differences between different gatherings you could use the diff_table_stats% procedures in dbms_stats: https://blogs.oracle.com/optimizer/entry/how_do_i_compare_statistics.
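A sketch of one of those procedures, comparing current statistics with those from a day earlier via the statistics history (schema and table names are examples):

```sql
-- Report statistics differences for EMP between now and 24 hours ago.
-- DIFF_TABLE_STATS_IN_HISTORY is a pipelined function; the REPORT column
-- contains a textual diff, MAXDIFFPCT the largest percentage difference.
SELECT report, maxdiffpct
FROM   TABLE(DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY(
                 ownname => 'SCOTT',                         -- example schema
                 tabname => 'EMP',                           -- example table
                 time1   => SYSTIMESTAMP,
                 time2   => SYSTIMESTAMP - INTERVAL '1' DAY));
```

This lets you quantify how different the 10% and 100% gatherings really are before deciding which to standardize on.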
But of course you should do this kind of tests not in a prod system.
Under 11g you can use "pending stats". Using this method, the newly collected stats will not overwrite the live stats or invalidate any execution plans.
dbms_stats.set_table_prefs('schema name','table name','PUBLISH','false')
To test the new stats you can enable them in a single session:
alter session set optimizer_use_pending_statistics = TRUE
after running your performance benchmark tests to check for adequate performance, you can then publish the gathered stats:
dbms_stats.publish_pending_stats('schema name','table name')
and set your stat mode back to normal:
dbms_stats.set_table_prefs('schema name','table name','PUBLISH','true')
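Putting the steps above together, a sketch of the full pending-stats cycle (schema and table names are examples; note that you still gather the stats between setting PUBLISH to false and testing):

```sql
-- 1. Stop newly gathered stats from going live for this table:
EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'PUBLISH', 'FALSE');

-- 2. Gather; the results go to pending storage, not the live dictionary:
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');

-- 3. Try the pending stats in this session only:
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- ... run your benchmark queries here ...

-- 4a. Happy with the plans? Publish the pending stats:
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SCOTT', 'EMP');
-- 4b. Not happy? Discard them instead:
-- EXEC DBMS_STATS.DELETE_PENDING_STATS('SCOTT', 'EMP');

-- 5. Restore normal publishing behaviour for the table:
EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'PUBLISH', 'TRUE');
```

The live stats and existing cursors stay untouched until step 4a, which is what makes this safe to run against a production system.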