It's really hard for me to comment just from the stats. By the way, where did you get the stats: tkprof, Statspack, or AWR? I would also request you to post your package on the board.
It's from AWR, and the package is huge, so it's not possible to post here (privacy reasons).
With just general stats and no code, it's basically impossible to offer any suggestions.
If the bad performance happened when you promoted new code to prod, that would seem to imply that the new code is at fault. That would mean that you'd have to tune that code. Which probably starts with tracing a session, analyzing the trace, and finding out where the code is spending its time.
Of course, this would also imply that you need to assess the performance testing that goes into your code promotion process to make sure that poorly performing code doesn't get promoted in the future.
Just post the output of the following queries:
sql > SELECT disk_reads, executions, disk_reads/executions, address, sql_text FROM v$sqlarea WHERE executions > 0 AND disk_reads > 5000 ORDER BY disk_reads;
sql > SELECT buffer_gets, executions, buffer_gets/executions, address, sql_text FROM v$sqlarea WHERE executions > 0 AND buffer_gets > 10000 ORDER BY buffer_gets;
(The executions > 0 filter avoids a divide-by-zero error for statements that have not yet completed an execution.)
What is in the PL/SQL package?
Thanks Justin. The code is being discussed with the development team now; we are finding ways to tune it.
Maran, the package is used for ETL tasks: lots of table reads, writes and data manipulation.
Thanks all, I'll update soon on this matter.
I want to know a general thing about stored procedures. When a stored procedure is executed, lots of things happen (several selects, inserts, updates, calls to other SPs, data processing, etc.). Is there any method (e.g. stats collection) by which I can find, out of the total execution time, where in the code the stored procedure is spending most of its time?
Database - Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
OS- Win XP Pro SP2
tool- SQL Developer, TOAD
Hi all,
dbms_monitor.session_trace_enable() will give you a trace file that you can run through tkprof, which will detail which statement(s) are responsible for most of the time. Sprinkling your code with calls to dbms_application_info to set meaningful application information will allow this information to show up interactively in v$session. Both are documented in the Performance Tuning Guide.
sql > exec dbms_monitor.session_trace_enable(waits => true, binds => false);
sql > -- call your proc here
sql > exec dbms_monitor.session_trace_disable;
The trace file will be in user_dump_dest; run tkprof on it.
Thanks Justin and Niall. Let me try the solutions you provided. I'll get back on this.
Use PL/SQL profiler from TOAD first.
Click on the column heading to sort on time-spent and number-of-executions (or something like that) and it will clearly identify where your priorities are.
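If you don't have TOAD handy, the same per-line timing can be gathered from the command line with DBMS_PROFILER. A rough sketch (this assumes the profiler tables have already been created with $ORACLE_HOME/rdbms/admin/proftab.sql):

```sql
-- Start a profiler run, execute the code, then stop the run.
DECLARE
  l_run_id BINARY_INTEGER;
BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'ETL package run',
                               run_number  => l_run_id);
  -- call your procedure here
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Time spent per line of PL/SQL, worst first, for the most recent run.
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d ON d.runid = u.runid
                             AND d.unit_number = u.unit_number
WHERE  u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
ORDER  BY d.total_time DESC;
```

TOAD's profiler window is essentially a front end over these same plsql_profiler_* tables.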
Investigating the package code showed that there is a select query which takes a long time to execute (around 2 hrs.), fetching around a thousand records out of millions. The query joins 3 tables A, B and C. All tables have composite primary keys (table A: 5 cols, B: 5 cols, C: 3 cols). A and B are joined on 4 cols, and A and C are joined on 1 col. There is no additional index on any table.
Now, the explain plan shows a full table scan for this query, which, I think, is the cause of the high execution time.
Please throw some light on how an index can be utilized, or some other way to speed this up?
The performance killer is usually I/O. I/O is slow and thus very expensive, performance-wise.
Your goal in creating optimal and scalable PL/SQL code is to only do the minimal amount of I/O needed. Hit a data set once only - do not make multiple passes through a data set. Maximise SQL. Minimise PL/SQL.
There are sometimes design mistakes made in ETL-type code by duplicating what Oracle already does very well. For example, using a so-called PL/SQL "table" (array) to cache data, or a GTT (global temp table) to cache data. Oracle already has a sophisticated db buffer cache that is extremely capable and very scalable. If you manually "cache" data in your code, make sure that your assumptions are correct and that this is indeed better.
Another mistake I often see is not using bulk processing. Reducing the context switching between PL/SQL and SQL cuts cursor-loop-fetch overheads and can increase such a loop's performance at least threefold.
The bottom line is that you need to think "data sets!" when designing and coding using Oracle. Deal with sets of data. Let the SQL engine do all the hard work. Do not go row-by-row processing in PL/SQL as that is slow-by-slow.
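To illustrate the bulk-processing point above, here is a minimal sketch of replacing a row-by-row cursor loop with BULK COLLECT and FORALL. The table names (stage_rows, target_rows) are made up for illustration:

```sql
DECLARE
  CURSOR c_src IS SELECT id, amount FROM stage_rows;
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- One round trip fetches up to 1000 rows instead of one.
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    -- One context switch inserts the whole batch.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_rows VALUES l_rows(i);
  END LOOP;
  CLOSE c_src;
END;
/
```

And per the "maximise SQL, minimise PL/SQL" advice, when no row-by-row logic is needed at all, a single INSERT INTO target_rows SELECT ... FROM stage_rows beats even the bulk loop, because the SQL engine does all the work in one pass.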
So, what you are actually saying is that you are joining a many-to-many relation instead of a one-to-many.
I have also seen this problem several times. (Sometimes it is necessary to redesign and use an intersection table.)
Anyway, maybe there are already some things you can do to minimize the problem:
- Always make sure that at least the primary keys and foreign keys are indexed.
- Because your application is not joining on the primary keys / foreign keys, you may also need to create an index over the fields that are joined together. (Which Oracle version are you using?)
Meaning create an index on these columns/tables:
A and B are joined using 4 cols:
Create index index1 on A (FieldA1, FieldA2, FieldA3, FieldA4);
Create index index2 on B (FieldB1, FieldB2, FieldB3, FieldB4);
and A and C are joined using 1 col.
Create index index3 on A (FieldA5);
Create index index4 on C (FieldC1);
One more thing: it might pay off to gather statistics and histograms on the tables (for the indexed columns in this case).
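As a sketch of that statistics-gathering step, here is one way to do it with DBMS_STATS. The schema name is a placeholder, and the table names follow the example above (A, B, C); repeat for each table:

```sql
-- Gather table and column statistics, with histograms where the data
-- warrants them on the indexed (join) columns, plus index statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'YOUR_SCHEMA',
    tabname          => 'A',
    cascade          => TRUE,    -- also gather stats on the table's indexes
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```

With fresh statistics in place, re-check the explain plan; the optimizer may pick the new indexes only once it knows the tables' real cardinalities.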
With kind regards