"It depends" is the answer, and here is why.
I had a problem with high sequential reads. In my explain plan it was using indexes. After further investigation, the bind variables that I thought were fine were causing a bad execution plan to be stored. When Oracle sees the statement the first time, there is a bind. But it peeks at the bind variable and says, "based on stats and this variable, I should use an index here." It then stores that execution plan and reuses it from then on. If the next values would call for "I should full table scan," Oracle will not know that, since it will reuse the same execution plan. My solution was not to use binds and to set cursor_sharing to SIMILAR. This is one of the problems with bind variable peeking; I'd Google it for more information.
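A minimal sketch of the workaround described above. This assumes you have ALTER SYSTEM privileges; note that the SIMILAR setting was deprecated in later releases (11g's adaptive cursor sharing largely addresses the same peeking problem), so test at session level first.

```sql
-- Try it per session first, to test before changing the whole instance:
ALTER SESSION SET cursor_sharing = SIMILAR;

-- If it helps, apply it instance-wide:
ALTER SYSTEM SET cursor_sharing = SIMILAR;
```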
However, this may not be your issue, I don't know. Update your statistics (exec dbms_stats.gather_database_stats;), and then rerun to see if it runs faster. You could have stale stats. That is usually what I will do to see if it resolves something. I will then compare explain plans before and after on high-consuming SQL statements to see what changed. Maybe I have a bad histogram or something? Maybe I need to delete stats on a table to trigger dynamic sampling, which will cause Oracle to take a small increment of time (depending on my degree of dynamic sampling) and sort of estimate what the stats should be on the table.
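The DBMS_STATS calls for the steps above, as a hedged sketch; gathering for a single schema or table is usually less disruptive than the whole database, and the SCOTT/EMP names are placeholders for your own objects.

```sql
-- Whole database, as suggested above (can take a long time):
EXEC DBMS_STATS.GATHER_DATABASE_STATS;

-- Narrower alternatives (SCOTT and EMP are placeholder names):
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');

-- To force dynamic sampling on one table, delete its stats and lock them
-- so a scheduled job doesn't regather:
EXEC DBMS_STATS.DELETE_TABLE_STATS('SCOTT', 'EMP');
EXEC DBMS_STATS.LOCK_TABLE_STATS('SCOTT', 'EMP');
```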
If you want to put your AWR in text mode and paste some of it here, that might identify the SQL statements. If you can post an explain plan of that statement and the stats on the underlying objects, we might be able to help. Maybe do a select count(*) on each of the underlying tables to see what might be going on.
One way to really identify the issue is to do a level 12 SQL trace (which captures binds) and then use a tool to analyze it: Oracle Trace Analyzer (from Metalink; it takes a while to run), tkprof (not as detailed), or the Hotsos profiler (the best analyzer out there; you can get a free trial from them, and it runs much faster than Oracle Trace Analyzer). These let you look at your binds and waits to find out what is happening and where your time is going. Just a thought.
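One common way to produce the level 12 (waits + binds) trace mentioned above, for your own session; tracing somebody else's session would need DBMS_MONITOR or similar, and the tracefile identifier is just an arbitrary tag to help you find the file.

```sql
-- Tag the trace file so it's easy to find in user_dump_dest:
ALTER SESSION SET tracefile_identifier = 'slow_query';

-- Turn on 10046 tracing at level 12 (waits and bind values):
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the slow statement here ...

-- Turn tracing back off:
ALTER SESSION SET events '10046 trace name context off';
```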
The problem with AWR is that it only shows you summaries of your system-wide stats. If you can get a SQL trace of the process that is running slow, you can find out EXACTLY what it is waiting for, and all the binds.
It depends. I've worked in shops where 32k blocks have done wonders, but they are not right for everyone. Multiple block sizes are used in ALL the Oracle 10g TPC-C benchmarks because they allow far faster transaction execution with less I/O. These vendors spent hundreds of thousands of dollars to get the fastest performance, and their choice of multiple block sizes speaks for itself.
Is the creation of this large non-standard block size tablespace a bad idea?
UNISYS benchmark: 250,000 transactions per minute.
db_16k_cache_size = 15010M
db_8k_cache_size = 1024M
db_cache_size = 8096M
db_keep_cache_size = 78000M
HP benchmark: 1,000,000 transactions per minute.
db_cache_size = 4000M
db_recycle_cache_size = 500M
db_8k_cache_size = 200M
db_16k_cache_size = 4056M
db_2k_cache_size = 35430M
I have notes here:
Hope this helps. . .
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
"The current block size of this tablespace is 8K. I am thinking of creating a tablespace with a non-standard block size of 16k or 32k so that more rows will be fetched per single block read, and moving all segments into this tablespace."

You are having the "single-block read" as your top event - why would you want even more rows fetched this way? Maybe you interpreted the event in a different way?
A larger block could make things worse. The worst case is when the "db file sequential read" fetches a block only to access a single key value in that block. The rest of the block, being of no interest, is effectively "wastage". The bigger the block, the bigger the wastage.
It would be better to track down the SQL statement(s) causing the waits and see what tuning you can do. The event parameters p1 and p2 (file# and block#) will tell you exactly which index is being accessed.
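A sketch of how to map those wait parameters back to a segment: for "db file sequential read", p1 is the file number and p2 the block number, and the classic (if slow) lookup is against DBA_EXTENTS. The bind names here are placeholders for the values you read from the wait event.

```sql
-- :file_id and :block_id come from the wait event's p1 and p2.
-- This scan of DBA_EXTENTS can be slow on databases with many extents.
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :file_id
   AND :block_id BETWEEN block_id AND block_id + blocks - 1;
```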
Thank you very much again for your detailed answer.
Thank you. I have your great book "Oracle Tuning: The Definitive Reference" right in front of me. Just going through 'Oracle Direct I/O'.
Under 'Enabling Direct I/O with Kernel Parameters' you have missed HP (all the others, AIX, Solaris, Windows, and Linux, are mentioned). Not that I like HP; my favourite OS has always been Solaris.
Going to buy your other book 'Easy Oracle PL/SQL Programming' this week.
Thank you very much. Keep up the good work.
It is a typical Silver Bullet answer, as so often advocated by Don Burleson.
(In this thread he advocated it again).
Please be aware Oracle reads blocks, not records.
So if you have a db sequential read while reading a leaf block of a unique index, and you have bumped up your block size to 32k, you read 32k to retrieve only a few bytes. You might never need the rest.
Next step is of course buying Solid State Disks, and loads of extra RAM to circumvent this.
Senior Oracle DBA
Wilhelm, in a way, yes, that is what you want to do.
Let's look at it like this.
Suppose we are joining 3 tables.
Table 1 has 1 million rows
Table 2 has 2000 rows
Table 3 has 1000 rows.
Lets suppose you join table 1 to table 2 and table 2 to table 3.
All columns that are joining these tables have indexes on them. (4 indexes)
Now let's assume you also have a condition in your where clause limiting Table 3 so that it only returns 20 rows. Now, if you use binds, this is what will happen.
Oracle will use Table 3 as your driving table. It will then use that result set to go through Table 2 (again depending on the amount of rows returned) and let's say it will get 10 rows (again using the index). Then, if that step returns 100,000 rows, it will probably do a full table scan against Table 1, and voila, that is your result set.
Now let's assume you pass another query in, but this time Table 3 will return 300 rows. You should have used a full table scan, but you are using indexes (because of the cached execution plan), so now you have more work than you need. Sequential reads in this case are more expensive, but Oracle doesn't know that (because of the cached execution plan). LIOs will be high. Physical I/Os will also be high, depending on the clustering factor, unless the blocks are already in the buffer cache. This can also blow out your buffer cache unless you are doing things in parallel. You get the idea.
So, one: if you are using binds, see if you have a peeking problem. The other thing you can do is always make sure you use where clauses where you can, to reduce the amount of rows (blocks) Oracle has to go through.

Also be wary of block sizes. If you go with a 32k block size, Oracle has to read more data in to get data. This can be a pro and a con. If you need one row but you are using 32k blocks, then you have to read in one HUGE block (still 1 I/O if your hardware/OS support it), but it also means more memory, etc. However, if you have to read in 1000 rows, and suppose 20 rows on average fit into an 8k block, then you would have to read in 50 blocks. If 80 rows fit in a 32k block, then you only have to read in about 13 blocks (if the rows are all together). So you can see where this would help you in a DW environment but probably kill you in an OLTP one.

The good thing about Oracle is you can have tablespaces with different block sizes depending on how your tables are used. For bulk operations, the big block sizes can help you. For the OLTP type of work (a bunch of small transactions happening simultaneously), small blocks do better, especially since smaller amounts of data get locked and fetched into memory, etc.
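The block arithmetic above can be checked on DUAL; the rows-per-block figures (20 per 8k block, 80 per 32k block) are the illustrative numbers from the example, not measurements.

```sql
-- 1000 rows at ~20 rows per 8k block vs ~80 rows per 32k block:
SELECT CEIL(1000 / 20) AS blocks_8k,    -- 50 blocks
       CEIL(1000 / 80) AS blocks_32k    -- 13 blocks
  FROM dual;
```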
Large block sizes should be used only in restricted environments. Don't even think about it in a regular OLTP system. Oracle's standard 8K block size is almost an industry standard, and there is no reason to change it.
Small block sizes have read overhead and large block sizes have write overhead (buffer contention), so 8K seems to be just the equilibrium.
When db file sequential read wait time is involved, you should look at the physical I/O distribution. How are your datafiles distributed? If your problematic datafiles are located on the same device and you have a large amount of I/O requests on that device, your db file sequential read wait time is increased. That is natural and inevitable.
If the problem is physical I/O contention, there is no way for Oracle to solve it. Large block size, MBRC, datafile-level distribution? All useless.
The only way is to distribute your datafiles to independent devices.
(If you're on a SAN, you may already be doing this.)
One of my customers once had a serious db file sequential read wait problem. What did they do? They scattered the table (using partitioning) across 16 independent devices, and the problem was gone.
To summarize, you should consider two scenarios.
1) Tune Oracle - Tune SQL, block size(?), ...
2) Tune physical devices - I/O distribution
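For the second scenario, a quick way to see the per-datafile read distribution is the V$FILESTAT and V$DATAFILE views; heavily skewed physical reads (phyrds) against files on one device point at the kind of contention described above.

```sql
-- Physical read/write counts and read time per datafile:
SELECT d.name, f.phyrds, f.phywrts, f.readtim
  FROM v$filestat f
  JOIN v$datafile d ON d.file# = f.file#
 ORDER BY f.phyrds DESC;
```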
First, you need a license for the Partitioning option.
Once you clarify that, then yes, it CAN help you. When Oracle favors a full table scan over an index, it has to scan every block the table resides in. If you partition, and Oracle can eliminate partitions (called pruning, done by using your predicate or WHERE clause), then it will only full scan the surviving partition. So if your table fit in 1000 blocks but the partition you needed to scan was only 20 blocks, and Oracle was able to prune down to it before scanning, then it only scans 20 blocks. There is a small overhead with partitioning, but for what you get by reducing the amount of I/O and work significantly, it more than pays off.
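A minimal range-partitioning sketch of the pruning idea; the table, column, and partition names are made up for illustration.

```sql
-- Hypothetical sales history table, partitioned by year:
CREATE TABLE sales_hist (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
  PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01')
);

-- The predicate lets the optimizer prune to partition p2007 only,
-- so a "full scan" here touches just that partition's blocks:
SELECT SUM(amount)
  FROM sales_hist
 WHERE sale_date >= DATE '2007-01-01';
```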