
Db file sequential read

Wilhelm Newbie
Due to high disk I/O we took an AWR report.

The first of the Top 5 Events was db file sequential read, so I looked up 'db file sequential read' in the Oracle documentation. It says:

'A sequential read is a single-block read.' Single-block I/Os are usually the result of using indexes. Rarely, full table scan calls could get truncated to a single block call due to extent boundaries, or buffers already present in the buffer cache. These waits would also show up as 'db file sequential read'.


From the Tablespace I/O stats and File I/O stats sections of the AWR report, I found that most of the physical reads are from the datafiles of one particular tablespace. So I assumed that the single-block reads (which caused the 'db file sequential read' waits) are happening in this tablespace.

The current block size of this tablespace is 8K. I am thinking of creating a tablespace with a non-standard block size of 16K or 32K, so that more rows will be fetched per single-block read, and moving all segments into this tablespace.

But I read at the link below that this might be a bad idea, even though the author doesn't give a particular reason:

"This feature should not be used as a performance feature and tempt the dba's to create larger block size tablespaces so that it reads more number of rows in one I/O"

http://viveklsharma.blogspot.com/2007/05/effect-of-non-standard-block-sizes-on.html

Is the creation of this large non-standard block size tablespace a bad idea?
  • 1. Re: Db file sequential read
    433180 Newbie
    "It depends" is the answer, and here is why.

    I had a problem with high sequential reads. My explain plan showed index access. After further investigation, the bind variables I thought were good were causing a bad execution plan to be stored. When Oracle sees the statement for the first time, it peeks at the bind variable and says, "based on the stats and this value, I should use an index here," and it stores that execution plan from then on. If later values would call for a full table scan, Oracle will not know that, since it reuses the same plan. My solution was to stop using binds and set cursor_sharing to SIMILAR. This is one of the known problems with bind variable peeking; I'd Google it for more information.
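    Here is a minimal sketch of how peeking bites you (all object names are made up for illustration):

    -- Hypothetical skewed table: 1,000,000 rows, almost all 'CLOSED'
    create table orders_demo (id number, status varchar2(10));
    insert into orders_demo
      select level,
             case when mod(level, 100000) = 0 then 'OPEN' else 'CLOSED' end
      from dual connect by level <= 1000000;
    create index orders_demo_ix on orders_demo (status);
    exec dbms_stats.gather_table_stats(user, 'ORDERS_DEMO', method_opt => 'for all columns size 254');

    -- First execution peeks at the bind: 'OPEN' is rare, so an
    -- index-driven plan is built and cached
    variable st varchar2(10)
    exec :st := 'OPEN'
    select count(*) from orders_demo where status = :st;

    -- Later executions reuse that cached plan even for 'CLOSED',
    -- where a full table scan would have been far cheaper
    exec :st := 'CLOSED'
    select count(*) from orders_demo where status = :st;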

    However, this may not be your issue; I don't know. Update your statistics (exec dbms_stats.gather_database_stats;) and then rerun to see if it runs faster; you could have stale stats. That is usually what I do first to see if it resolves something. I then compare explain plans before and after on the high-consuming SQL statements to see what changed. Maybe there is a bad histogram. Maybe I need to delete the stats on a table to trigger dynamic sampling, which makes Oracle take a small increment of time at parse (depending on the dynamic sampling level) to estimate what the stats on the table should be.
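    For example (the table name is hypothetical), to refresh stats on one suspect table, or to delete and lock its stats so the optimizer falls back to dynamic sampling:

    -- Refresh stats (including index stats and histograms) on one table
    exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade => true, method_opt => 'for all columns size auto');

    -- Or remove the stats and lock them, so Oracle dynamically samples
    -- the table at parse time (see optimizer_dynamic_sampling)
    exec dbms_stats.delete_table_stats(user, 'BIG_TABLE');
    exec dbms_stats.lock_table_stats(user, 'BIG_TABLE');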

    If you can put your AWR report in text mode and paste some of it here, that might identify the SQL statements. If you can post an explain plan of the statement and the stats on the underlying objects, we might be able to help. Maybe do a select count(*) on each of the underlying tables to see what might be going on.

    One way to really identify the issue is to take a level 12 SQL trace (waits and binds) and then use a tool like Oracle's Trace Analyzer (from Metalink; it takes a while to run), tkprof (not as detailed), or the Hotsos Profiler (the best analyzer out there; you can get a free trial from them, and it runs much faster than Trace Analyzer) to look at your binds and waits and find out what is happening and where your time is going. Just a thought.
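    If it helps, this is roughly how you would take that trace with the standard 10046 event (the trace file name below is made up):

    -- In the session to be traced: level 12 = binds (4) + waits (8)
    alter session set timed_statistics = true;
    alter session set events '10046 trace name context forever, level 12';

    -- ... run the slow statement(s) ...

    alter session set events '10046 trace name context off';

    -- Then format the raw trace file from user_dump_dest, e.g.:
    --   tkprof orcl_ora_12345.trc out.txt sys=no sort=exeela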

    The problem with AWR is that it only gives you summaries of your system-wide stats. If you can get a SQL trace of the process that is running slow, you can find out EXACTLY what it is waiting for, along with all the binds.
  • 2. Re: Db file sequential read
    C.Boutet Newbie
    Thank you very much Troach. Good to know about Hotsos Profiler. I was looking for something like this.
  • 3. Re: Db file sequential read
    108476 Journeyer
    Hi,
    Is the creation of this large non-standard block size tablespace a bad idea?
    It depends. I've worked in shops where 32K blocks have done wonders, but they are not right for everyone. Multiple blocksizes are used in ALL the Oracle10g TPC-C benchmarks because they allow far faster transaction execution with less I/O. These vendors spent hundreds of thousands of dollars to get the fastest performance, and their choice of multiple blocksizes speaks for itself.

    UNISYS benchmark: 250,000 transactions per minute.

    http://www.tpc.org/results/FDR/TPCC/unisys_es7000-420_291K_040209_fdr.pdf

    db_16k_cache_size = 15010M
    db_8k_cache_size = 1024M
    db_cache_size = 8096M
    db_keep_cache_size = 78000M

    HP benchmark: 1,000,000 transactions per minute.

    http://www.tpc.org/results/FDR/TPCC/HP%20Integrity%20rx5670%20Cluster%2064P_FDR.pdf

    db_cache_size = 4000M
    db_recycle_cache_size = 500M
    db_8k_cache_size = 200M
    db_16k_cache_size = 4056M
    db_2k_cache_size = 35430M

    I have notes here:

    http://www.dba-oracle.com/oracle_tips_multiple_blocksizes.htm

    Hope this helps. . .

    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
  • 4. Re: Db file sequential read
    ben23 Newbie
    The current block size of this tablespace is 8K. I am
    thinking of creating a tablespace with a non-standard
    block size of 16K or 32K, so that more rows will be
    fetched per single-block read, and moving all segments
    into this tablespace.
    You have the "single-block read" as your top event - why would you want even more rows fetched this way? Maybe you have interpreted the event in a different way?

    A larger block could make things worse. The worst case is when the 'db file sequential read' fetches a block only to access a single key value in that block. The rest of the block, being of no interest, is effectively "wastage". The bigger the block, the bigger the wastage.

    It would be better to track down the SQL statement(s) causing the waits and see what tuning you can do. The event parameters p1 and p2 identify the exact file and block, which you can map back to the index or table being accessed.
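    For example, while a session is waiting you can catch p1/p2 and map them back to the segment (the dba_extents lookup can be slow on a big database):

    -- For 'db file sequential read': p1 = file#, p2 = block#
    select sid, p1 file_id, p2 block_id
    from   v$session_wait
    where  event = 'db file sequential read';

    -- Map that file/block to its owning segment
    select owner, segment_name, segment_type
    from   dba_extents
    where  file_id = &file_id
    and    &block_id between block_id and block_id + blocks - 1;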
  • 5. Re: Db file sequential read
    Wilhelm Newbie
    Troach,
    Thank you very much again for your detailed answer.


    Don,
    Thank you. I have your great book "Oracle Tuning: The Definitive Reference" right in front of me. Just going through 'Oracle Direct I/O'.

    Under 'Enabling Direct I/O with Kernel Parameters' you have missed HP-UX (all the others, AIX, Solaris, Windows, and Linux, are mentioned). Not that I like HP; my favourite OS has always been Solaris.

    Going to buy your other book 'Easy Oracle PL/SQL programming' this week.

    Thank you very much. Keep up the good work.
  • 6. Re: Db file sequential read
    153119 Pro
    You have been proven wrong on this one quite recently (as you have been proven wrong several times by Jonathan Lewis and Tom Kyte), so why do you continue to post these 'Silver Bullets'?

    Other than that:
    Would you PLEASE STOP misusing OTN as a MARKETING tool?

    --
    Sybrand Bakker
    Senior Oracle DBA
  • 7. Re: Db file sequential read
    153119 Pro
    It is a typical Silver Bullet answer, as so often advocated by Don Burleson.
    (In this thread he advocated it again).
    Please be aware that Oracle reads blocks, not records.
    So if you have a db file sequential read while reading a leaf block of a unique index, and you have bumped up your blocksize to 32K, you read 32K to retrieve only a few bytes. You might never need the rest.
    The next step, of course, is buying solid state disks and loads of extra RAM to work around this.

    --
    Sybrand Bakker
    Senior Oracle DBA
  • 8. Re: Db file sequential read
    Wilhelm Newbie
    So you want the query to be rewritten in such a way that it accesses fewer blocks?
  • 9. Re: Db file sequential read
    Wilhelm Newbie
    Thank you very much User 599375.
  • 10. Re: Db file sequential read
    433180 Newbie
    Wilhelm, in a way, yes, that is what you want to do.

    Let's look at it like this.

    Suppose we are joining 3 tables.

    Table 1 has 1 million rows

    Table 2 has 2000 rows

    Table 3 has 1000 rows.

    Let's suppose you join Table 1 to Table 2 and Table 2 to Table 3.

    All the columns joining these tables have indexes on them (4 indexes).

    Now let's assume you also have a where clause that limits Table 3 so it returns only 20 rows. If you use binds, this is what will happen.

    Oracle will use Table 3 as your driving table. It will then use that result set to go through Table 2 (again depending on the number of rows returned), and let's say it gets 10 rows (again using the index). Then, if joining that to Table 1 would return 100,000 rows, Oracle will probably do a full table scan against Table 1, and voila, that is your result set.

    Now let's assume you pass in another query, but this time Table 3 returns 300 rows. A full table scan would have been the right choice, but you are using indexes (because of the cached execution plan), so now you do more work than you need to. Sequential reads in this case are more expensive, but Oracle doesn't know that (because of the cached execution plan). LIOs will be high. Physical I/Os will also be high, depending on the clustering factor, unless the blocks are already in the buffer cache. This can also blow out your buffer cache unless you are doing things in parallel. You get the idea.
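    On 10g you can see the plan that is actually cached and being reused, rather than what EXPLAIN PLAN predicts (the sql_id here is a placeholder):

    -- Find the statement and its cached cursor
    select sql_id, child_number, executions, buffer_gets
    from   v$sql
    where  sql_text like '%your statement text%';

    -- Show the execution plan Oracle is really reusing
    select * from table(dbms_xplan.display_cursor('&sql_id', null));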

    So, first: if you are using binds, see if you have a peeking problem. The other thing you can do is always use where clauses where you can, to reduce the number of rows (blocks) Oracle has to go through.

    Also be wary of block sizes. If you go with a 32K block size, Oracle has to read more data in to get at your data. This can be a pro and a con. If you need one row but you are using 32K blocks, then you have to read in one HUGE block (still 1 I/O if your hardware/OS supports it), but it also means more memory used, etc. However, if you have to read in 1,000 rows, and suppose 20 rows on average fit into an 8K block, then you would have to read in 50 blocks. If 80 rows fit in a 32K block, then you only have to read in about 13 blocks (if the rows are stored together). So you can see where this would help you in a DW environment but would probably kill you in an OLTP system.

    The good thing about Oracle is you can have tablespaces with different block sizes depending on how your tables are used (see the sketch below). For bulk operations, big block sizes can help you. For OLTP work (a bunch of small transactions happening simultaneously), small blocks do better, especially since smaller amounts of data get locked and fetched into memory, etc.
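    If you do decide to test a bigger block size for bulk work, remember that a buffer cache for that size has to exist first. A minimal sketch (the file name and sizes are made up, and 32K blocks are not supported on every platform):

    -- A buffer cache for the non-standard size must be configured first
    alter system set db_32k_cache_size = 256M;

    -- Then a tablespace can be created with that block size
    create tablespace dw_32k
      datafile '/u01/oradata/orcl/dw_32k_01.dbf' size 1G
      blocksize 32k;

    -- And a bulk-access segment moved into it
    alter table big_fact_table move tablespace dw_32k;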
  • 11. Re: Db file sequential read
    433180 Newbie
    Sybrand, play nice. Don said "it depends" and he is right. He also gave good links. I'm not sure if his was the right answer or mine, but you don't need to be so negative. All feedback is good feedback. Even yours and mine : )
  • 12. Re: Db file sequential read
    601585 Oracle ACE
    A large block size should be used only in restricted environments. Don't even think about it for a regular OLTP system. Oracle's standard 8K block size is almost an industry standard, and there is usually no reason to change it.
    Small block sizes have read overhead and large block sizes have write overhead (buffer contention), so 8K seems to be the equilibrium.

    When db file sequential read wait time is the issue, you should also look at the physical I/O distribution. How are your datafiles distributed? If your problematic datafiles are located on the same device and you have a large amount of I/O requests on that device, your db file sequential read wait time goes up. That's natural and inevitable.
    If the problem is physical I/O contention, there is no way for Oracle to solve it. Large block size, MBRC, datafile-level distribution? All useless.
    The only way is to distribute your datafiles across independent devices.
    (If you're on a SAN, you may already be doing this.)
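    You can see how the reads are spread across your datafiles with a quick query like this:

    -- Physical read/write counts and times per datafile (since startup)
    select d.name, f.phyrds, f.phywrts, f.readtim, f.writetim
    from   v$filestat f, v$datafile d
    where  f.file# = d.file#
    order  by f.phyrds desc;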

    One of my customers once had a serious db file sequential read wait problem. What did they do? They scattered the table (using partitioning) across 16 independent devices, and the problem was gone.
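    A sketch of that kind of layout, assuming 16 tablespaces (ts01 .. ts16) each created on its own device (all names are illustrative):

    create table orders_scattered (
      id       number,
      cust_id  number,
      amount   number
    )
    partition by hash (id)
    partitions 16
    store in (ts01, ts02, ts03, ts04, ts05, ts06, ts07, ts08,
              ts09, ts10, ts11, ts12, ts13, ts14, ts15, ts16);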

    To summarize, you should consider two approaches:
    1) Tune Oracle - Tune SQL, block size(?), ...
    2) Tune physical devices - I/O distribution
  • 13. Re: Db file sequential read
    Wilhelm Newbie
    Troach, Syb, Don, Dion,
    I am certainly going to look into ways of rewriting the query, as Troach and Syb have said. What about partitioning, as Dion mentioned? Will partitioning work for this scenario?
  • 14. Re: Db file sequential read
    433180 Newbie
    First, you need a license for partitioning (it is a separately licensed option).

    Once you clarify that, then yes, it CAN help you. When Oracle favors a full table scan over an index, it has to scan every block the table resides in. If you partition, and the optimizer can eliminate partitions (called pruning, driven by your predicate or WHERE clause), then it will full-scan only the relevant partition. So if your table fit in 1,000 blocks but the partition you needed to scan was only 20 blocks, and the optimizer could prune down to it before scanning, then it scans only 20 blocks. There is a small overhead with partitioning, but for what you gain by significantly reducing the amount of I/O and work, it more than pays off.
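    A small sketch of pruning (names and dates are illustrative):

    create table sales_part (
      sale_date date,
      amount    number
    )
    partition by range (sale_date) (
      partition p2007_09 values less than (to_date('2007-10-01','yyyy-mm-dd')),
      partition p2007_10 values less than (to_date('2007-11-01','yyyy-mm-dd')),
      partition p_max    values less than (maxvalue)
    );

    -- The date predicate lets the optimizer prune to one partition, so a
    -- "full" scan touches only that partition's blocks
    select sum(amount)
    from   sales_part
    where  sale_date >= to_date('2007-10-01','yyyy-mm-dd')
    and    sale_date <  to_date('2007-11-01','yyyy-mm-dd');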