
Transaction execution time and block size

Laus Member Posts: 292
edited Jun 5, 2013 12:21PM in General Database Discussions
Hi,
I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
2K Oracle block database had 3 datafiles, each 7GB.
4K Oracle block database had 2 datafiles, each 10GB.
8K Oracle block database had 1 datafile of 20GB.
Now, the best transaction (transaction execution) time was on the 8K block; the 4K block had a slightly longer transaction time, but the 2K Oracle block definitely had the worst transaction time.
I identified an SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
Now here is my question. Is it possible that multiple datafiles are the reason for these low transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask: how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
Thanks to all.

Answers

  • A potential problem is that segments are scattered over multiple datafiles.
    An extent will never cross a datafile boundary.
    A read will never cross a datafile boundary.

    No AWR statistics are going to inform you about that.
    You'll need dba_extents.
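    As a minimal sketch (using the table and tablespace names from this thread, and assuming a schema owner of TPCE, which the original post doesn't state), something like this shows how the segment's extents are spread across the datafiles:

      -- Count extents and blocks of E_TRANSACTION per datafile
      SELECT e.file_id,
             f.file_name,
             COUNT(*)      AS extents,
             SUM(e.blocks) AS blocks
      FROM   dba_extents e
      JOIN   dba_data_files f ON f.file_id = e.file_id
      WHERE  e.owner        = 'TPCE'   -- assumption: the TPC-E schema owner
      AND    e.segment_name = 'E_TRANSACTION'
      GROUP  BY e.file_id, f.file_name
      ORDER  BY e.file_id;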

    -----------
    Sybrand Bakker
    Senior Oracle DBA
  • Jonathan Lewis Member Posts: 9,941 Blue Diamond

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance to creep in.
    > Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up
    Did you do anything to ensure that the physical location of the datafiles was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical).
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
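    As a sketch, one way to compare the three databases is to dump the non-default parameters and the optimizer system statistics from each one and diff the output:

      -- Parameters explicitly set (run in each database and compare):
      SELECT name, value
      FROM   v$parameter
      WHERE  isdefault = 'FALSE'
      ORDER  BY name;

      -- System statistics used by the optimizer:
      SELECT pname, pval1
      FROM   sys.aux_stats$
      WHERE  sname = 'SYSSTATS_MAIN';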

    > In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    > 2K Oracle block database had 3 datafiles, each 7GB.
    > 4K Oracle block database had 2 datafiles, each 10GB.
    > 8K Oracle block database had 1 datafile of 20GB.
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
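    For example, a minimal sketch (the file path is illustrative) that would keep the whole tablespace in one datafile whatever the block size:

      CREATE BIGFILE TABLESPACE newts
        DATAFILE '/u01/oradata/orcl/newts01.dbf' SIZE 20G
        AUTOEXTEND ON;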
    > Now, the best transaction (transaction execution) time was on the 8K block; the 4K block had a slightly longer transaction time, but the 2K Oracle block definitely had the worst transaction time.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
    > I identified an SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
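    As a rough sketch, buffer busy waits broken down by block class (cumulative since instance startup) are one way to see which class of blocks is "hot":

      SELECT w.class, w.count, w.time
      FROM   v$waitstat w
      ORDER  BY w.time DESC;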
    > Now here is my question. Is it possible that multiple datafiles are the reason for these low transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask: how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
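    As a first check, a sketch of per-file I/O timings (cumulative since startup; readtim/writetim are only populated with timed statistics enabled) would show whether one file is consistently slower than its siblings:

      SELECT d.name,
             s.phyrds, s.phywrts,
             s.readtim, s.writetim
      FROM   v$filestat s
      JOIN   v$datafile d ON d.file# = s.file#
      ORDER  BY s.readtim DESC;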

    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
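    A minimal sketch of that (the SID, serial#, and trace file name are illustrative):

      -- In SQL*Plus, trace the target session with wait events included:
      EXEC dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
      -- ... run the slow transactions, then stop tracing:
      EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

    and then summarise the trace file from the shell:

      tkprof orcl_ora_12345.trc slow_txn.txt sys=no sort=exeela,fchela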

    Regards
    Jonathan Lewis
  • JohnWatson Member Posts: 2,461 Gold Trophy
    The Oracle Uni 11g Performance Tuning course includes this statement:
    In recent years all the TPC performance tests have used an 8KB block size. Larger and smaller block sizes have not given significant performance benefits.
    The course was written by James Spiller; I don't know him, so I can't ask him where he got this information. It is not included in the 10g Performance Tuning course, so he must be the one who inserted this little nugget.
  • Jonathan Lewis Member Posts: 9,941 Blue Diamond
    JohnWatson wrote:
    > The Oracle Uni 11g Performance Tuning course includes this statement:
    > In recent years all the TPC performance tests have used an 8KB block size. Larger and smaller block sizes have not given significant performance benefits.
    > The course was written by James Spiller; I don't know him, so I can't ask him where he got this information. It is not included in the 10g Performance Tuning course, so he must be the one who inserted this little nugget.
    John,

    I'm all in favour of ordinary mortals like us using the default block size of the platform, but I thought Oracle would want to do anything it could to get even a tiny benefit - so I checked the TPC-C "full disclosure" documents from Oracle's most recent records and they used 4KB, 8KB and 16KB block sizes. 8KB seemed to be favoured for undo, with either 4KB or 8KB as the database defaults. At least one of the documents I checked was for a benchmark using 11.2.x.x

    Regards
    Jonathan Lewis
This discussion has been closed.