This discussion is archived
54 Replies. Latest reply: Aug 18, 2010 3:46 PM by 767685
  • 45. Re: Finding an optimal db_block_size
    hans forbrich Oracle ACE Director
    Hemant K Chitale wrote:
    So, "you want to use a larger block size, ensure that you have tested your *production* environment [not a limited simulation] and validated the results and are aware of the risks".
    We totally agree. (Except that I might ask for 'non-default' rather than 'larger'.)

    That is the whole basis of my entries earlier in this discussion: unless one has a comprehensive test suite that will exercise nearly every variant - which I believe is very difficult, if not impossible, to achieve - the only meaningful answers can occur in the context of a specific application.

    Since the mid-80s I've heard "it depends" so often in context of Oracle, I've come to believe it. :-)
  • 46. Re: Finding an optimal db_block_size
    767685 Newbie
    Hans Forbrich wrote:
    To do the tests properly and truly come to conclusions, I see the need for 24 tables (very small, small, medium, large, very large and extremely large row count) x (many rows per block, medium number of rows per block, few rows per block, forced overflow) for each of the table types for each of the block sizes.
    Very small = 1 block
    Small = less than 1 multiblock read count
    Medium = about 10 multi-block read count
    Large = ~ 50 multi-block read count
    Very large = ~ 100-250 multi-block read count
    Extremely large = 3-5 Luns
    Few rows per block around 750-1K bytes/row ... 3-5 rows / 4K block
    Medium rows would be typical customer address tables - around 150 bytes per row ... 30 rows / 4K block
    Many rows/block would approximate code lookup tables - key + description or about 50 bytes per row ... 80-100 rows for a 4K block
    Forced overflow might be 6K/row and other exercises might just add a LOB and test against the LOB
    Hi Hans,

    A few small doubts:

    Will these four combinations stay the same for the 8k and 16k block sizes as well, given that you said FEW ROWS ... 3-5 rows/block for a 4K block?

    Secondly, do I have to change the multiblock read count every time? For example, you said:
    Small = less than 1 multiblock read count
    If I have to set the MBRC, what number corresponds to "less than one multiblock read count"?

    Thirdly, I didn't understand this, please:
    Extremely large = 3-5 LUNs (how can I create a table with the LUN method?)
    Thank you.

    Regards..

    Edited by: John-M on May 1, 2010 4:46 AM
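    The rows-per-block figures in Hans's categories can be sanity-checked with a little arithmetic; a minimal sketch (assuming roughly the row sizes quoted above, and ignoring block overhead and PCTFREE, so these are upper bounds):

```python
# Rough rows-per-block arithmetic for the row sizes quoted by Hans.
# Block overhead and PCTFREE are ignored, so these are upper bounds.

def rows_per_block(block_size_bytes, row_size_bytes):
    return block_size_bytes // row_size_bytes

for block in (4096, 8192, 16384):
    few = rows_per_block(block, 1000)    # "few rows":  ~750-1K bytes/row
    medium = rows_per_block(block, 150)  # "medium":    ~150 bytes/row
    many = rows_per_block(block, 50)     # "many rows": ~50 bytes/row
    print(f"{block:>5}-byte block: few={few}, medium={medium}, many={many}")
```

    So the categories are defined by row size rather than by a fixed row count: the same ~1K rows that give 3-5 rows in a 4K block pack roughly twice as many into an 8K block and four times as many into a 16K block.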
  • 47. Re: Finding an optimal db_block_size
    damorgan Oracle ACE Director
    Consider what the real world looks like for an Oracle production database.

    The application is deployed on specific hardware and with specific initialization values. Then, as application usage patterns change, someone adds a new interface or a new report, batch data must be loaded from some upstream system, and so on: things change. We also find in the real world that as more data is added, or data changes over time, so do cardinalities and how the optimizer sees what it needs to do. Were this not true we would have no need for tuning experts like Jonathan Lewis, Cary Millsap, Tanel Poder, Alex Gorbachev, etc. Everything would be set and forget.

    So what needs to be tested are the realities that any deployed application will face. Will the DBA ever change the multiblock read count? How about SGA size? Sort area size? Optimizer cost adjustment? Resize undo and temp tablespaces? Change their retention times? Add more RAM to the machine?

    You cannot test every variation of every possible configuration change. But you can look at a 3-5 year history of what has been done, or should have been done, and make sure your testing reflects reality.

    At issue for me, and Hans, and so many others is not whether we can test block size changes. Rather it is whether we can test block size changes sufficiently for our tests to have any real-world value.
  • 48. Re: Finding an optimal db_block_size
    767685 Newbie
    Hi,

    I have created 3 tables of equal size (2048MB) in my 11g (11.1.0.6.0 - Production) database, with db_block_size=8192. The OS is XP, RAM is 1GB and the processor is 1.60 GHz. Two of the tables use non-default block sizes of 4k and 16k.
    SQL> create table bt8k
      2  (x char(600));
    
    Table created.
    
    SQL> select table_name,tablespace_name from user_tables
      2  where table_name = 'BT8K';
    
    TABLE_NAME                     TABLESPACE_NAME
    ------------------------------ ------------------------------
    BT8K                           USERS
    
    SQL> BEGIN
      2        FOR v_LoopCounter IN 1..2790000 LOOP
      3        INSERT INTO BT8K (x)
      4           VALUES (v_LoopCounter);
      5          END LOOP;
      6    END;
      7  /
    
    PL/SQL procedure successfully completed.
    
    SQL>  select sum(bytes)/1024/1024 "SIZE IN MB" from dba_segments
      2   where segment_name = 'BT8K';
    
    SIZE IN MB
    ----------
          2048
    
    SQL> analyze table bt8k compute statistics;
    
    Table analyzed.
    
    SQL>    select NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
      2     from user_tables
      3     where table_name = 'BT8K';
    
      NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
    ---------- ---------- ------------ ---------- ---------- -----------
       2790000     261404          740       1385          0         606
    
    
    SQL> set linesize 1024
    SQL> set arraysize 500
    SQL> set pagesize 0
    SQL> SET TIMING ON
    SQL> ALTER SESSION SET TIMED_STATISTICS = TRUE;
    
    Session altered.
    
    Elapsed: 00:00:00.04
    SQL> ALTER SESSION SET SQL_TRACE = TRUE;
    
    Session altered.
    
    Elapsed: 00:00:00.10
    SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'BT8k' ;
    
    Session altered.
    
    Elapsed: 00:00:00.08
    SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    
    Session altered.
    
    Elapsed: 00:00:00.16
    
    SQL> select * from bt8k;
    - - - - - - - 
    
    2790000 rows selected.
    Elapsed: 00:14:02.99
    
    SQL>Alter session set SQL_Trace = false ;
    Session altered.
    SQL>ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
    Session altered.
    SQL>ALTER SYSTEM FLUSH BUFFER_CACHE;
    System altered.
    Created the same single-column tables in 4k and 16k block sizes under different users, as below:
    SQL> create user testbt4k
      2  identified by test
      3  default tablespace test4kts;
    
    User created.
    
    SQL> grant dba to testbt4k;
    
    Grant succeeded.
    
    SQL> connect testbt4k
    Enter password:
    Connected.
    SQL> create table bt4k
      2  ( x char(600));
    
    Table created.
    
    SQL>
    SQL> select table_name,tablespace_name from user_tables
      2  where table_name = 'BT4K';
    
    TABLE_NAME                     TABLESPACE_NAME
    ------------------------------ ------------------------------
    BT4K                           TEST4KTS
    
    Inserted 2,540,000 rows into the BT4K table.
    
    SQL> analyze table bt4k compute statistics;
    
    Table analyzed.
    
    SQL>  select NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
      2     from user_tables
      3     where table_name = 'BT4K';
    
    
      NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
    ---------- ---------- ------------ ---------- ---------- -----------
       2540000     522808         1480        935          0         606
    
    Elapsed: 00:00:00.05
    
    SQL> select sum(bytes)/1024/1024 "SIZE IN MB" from dba_segments
      2  where segment_name = 'BT4K';
    
    SIZE IN MB
    ----------
           2048
    
    Elapsed: 00:00:00.50
    
    SQL> select * from bt4k;
    
    - - - - - - - - -
    2540000 rows selected.
    
    Elapsed: 00:12:42.66
    
    Enabled tracing before the query and disabled it afterwards, as above.
    Created user testbt16k in the test16kts tablespace, as below:
    SQL> connect testbt16k
    Enter password:
    Connected.
    SQL> create table bt16k
      2  (x char(600));
    
    Table created.
    
    Elapsed: 00:00:00.30
    SQL> select table_name,tablespace_name from user_tables
      2  where table_name = 'BT16K';
    
    TABLE_NAME                     TABLESPACE_NAME
    ------------------------------ ------------------------------
    BT16K                          TEST16KTS
    
    Elapsed: 00:00:00.02
    SQL> BEGIN
      2        FOR v_LoopCounter IN 1..3042000 LOOP
      3        INSERT INTO BT16K (x)
      4           VALUES (v_LoopCounter);
      5          END LOOP;
      6    END;
      7  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:18:50.10
    
    SQL> analyze table bt16k compute statistics;
    
    Table analyzed.
    
    Elapsed: 00:16:41.12
    SQL> select sum(bytes)/1024/1024 "SIZE IN MB" from dba_segments
      2  where segment_name = 'BT16K';
    
    SIZE IN MB
    ----------
          2048
    
    Elapsed: 00:00:00.32
    SQL>
    SQL>    select NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
      2     from user_tables
      3     where table_name = 'BT16K';
    
      NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
    ---------- ---------- ------------ ---------- ---------- -----------
       3042000     130700          372       1683          0         606
    
    Elapsed: 00:00:00.31
    
    SQL> select * from bt16k;
    
    - - - - - - - - - 
    3042000 rows selected.
    
    Elapsed: 00:15:50.24
    
    Enabled tracing before the query and disabled it afterwards, as above.
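    As a cross-check on the figures above (taking the BLOCKS and EMPTY_BLOCKS values from user_tables at face value), used plus empty blocks times the block size should come out at the 2048MB that dba_segments reports for each table:

```python
# Cross-check: (BLOCKS + EMPTY_BLOCKS) * block size vs the 2048MB
# reported by dba_segments. Figures are the user_tables stats above.
tables = {
    "BT4K":  (522808, 1480, 4096),
    "BT8K":  (261404, 740, 8192),
    "BT16K": (130700, 372, 16384),
}
for name, (blocks, empty, block_size) in tables.items():
    mb = (blocks + empty) * block_size / 1024 / 1024
    print(name, mb)  # each works out to exactly 2048.0 MB
```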
    Generated TKPROF output from the above trace files. Below are the TKPROF results, but only the relevant part is pasted. If needed I will also post the whole TKPROF text file.

    Bt4k table TKPROF pasted result
    ********************************************************************************
    
    SQL ID : csq036sxxdt9q
    select * 
    from
     bt4k
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.06          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     5081     16.38      29.25     508252     513338          0     2540000
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total     5083     16.38      29.32     508252     513338          0     2540000
    
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 120  
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    2540000  TABLE ACCESS FULL BT4K (cr=513338 pr=508252 pw=508252 time=56809 us cost=114520 size=1524000000 card=2540000)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                    5081        0.00          0.03
      db file sequential read                         4        0.01          0.06
      direct path read                             4016        0.15          5.70
      SQL*Net message from client                  5081      281.70       1013.26
      SQL*Net more data to client                187959        0.07         10.62
      resmgr:cpu quantum                             12        0.16          0.42
      db file scattered read                          1        0.01          0.01
    ********************************************************************************
    Bt8k table TKPROF pasted result
    ********************************************************************************
    
    SQL ID : 0k3y6pmk2jvxu
    select * 
    from
     bt8k
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.01       0.16          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     5581     20.15      35.60     253757     258838          0     2790000
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total     5583     20.16      35.77     253757     258838          0     2790000
    
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 121  (TESTBT8K)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    2790000  TABLE ACCESS FULL BT8K (cr=258838 pr=253757 pw=253757 time=73901 us cost=70902 size=1674000000 card=2790000)
    
    
    Rows     Execution Plan
    -------  ---------------------------------------------------
          0  SELECT STATEMENT   MODE: ALL_ROWS
    2790000   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'BT8K' (TABLE)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                    5581        0.00          0.04
      reliable message                                1        0.00          0.00
      enq: KO - fast object checkpoint                1        0.00          0.00
      direct path read                             3998        0.16          5.28
      SQL*Net message from client                  5581      178.27        983.45
      SQL*Net more data to client                206459        0.10         13.72
      resmgr:cpu quantum                             17        0.15          0.52
      db file sequential read                         1        0.02          0.02
      db file scattered read                          1        0.01          0.01
    ********************************************************************************
    Bt16k table TKPROF pasted result
    ********************************************************************************
    
    SQL ID : awhwjxkznf48x
    select * 
    from
     bt16k
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     6085     16.74      29.18     126869     132961          0     3042000
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total     6087     16.74      29.18     126869     132961          0     3042000
    
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 122  (TESTBT16K)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    3042000  TABLE ACCESS FULL BT16K (cr=132961 pr=126869 pw=126869 time=726404 us cost=49092 size=1825200000 card=3042000)
    
    
    Rows     Execution Plan
    -------  ---------------------------------------------------
          0  SELECT STATEMENT   MODE: ALL_ROWS
    3042000   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'BT16K' (TABLE)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                    6085        0.00          0.03
      reliable message                                1        0.00          0.00
      enq: KO - fast object checkpoint                1        0.01          0.01
      direct path read                             3938        0.14          5.25
      SQL*Net message from client                  6085      761.31       1681.05
      SQL*Net more data to client                225107        0.17         10.92
      resmgr:cpu quantum                              4        0.06          0.10
      db file sequential read                         1        0.02          0.02
      db file scattered read                          1        0.03          0.03
    ********************************************************************************
    One main thing regarding my concern is that the elapsed time for Bt4k in SQL*Plus is 00:12:42.66 and it's 29.32 in TKPROF.

    Elapsed time for Bt8k in SQL*Plus is 00:14:02.99 and it's 35.77 in TKPROF.

    Elapsed time for Bt16k in SQL*Plus is 00:15:50.24 and it's 29.18 in TKPROF.

    Why is there so much elapsed-time difference between SQL*Plus and TKPROF?
    I ran the same test a second time; there are small differences, in BT4k for example, but they are also there for BT8k and BT16k respectively.

    After every test I shut down the database, restarted it, then shut down my computer and restarted it again before the next test.

    I have also run some different tests; this is one of them for the experts.

    Thanks a lot.

    Best Regards,
  • 49. Re: Finding an optimal db_block_size
    Jonathan Lewis Oracle ACE Director
    John-M wrote:
    One main thing regarding my concern is that the elapsed time for Bt4k in SQL*Plus is 00:12:42.66 and it's 29.32 in TKPROF.

    Elapsed time for Bt8k in SQL*Plus is 00:14:02.99 and it's 35.77 in TKPROF.

    Elapsed time for Bt16k in SQL*Plus is 00:15:50.24 and it's 29.18 in TKPROF.

    Why is there so much elapsed-time difference between SQL*Plus and TKPROF?
    The time reported for the query by tkprof is the time taken between a call coming into the database and a reply being sent to the client. If you check the wait times you will see that "SQL*Net message from client" is the biggest component; this time does not get summed into the statement execution time under the "elapsed" column. If you do the arithmetic then, with some rounding errors, you will probably find that the total elapsed time for the query is close to the SQL*Net time plus the query execution time.

    The observation that "larger blocks take more time" is probably closely related to the fact that you've packed more rows into the larger blocks, so it takes more time to pass the rows to the client. You'll note, by the way, that you are doing serial direct path reads of about 0.5MB for the tablescans. This cuts out quite a lot of the CPU overhead of block handling that you would get for single block reads.
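    The per-row cost shows up in the round-trip counts too; a quick check (a sketch, assuming SQL*Plus's usual pattern of one single-row first fetch followed by arraysize-row fetches, with the arraysize of 500 set earlier in the thread):

```python
# With "set arraysize 500", SQL*Plus issues one initial single-row fetch
# and then fetches 500 rows per call, so the fetch count (and the
# "SQL*Net message" count) should be 1 + ceil((rows - 1) / 500).
import math

for table, rows, fetches_in_trace in (("BT4K", 2540000, 5081),
                                      ("BT8K", 2790000, 5581),
                                      ("BT16K", 3042000, 6085)):
    predicted = 1 + math.ceil((rows - 1) / 500)
    print(table, predicted, fetches_in_trace)  # the two columns match
```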


    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    
    +"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
    Isaac Asimov
  • 50. Re: Finding an optimal db_block_size
    767685 Newbie
    Jonathan Lewis wrote:
    The observation that "larger blocks take more time" is probably closely related to the fact that you've packed more rows into the larger blocks, so it takes more time to pass the rows to the client. You'll note, by the way, that you are doing serial direct path reads of about 0.5MB for the tablescans. This cuts out quite a lot of the CPU overhead of block handling that you would get for single block reads.
    Yes, there are more rows in the 16k block size table, but the table sizes are the same, i.e. 2048MB, as shown by sum(bytes) from dba_segments.

    Is it OK to consider the SQL*Plus elapsed time after the query has finished running, and look at the other things in TKPROF? I think you have considered the SQL*Plus elapsed time, and that's why you are saying that "larger blocks take more time".

    Also, can you please explain a little how you calculated that the direct path reads are about 0.5MB?

    I have also completed other test cases, e.g. scale factor 1 of the TPC-H schema: I created a 1GB database and ran the 22 DSS queries against the 4k, 8k and 16k block sizes, but those results are for a later post.

    What is the conclusion of this test case, or any suggestion(s) regarding it, please?

    Thank you.

    Regards..
  • 51. Re: Finding an optimal db_block_size
    Jonathan Lewis Oracle ACE Director
    John-M wrote:
    Also, can you please explain a little how you calculated that the direct path reads are about 0.5MB?
    Your table is always 2GB, and the number of direct path read waits is always very close to 4,000 regardless of the block size, so the direct path reads are always about 0.5MB. This test isn't measuring anything to do with the block size.
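    The 0.5MB figure falls straight out of the trace numbers, dividing the segment size by the 'direct path read' wait counts from the three TKPROF extracts earlier in the thread:

```python
# Average direct path read size = segment size / number of read waits.
# Wait counts come from the three TKPROF extracts earlier in the thread.
segment_mb = 2048
for table, waits in (("BT4K", 4016), ("BT8K", 3998), ("BT16K", 3938)):
    print(table, round(segment_mb / waits, 2), "MB per read")
# All three come out at roughly 0.5MB, independent of block size.
```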

    What is the conclusion of this test case or any suggestion(s) regarding this test case please?
    You need to decide what you're testing. All this test tells you is the rate at which you can move data across the network from your server and present it at your client.

    Regards
    Jonathan Lewis
  • 52. Re: Finding an optimal db_block_size
    776855 Newbie
    Could you please illustrate this with the numbers from the test? I don't really see how these numbers add up according to your explanation.
  • 53. Re: Finding an optimal db_block_size
    Jonathan Lewis Oracle ACE Director
    user1760063 wrote:
    Could you please illustrate this with the numbers from the test? I don't really see how these numbers add up according to your explanation.
    In the first part of the output John-M explains how he has created the tables so that they are sized at 2GB.

    In the second part of the output John-M lists the tkprof output for each query; the 'direct path read' waits show 4,016, 3,998, and 3,938 read requests in turn. From this we can derive that the average direct path read is close to 0.5MB.

    In the second part of the output we also see the large numbers and times for "SQL*Net message from client", which correspond to most of the time reported for the query. Note how the number of messages is a close match for the number of fetches; you can reasonably assume that the max time for the messages corresponds to a wait that happened after the last fetch and before the next activity, so you can subtract it from the total.
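    Putting that arithmetic to the numbers posted earlier (the SQL*Plus elapsed times, the TKPROF elapsed figures, and the "SQL*Net message from client" totals and max waits), the estimate lands within a couple of seconds of each wall-clock time:

```python
# wall clock ~= tkprof elapsed + (SQL*Net-from-client total - max wait);
# the max wait is the idle gap after the last fetch, so it is subtracted.
figures = [
    # (table, SQL*Plus elapsed in s, tkprof elapsed, SQL*Net total, max)
    ("BT4K",  12 * 60 + 42.66, 29.32, 1013.26, 281.70),
    ("BT8K",  14 * 60 + 2.99,  35.77,  983.45, 178.27),
    ("BT16K", 15 * 60 + 50.24, 29.18, 1681.05, 761.31),
]
for name, wall, elapsed, net_total, net_max in figures:
    estimate = elapsed + (net_total - net_max)
    print(name, round(wall, 2), round(estimate, 2))  # within ~2s each
```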

    Regards
    Jonathan Lewis
  • 54. Re: Finding an optimal db_block_size
    767685 Newbie
    Hi,

    Sir, please have a look at this thread and comment on the testing:

    single col v/s multi col table elapsed time comparison


    Thanks a lot.


    Regards