54 Replies Latest reply: Aug 18, 2010 5:46 PM by 767685

    Finding an optimal db_block_size

    767685
      Hi,

I want to find an optimal DB_BLOCK_SIZE by running tests against 4k, 8k, and 16k block sizes. I have studied one huge OTN forum thread about db_block_size, but I'm still confused; I did not get much from it.

So I want to conduct my own tests for my research on finding an optimal db block size. I have a few questions I want to settle before running the tests, so that later, when I post my results here for expert opinions, the tests will have been done properly and I will have noted the right things.

First of all, is it OK to create 4k and 16k tablespaces in my 8k-block-size 11g (11.1.0.6.0) database, or do I have to create separate 4k and 16k databases? Does it matter, or is there no difference either way?

Secondly, if I do have to create separate 4k and 16k databases, do I also need to set db_file_multiblock_read_count? And what if I do not need to create separate databases? Something like this:

      2k block * 512 MBRC = 1MB
      4k block * 256 MBRC = 1MB
      8k block * 128 MBRC = 1MB
      16k block * 64 MBRC = 1MB

to equalize the testing, or will Oracle itself set the value of db_file_multiblock_read_count?

      Currently, db_block_size=8192 and db_file_multiblock_read_count=64
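If the aim is to equalize the I/O size across block sizes as in the table above, a sketch of what that might look like follows; the values are illustrative only, not recommendations, and each statement would be run in the corresponding test database:

```sql
-- Sketch: pin db_file_multiblock_read_count so that
-- block_size * MBRC stays at 1MB in every test database.
-- Note: in 10gR2 and later, leaving this parameter unset lets
-- Oracle derive it from the platform's maximum I/O size.

-- In the 4k database:
ALTER SYSTEM SET db_file_multiblock_read_count = 256;

-- In the 8k database:
ALTER SYSTEM SET db_file_multiblock_read_count = 128;

-- In the 16k database:
ALTER SYSTEM SET db_file_multiblock_read_count = 64;
```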

Thirdly, what should I measure for proper testing? Elapsed time (i.e., that the query is faster at a particular db_block_size), or something else?

Thanks a lot.

      Regards
        • 1. Re: Finding an optimal db_block_size
          damorgan
          The optimal block size is always 8K with two possible exceptions:

          1. You are using RAC and have contention issues resolved by moving to a smaller block size.
          2. You are using 11g advanced compression and have CPU to spare.

          All the testing in the world will not overcome two very simple facts.

          1. Oracle builds and beta testers test 8K blocks
          2. Other block sizes may, and sometimes do, have bugs

          The performance difference in using any block size other than 8K is mostly hyperbole and/or transient.
          • 2. Re: Finding an optimal db_block_size
            Hans Forbrich
            John-M wrote:
I did not get much from that link.
            I'm not surprised.

            My conclusion from that thread (and from my own tests, evaluations and studies) is that there are a few 'boundary' situations that might benefit from changing from the default block size. However, it is usually best to ensure other aspects (memory, SQL statements, indexing, materialized views, parallelism, and so on) are all managed properly first. Then, and only then, change block size IF tests show that there are still problems.

            Your tests may conclude that you have found a boundary condition (or two). If so, you need to decide whether that condition reflects your environment, or reflects it often enough to use a non-default block size.

            Is it worth it? IMO, probably not.

            But if you really want to pursue it, I'm sure there are plenty of people who are willing to spend precious time on the topic.
            • 3. Re: Finding an optimal db_block_size
              damorgan
Or the OP could just pay attention to the experts who have been beating this one up for years and have all come to the same conclusion.

              For example:
              http://www.freelists.org/post/oracle-l/block-size,7

              Re: Index blocksize

              http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:44779427427730#474704400346599870

If Greg Rahn, Jonathan Lewis, Richard Foote, and Tom Kyte all agree on something ... and you feel otherwise ... you
should take a serious pause and reconsider your assumptions. You are not necessarily wrong. But the likelihood is
approaching 1.0.

              By "you" I don't mean "you" personally of course.
              • 4. Re: Finding an optimal db_block_size
                Hans Forbrich
                John-M wrote:
First of all, is it OK to create 4k and 16k tablespaces in my 8k-block-size 11g (11.1.0.6.0) database, or do I have to create separate 4k and 16k databases? Does it matter, or is there no difference either way?
                Please note that the default block size affects things other than the predominant data blocks. Do not forget the effect on undo, temp, system, sysaux, and so on.

To do your tests properly, you will need to see both combinations: a database built to each block size with a complete set of tests, and then, within each of those, each of the non-database-default sizes used for additional tablespaces.
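One practical prerequisite for those mixed-tablespace tests: a tablespace with a non-default block size needs a matching buffer cache configured first. A minimal sketch, with a made-up file path, cache size, and object names:

```sql
-- In an 8k-default database, allocate a buffer cache for 16k blocks,
-- then create a tablespace that uses that block size.
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/testdb/ts_16k_01.dbf' SIZE 500M
  BLOCKSIZE 16K;

-- Place a test table in the new tablespace:
CREATE TABLE t_16k TABLESPACE ts_16k
  AS SELECT * FROM all_objects;
```

The same pattern (db_4k_cache_size, BLOCKSIZE 4K) applies for a 4k tablespace.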

                And when you do that, note the assumptions on your data sets, and in particular highlight the boundary conditions that you are meeting.

To do the tests properly and truly come to conclusions, I see the need for 24 tables per block size: (very small, small, medium, large, very large, and extremely large row counts) x (many rows per block, medium rows per block, few rows per block, forced overflow). Then I see the need to use various index combinations. Then I see the need for various concurrency load levels, probably around 10 different session groupings with different mixes of SELECT and DML. So somewhere around 400 tests per database block size, and another round of 400 for each non-default block size, so you can study the impact of the non-default block against the different data dictionary/undo combinations.

                What this will provide is the set of boundary conditions which might benefit from varying from the Oracle-supplied default - but that will be valid only for the disk configurations on which you are testing. So best to make it a full harness, so it can be reproduced for various I/O controller and end disk combinations.

                Interesting challenge. Glad I'm not doing it ...

                Edited by: Hans Forbrich on Apr 20, 2010 4:58 PM

                Note that doing these tests without forcing multi-session concurrency (IOW, only looking at single session tests) is absolutely useless. At the very least you will need to ensure that you have (arbitrary) 5 sessions hitting any table concurrently.
                • 5. Re: Finding an optimal db_block_size
                  jgarry
                  Note that doing these tests without forcing multi-session concurrency (IOW, only looking at single session tests) is absolutely useless. At the very least you will need to ensure that you have (arbitrary) 5 sessions hitting any table concurrently.
                  5 is way too low. You want to chart ITL issues by blocksize! :-)
                  • 6. Re: Finding an optimal db_block_size
                    damorgan
                    I agree: 5 sessions is not nearly enough to have meaning.

If I were to do it, and I'm not interested either, I would either download a copy of Dominic Giles' SwingBench (at minimum) or use the Real Application Testing facility (most realistic).
                    • 7. Re: Finding an optimal db_block_size
                      Hans Forbrich
                      damorgan wrote:
                      I agree: 5 sessions is not nearly enough to have meaning.
Just giving an idea of the scope. To do the job properly, a whole lot more is required than the few things I put together as a skeleton. In fact, to do the job properly (exercise and understand all the aspects and implications of changing the block size) is likely a task significantly larger than the typical effort a vendor puts into TPC benchmarks.

And, as Joel notes, I did not even hit some of the more interesting requirements and things to evaluate. At the least, it needs to look at OLTP, DW, and mixed loads.

So perhaps by now the OP will start to understand that the open-ended question has huge test case implications, and it is time to narrow the focus significantly. At that point the argument devolves to the usual one: "it depends, and you should benchmark for your situation and your application". And short of that effort, the default (in this case) is good 'nuff.

                      :-)
                      • 8. Re: Finding an optimal db_block_size
                        damorgan
                        Which, combined with the bugs I noted earlier, is why no one I know uses anything other than 8K blocks.

                        The marginal performance improvement one might find would almost never justify the expense of fully examining the issue.
                        Unfortunately most people seem to think SET TIMING ON is a rigorous test environment.
                        • 9. Re: Finding an optimal db_block_size
                          767685
                          Hi,
                          Unfortunately most people seem to think SET TIMING ON is a rigorous test environment.
What do you mean by that, please? I've seen many times that you only talk about 8k block sizes. OK, you might be 100% right, but let me do my research by testing all the block sizes and let me conclude something. :) I know I can't conclude in a hurry; it needs a lot of time. But at least give some encouragement by suggesting something.

My question regarding elapsed time has not been answered:
Thirdly, what should I measure for proper testing? Elapsed time (that the query is faster at a particular db_block_size), or something else?
Or do I just check that elapsed time is faster, for example, at the 8k block size, so that one is the best?

                          Thanks

                          Regards
                          • 10. Re: Finding an optimal db_block_size
                            6363
                            John-M wrote:
                            Unfortunately most people seem to think SET TIMING ON is a rigorous test environment.
What do you mean by that, please?
My question regarding elapsed time has not been answered:
You can see an almost 100% performance improvement on that basis simply by running the query again:
                            SQL> create table t as select o.* from all_objects o,
                              2    (select null from dual connect by level <= 5);
                            
                            Table created.
                            
                            Elapsed: 00:00:07.00
                            SQL> select count(*) from
                              2    (select owner, count(*) from t
                              3     group by owner);
                            
                              COUNT(*)
                            ----------
                                   295
                            
                            Elapsed: 00:00:00.34
                            SQL> /
                            
                              COUNT(*)
                            ----------
                                   295
                            
                            Elapsed: 00:00:00.18
I would be very surprised if changing block size could make that much of an improvement. Perhaps you should explain on what basis you believe changing the block size will improve performance, instead of, say, changing the OS, changing the server, replacing the network cards, changing one of the hundreds of initialization parameters, or any combination of all of these.

                            Or are you just changing something and measuring elapsed time because you can, instead of finding something that is slow, understanding why and then addressing the specific problem?
                            • 11. Re: Finding an optimal db_block_size
                              767685
I would be very surprised if changing block size could make that much of an improvement. Perhaps you should explain on what basis you believe changing the block size will improve performance, instead of, say, changing the OS, changing the server, replacing the network cards, changing one of the hundreds of initialization parameters, or any combination of all of these.
                              Or are you just changing something and measuring elapsed time because you can, instead of finding something that is slow, understanding why and then addressing the specific problem?
Why would you be surprised? Of course changing the block size can make a big improvement. I've done some tests already but will post my results after doing them properly. Well, I have an interest in this topic; when I'm done with this work, I can explain whether changing the block size improves performance or not.

The other things you mention, like the server, OS, etc., are a secondary issue. I'm testing everything on the same machine: hardware, OS, everything will be the same for each block size, so I can see the difference.
                              You can see an almost 100% performance improvement on that basis simply by running the query again,
Yeah, that I know. It's coming from the cache, so obviously there will be a 100% improvement. But what do you want to say with that? Running the query again is useless. I always run it only once; the second time it will come from the cache, so there is no need to note the elapsed time.

Even for my big (800MB) tables, I delete everything and then shut down and restart the database and my computer, so there is no doubt in my mind that data is coming from cache/RAM during my second block-size run.

                              Regards
                              • 12. Re: Finding an optimal db_block_size
                                767685
                                Hi,

                                Thanks a lot for your useful suggestions. I appreciate that.

Will you please answer my questions:
What should I measure for proper testing? Elapsed time (that the query is faster at a particular db_block_size), or something else?
If I do have to create separate 4k and 16k databases, do I also need to set db_file_multiblock_read_count? And what if I do not need to create separate databases?

What is the relation between these two parameters? I'm confused about the effect of db_file_multiblock_read_count on db_block_size. I mean, does the number of blocks read per I/O interact with the block size?
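Incidentally, the two parameters can be inspected together; their product approximates the size of one multiblock read request. A sketch (actual values vary by platform and configuration):

```sql
-- db_block_size * db_file_multiblock_read_count ~= bytes read per
-- multiblock I/O during a full scan (e.g. 8192 * 64 = 512K).
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_block_size', 'db_file_multiblock_read_count');
```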

                                Regards
                                • 13. Re: Finding an optimal db_block_size
                                  jgarry
                                  John-M wrote:
                                  I would be very surprised if changing block size could make that much of an improvement. Perhaps you should explain on what basis you believe changing the block size will improve performance instead of say changing the OS, changing the server, replacing the network cards, changing one of the hundreds of initialization parameters or any combination of all of these.
                                  Or are you just changing something and measuring elapsed time because you can, instead of finding something that is slow, understanding why and then addressing the specific problem?
Why would you be surprised? Of course changing the block size can make a big improvement. I've done some tests already but will post my results after doing them properly. Well, I have an interest in this topic; when I'm done with this work, I can explain whether changing the block size improves performance or not.
                                  See this: http://richardfoote.wordpress.com/category/index-block-size/

The other things you mention, like the server, OS, etc., are a secondary issue. I'm testing everything on the same machine: hardware, OS, everything will be the same for each block size, so I can see the difference.
                                  You can see an almost 100% performance improvement on that basis simply by running the query again,
Yeah, that I know. It's coming from the cache, so obviously there will be a 100% improvement. But what do you want to say with that? Running the query again is useless. I always run it only once; the second time it will come from the cache, so there is no need to note the elapsed time.
Ohhhhh, this could be wrong on so many levels. If you just want a single-user system reading everything from a big dumb disk, just use Access or something; you don't need Oracle.

Modern systems have cache in several places, and Oracle is biased towards optimistic write and concurrency. To find an optimal block size, you need to define test conditions appropriate to your use under load. Dan and Hans have pointed out test harnesses and many other issues, not the least of which is configuration-dependent bugs. Running things only once is like judging how fast your PC is by how long it takes to boot up. That's just irrelevant to how fast it is when you are running things.

Even for my big (800MB) tables, I delete everything and then shut down and restart the database and my computer, so there is no doubt in my mind that data is coming from cache/RAM during my second block-size run.
                                  800MB is not big. I've lost thumb drives bigger than that.
                                  • 14. Re: Finding an optimal db_block_size
                                    767685
Well, I do not believe in running a test only once and concluding the rest from that.

The 800MB table is big in the sense that it is bigger than the RAM allocated at database startup. I had reduced the RAM allocated to my database to the minimum level it accepts.

                                    Thanks for the link.


                                    Regards