If you will not need any other indexes, and your table is at least 1MB, then yes, I recommend it.
Are you asking whether it is appropriate to use IOT's in a database where the database block size is 16k? Or are you asking whether it is appropriate to create a separate 16k block size tablespace and a separate 16k block size buffer cache in a database whose block size is something other than 16k and to put IOT's in that tablespace?
What version of Oracle are you using?
How are you managing your SGA?
Yes, a separate tablespace that has a 16k block size. We are using Oracle 9i. The SGA isn't dynamic in 9i.
OK. What problem are you trying to solve and why do you believe a 16k block size tablespace might be the solution to that problem? Leaving aside the question of whether multiple block sizes ever improve performance for the moment, I believe it's fair to say that just about everyone agrees that it's not something to be trifled with. You want to at least have some reasonable evidence that it's going to help things.
Adding and managing multiple buffer caches is generally a pain, made worse when you eventually upgrade and dynamic SGA management doesn't work right. Unless you are spending a lot of DBA effort monitoring things, or you have an exceptionally good handle on your workload, you're very likely going to end up wasting RAM in one or the other pool that could be exceptionally useful to objects with the other block size.
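For context, setting up what the original poster describes in 9i looks roughly like this. This is only a sketch: the cache size, tablespace name, datafile path, and table are made-up examples, not a recommendation.

```sql
-- Carve out a dedicated buffer cache for 16k blocks (an example
-- value; this memory comes out of the total SGA, it is not added).
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Create a tablespace that uses the non-default block size.
-- The datafile name and sizes here are hypothetical.
CREATE TABLESPACE ts_iot_16k
  DATAFILE '/u01/oradata/ts_iot_16k_01.dbf' SIZE 500M
  BLOCKSIZE 16K;

-- An index-organized table placed in the 16k tablespace.
CREATE TABLE order_lookup (
  order_id  NUMBER,
  line_no   NUMBER,
  item_data VARCHAR2(100),
  CONSTRAINT order_lookup_pk PRIMARY KEY (order_id, line_no)
) ORGANIZATION INDEX
  TABLESPACE ts_iot_16k;
```

Note that the db_16k_cache_size setting must be in place before the tablespace can be created, which is exactly the extra buffer-cache management overhead discussed above.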
There is a debate about creating multiple blocksizes.
Some claim that the increased human management component of watching multiple buffers is too cumbersome, and that the benefits are not great enough to warrant the new blocksize.
Me, I use scripts to manage the monitoring of multiple data buffers, so that I can have larger blocksizes, but you need to run a careful test to see if your specific database sees any improvement in throughput. In my experience, the results can range from negative (the new blocksize decreases throughput) up to over 20%; it depends on many, many factors.
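A careful test of the kind described above might rebuild one representative index into the new tablespace and then compare buffer activity per cache. The index and tablespace names below are hypothetical; the view is the standard v$buffer_pool_statistics, which reports one row per configured buffer pool.

```sql
-- Move one representative index into the 16k tablespace.
ALTER INDEX orders_cust_idx REBUILD TABLESPACE ts_16k;

-- Re-run a representative workload, then compare activity in the
-- default cache versus the 16k cache.
SELECT name, block_size,
       db_block_gets, consistent_gets, physical_reads
  FROM v$buffer_pool_statistics;
```

Comparing these figures before and after the move, under the same workload, is one way to get the "reasonable evidence" asked for earlier in the thread.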
Here are my notes on this issue:
Hope this helps. . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference":
Hi,

"Some claim that the increased human management components of watching multiple buffers is too cumbersome, and that the benefits are not great enough to warrant the new blocksize."

Or that there may indeed be no benefits in most environments.

Hi Don,

"Me, I use scripts to manage the monitoring of multiple data buffers, so that I can have larger blocksizes, but you need to run a careful test to see if your specific database sees any improvements in throughput. In my experience, the results can range from negative (the new blocksize decreases throughput), up to over 20%; it depends on many, many factors."
So you finally agree the results of multiple blocksizes can be negative in your experience ...
Why then do you still suggest, in the soon-to-be-infamous "Hands On Course Requirement" thread, that "it remains great advice for senior DBAs" that
"One of the very first things the Oracle 9i database administrator should do is to migrate all of their Oracle indexes into a large blocksize tablespace. Indexes will always favor the largest supported blocksize"
when you now claim to have experience of seeing negative results????
And why did you claim that "It's always worked for my clients" when you now say that in your experience the results can be negative?
You can't have it both ways Don ...
Ok, your (RF & DB) last little spat was quite amusing and at times informative, but let's not let this thread replicate the other one, Richard.
Why don't you just respond to the original question with your own (often impressive) technical input rather than trying to bait Don into another tit-for-tat diatribe?
Multiple block sizes help in fine-tuning I/O. Also, index management is better with a bigger block size.
True! I wonder how many forum users (novices) have been confused and put off by the "war of words" that JL and RF have with DB.
create or replace trigger WORD_WAR
  after COMMENT of DB, RF, JL on EVERY_THREAD
  for each REPLY
begin
  for i in ∞ loop
    output := '#%*#$%#$*% %#%#$%#$%#$#$%#$%#$%#$';
  end loop; -- (Can't be)
end;

and the output looks like this:

if user in (OP, Novice) then
  forget about this
elsif user in ('JL', 'DB', 'RF') then
PS. Sure to get some fiery words from above GURUS.
I had resisted the temptation to respond, but Hemant makes a good point - I don't necessarily agree with the point, but it is a good point. I don't want to drag this thread too far off topic, but I think that it is headed in that direction anyway.

I was an Oracle novice at one time, as I am sure most in this group will also admit for themselves. I recall several years ago, shortly after reading "Oracle Performance Tuning 101", trying to improve database performance by reviewing the initialization parameters, redo log configuration, and general server configuration. One of the initialization parameters that I investigated was LOG_BUFFER, which at the time on one of the production databases was set to 262,144 bytes (256KB). A Google search found several pages with information related to this parameter, and one of the first pages found by Google showed wait events from a Statspack report that, according to the website, indicated a problem with the LOG_BUFFER parameter, which was configured at 512KB.
The wait events shown on the page from the Statspack report included "log file parallel write", "log file sync", "db file parallel write", and "control file parallel write". The notes that I recorded while reading "Oracle Performance Tuning 101" indicated that for Oracle 8i, 512KB is the default value for LOG_BUFFER, and that the value of that parameter should only be increased if there are wait events associated with the redo log buffer ("log buffer space" wait event). This caused a great deal of confusion for me. One book that brought me significant performance improvements indicated that "log buffer space" wait events indicated the potential for an undersized LOG_BUFFER, and one of the top hits in a Google search demonstrated that the existence of "log file parallel write", "log file sync", "db file parallel write", and "control file parallel write" indicated an undersized LOG_BUFFER.
So, I investigated the wait events found on the web page, using the "Oracle Performance Tuning 101" book pages 175-176.
log file parallel write: "Waits associated with writing of the redo records from the redo log buffer to disk. Usually indicates slow device(s) or contention where the online redo logs are located."
log file sync: "Waits associated with the flushing of redo log buffers on a user commit. If the waits are persistent, it may indicate device contention where the online redo logs are placed and/or slow devices."
db file parallel write (pg 37): "Indicates waits related to the DBWR process. May be related to the number of DBWR processes or DBWR I/O slaves configured. May also indicate slow or high contention devices."
control file parallel write: not found in the book, but later found that it might have to do with disk contention where the control files are located.
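For reference, the events discussed above can be pulled straight out of the instance rather than read off a Statspack report. This is a sketch against the standard v$system_event view; the event list simply mirrors the ones examined in this post.

```sql
-- Instance-wide wait totals for the events discussed above.
-- Only a sustained 'log buffer space' figure actually points at an
-- undersized LOG_BUFFER; the others point at write/device performance.
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('log buffer space',
                 'log file parallel write',
                 'log file sync',
                 'db file parallel write',
                 'control file parallel write')
 ORDER BY time_waited DESC;
```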
So, what did I determine was the cause of the wait events? Slow disks, or disks with write performance problems. It is so easy to become confused when supposedly reliable sources suggest entirely different causes for a particular performance problem.
Understanding how to push Oracle to function optimally is sometimes confusing. When someone tests a configuration for performance and finds that in one situation the configuration performs quickly, and in another configuration performs very slowly, it is necessary to look at what variables changed, and how those changes influenced the results, so that repeatable results may be obtained. I am reminded of a term from grade school, called "the scientific method". As I recall, this method does not necessarily apply only to the pursuit of science, but may also apply to the pursuit of understanding. The process involves restricting the change to one controllable variable, and examining how that variable affects the outcome of an experiment. This implies that if the environment is the same, and only the one controllable variable is changed in exactly the same way, the same results will always be produced due to the change in that one controllable variable.
Various definitions for "the scientific method"
"Date: circa 1810 : principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses."
"It is important for your experiment to be a fair test. A 'fair test' occurs when you change only one factor (variable) and keep all other conditions the same."
"An experiment that demonstrates a hypothesis must be 'repeatable'. This means that anyone who performs the experiment correctly should get the same results."
When testing a hypothesis related to optimal functionality of Oracle, what happens if a test is not repeatable? What happens if the change in one variable is actually the change in many variables?
Without the extended discussion and feedback of the topics of hypothesis by Mr. Burleson, Jonathan Lewis, Richard Foote, Tom Kyte, and the other thousands of contributors on this forum, how will a novice user understand what hypotheses really work, and are reproducible in similar environments? Novices need to be able to understand the logic behind a hypothesis, and if no such logic exists, the novice needs to determine whether or not that hypothesis really works. That is part of the magic of forums like this - incorrect answers tend to attract the attention of people with corrected answers.
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
I know where you're coming from I do.
However, what you need to remember is that these threads can be viewed by an Oracle novice next week, or next month, or in the years to come. Wouldn't it be unfortunate if someone read all this nonsense regarding multiple blocksizes and only the one incorrect and misleading point of view was discussed?
Yes, I agree it's tiresome that the same old conflicting advice is repeated again and again. However, unless this conflicting and invalid advice is also questioned again and again, future Oracle novices will only continue to be misled and confused.
The fact these questions and obvious contradictions remain unanswered will hopefully serve as a warning to those who stumble across these threads.
"how will a novice user understand what hypotheses really work, and are reproducible in similar environments"

It would help the new DBA if each hypothesis was presented TO him instead of being thrown at someone else entirely.
Appreciate the honest response. I was trying to point out that you presented your disagreement and not your argument, that is where the confusion will lie when these threads are read.
I consider myself only a mid-level DBA; I like reading the comparative advice, so hopefully that will be displayed more consistently.