user620914 wrote:
How can I do this reliably?

To answer that question with a question... how do you predict global weather accurately?
user620914 wrote:
I have certain queries and I want to test which one is faster, and how big the difference is. How can I do this reliably?

Excellent question, with some excellent answers already posted.
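One practical answer to "how can I do this reliably" is to never trust a single timing: repeat each query several times and compare medians, since any one run is distorted by caching and load. A minimal sketch of that harness, using Python's built-in SQLite as a self-contained stand-in for Oracle (the table and queries are illustrative, not from the thread):

```python
# Hypothetical timing harness: run each candidate query several times
# against an in-memory SQLite database and compare median elapsed times,
# since individual runs vary with caching and system load.
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(i % 100,) for i in range(10_000)])
conn.commit()

def median_elapsed(query, runs=5):
    """Run `query` several times and return the median wall-clock time."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()   # drain the cursor fully
        times.append(time.perf_counter() - start)
    return statistics.median(times)

q_a = "SELECT val, COUNT(*) FROM t GROUP BY val"
q_b = "SELECT DISTINCT val FROM t ORDER BY val"
print(f"A: {median_elapsed(q_a):.6f}s  B: {median_elapsed(q_b):.6f}s")
```

The median is preferred over the mean here because a single outlier run (e.g. a cold cache) would otherwise skew the comparison.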
P. Forstmann wrote:
When you say I/O, do you mean physical reads/writes and logical reads/writes?

Yes.. as I/O resource utilisation can and will vary between physical and logical over time. Thus a variance in execution time, but a consistent amount of I/O (assuming things like a stable/consistent execution plan). That makes I/O a better baseline for performance evaluation than elapsed time as the unit of measurement.
user620914 wrote:
Question: is it possible that, given two queries A and B which both take 500s as a whole, A does most of its work in Oracle before returning the first row, while B does only a small part of the work before returning the first row and continues working while returning the remaining rows? So that with A, say, 80% of the total time is spent before returning row 1, while with B 60% of the time is spent before row 1 and 40% on the rest...

What kind of queries would cause this? Doesn't Oracle do more or less all the major work (like joining tables, sorting, etc.) before returning the first row, with the rest just read from some internal cache... or what actually happens?

This is definitely possible. For example, a nested loop join returns the first row while Oracle is still working, whereas a hash join can return the first row only when it has almost completed the job. That is why Oracle has the FIRST_ROWS and ALL_ROWS hints. Likewise, a query with an ORDER BY clause must be (nearly) fully processed before returning anything.
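The streaming-versus-materialising difference can be observed directly by timing the first fetch separately from draining the whole result set. A sketch using SQLite in place of Oracle (table and queries are illustrative): a plain scan can stream rows as it goes, while an ORDER BY must sort the full result before the first row is available.

```python
# Hypothetical sketch measuring time-to-first-row versus total elapsed
# time. A sorted result (like the ORDER BY case discussed above) must be
# materialised before the first row can be fetched; a plain scan streams.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [((i * 37) % 1000,) for i in range(50_000)])
conn.commit()

def first_row_vs_total(query):
    """Return (seconds to first row, total seconds) for `query`."""
    start = time.perf_counter()
    cur = conn.execute(query)
    cur.fetchone()                      # time to first row
    first = time.perf_counter() - start
    cur.fetchall()                      # drain the remaining rows
    total = time.perf_counter() - start
    return first, total

for q in ("SELECT val FROM t", "SELECT val FROM t ORDER BY val"):
    first, total = first_row_vs_total(q)
    print(f"{q!r}: first row after {first:.6f}s, total {total:.6f}s")
```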
Billy Verreynne wrote:
I do not really understand what your goal is with this and what you hope to gain by it.

If there was any kind of consistency in SQL performance, then database products of today would have heavily utilised this for performance management and optimisation. But there isn't..

There's also the issue of scalability. Let's say we have application fooApp - which is well designed and written. A single client runs a copy of this app and has a resource utilisation of x on the database (in terms of memory, CPU, I/O, etc.).

We deploy another 10 clients running fooApp. Does this mean that the resource footprint has now increased from x to 10x? No. Oracle uses shared resources.. which are designed to serve multiple application sessions. The SQLs used by fooApp should be hard parsed once and stored once in the shared pool, whether there is a single fooApp client or a hundred....

On the I/O side, one client session stands a good chance of using faster logical I/O, as another session has already paid the price of getting the data from physical disk into the buffer cache. Etc.

Therefore attempting to somehow "cost" an application in terms of execution time is pretty much flawed.. as the resource utilisation of the application does not scale linearly with the number of application sessions added. That is not how server systems scale - except perhaps Java application servers... ;-)

Thus my lack of understanding for what you're hoping to achieve by seemingly wanting to "cost" your application SQL in Oracle...

I have to say I don't understand your point either :)
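The buffer-cache effect mentioned above (one session pays for the physical read, later sessions get cheap logical reads) can be illustrated with a toy model. This is purely a sketch of the idea, not Oracle internals: the cache is a plain dict shared by all "sessions", and the physical read cost is simulated with a sleep.

```python
# Toy model of the buffer-cache effect: the first "session" pays for the
# slow physical read; later sessions reading the same block get a fast
# logical read from the shared cache. Timings are illustrative only.
import time

buffer_cache = {}                     # shared across all sessions

def read_block(block_id):
    """Return a block, loading it from 'disk' on a cache miss."""
    if block_id in buffer_cache:
        return buffer_cache[block_id]          # logical I/O: fast
    time.sleep(0.01)                           # physical I/O: slow
    buffer_cache[block_id] = f"data-{block_id}"
    return buffer_cache[block_id]

def timed_read(block_id):
    start = time.perf_counter()
    read_block(block_id)
    return time.perf_counter() - start

cold = timed_read(42)   # session 1 pays the physical-read cost
warm = timed_read(42)   # session 2 reuses the cached block
print(f"cold: {cold:.4f}s  warm: {warm:.4f}s")
```

This is also why single-session elapsed-time measurements do not extrapolate to a multi-session workload: the second session's cost depends on what the first one already cached.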
user620914 wrote:
I have to say I don't understand your point either :)

What are you saying - that people should not test their SQL performance? That tools such as autotrace are useless?

You can form your SQL queries better or worse, or select your table structure / indexes better or worse. Some choices may end up executing orders of magnitude slower than others. Obviously you can't get exact measurements ("this query executes in 43123 ns"), and there are a lot of unpredictable variables that affect the end performance. Still, it's often better to test your queries' / tables' performance before implementing them in the application than not.

No.. what I'm saying is that you need a baseline to make an informed decision about SQL performance. Exactly - I'm not questioning the fact that optimising your code (and ALL your code, not just SQL) is a Good Thing (tm), but how you go about that optimisation process.