1. I am looking for some advice on storage tiering on a SAN for my Oracle database. I currently have a SAN with all 15K SAS drives, and I am looking to expand the SAN by adding a bank of 7.2K SATA drives. I would like to relocate our FRA, including our Oracle disk-to-disk backups and our archive logs, to these 7.2K SATA drives, and I was wondering what effect, if any, this would have on our production database?
2. I was also considering placing a development instance entirely on the 7.2K drives and was wondering what to expect performance-wise?
Oracle DB does not know or care about details involving disk speeds, since it relies upon the OS to complete any and all filesystem activity.
Oracle may not know the details (does it?), but it may care. If your slower drives can't archive logs fast enough because, well, they are slower, and you have other things going on there, then in the worst case it could stall your DB. Hopefully you would have enough redo log groups to avoid that, but if everything is going through a single controller, you may still have a problem. Then again, if your system is CPU-bound, it may make no difference at all. But if being CPU-bound means it has to wait for the CPU to issue I/O requests, it might. Hard to say without testing, eh? Anyway, with a SAN the number of spindles often makes more of a difference than rotational speed, and on-board buffering, controller buffering, network issues and so forth are all potential bottlenecks or ameliorants. And it's the bottlenecks that make the difference.
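To make the "can the slower tier keep up with archiving" worry concrete, here is a minimal back-of-the-envelope sketch. All the numbers are invented placeholders; substitute your own peak redo generation rate (from AWR/statspack) and a measured sustained write rate for the 7.2K tier.

```python
# Hypothetical check: can the slower tier absorb redo at the rate the
# database generates it?  All numbers below are made-up assumptions.

redo_gb_per_hour = 20.0      # peak redo generation (take this from your own AWR)
archive_dest_mbps = 80.0     # measured sustained sequential write MB/s of the 7.2K tier

redo_mbps = redo_gb_per_hour * 1024 / 3600   # convert GB/hour to MB/s

headroom = archive_dest_mbps / redo_mbps
print(f"redo rate: {redo_mbps:.1f} MB/s, headroom: {headroom:.1f}x")

# If headroom is close to (or below) 1x, the archiver can fall behind; once
# every online redo group is waiting to be archived, the database stalls.
assert headroom > 1, "archive destination cannot keep up with redo generation"
```

With these made-up numbers the tier has plenty of headroom, but the point is that the comparison is cheap to do with your own peak figures before committing the FRA to slower disks.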
>Oracle may not know the details ( does it? ), but it may care.
I was unaware that any Oracle DB is capable of caring.
Since SQL is the only way to obtain results from any RDBMS, please post SQL & results that demonstrate that "it may care".
If you really want to know whether 7.2K disk speed is good or bad, you need to find out what your DB requires -- read + write IOPS and MB/s at any point in time -- and what capacity the 7.2K disks can provide. If the IOPS/MB/s your DB requires exceeds the available capacity, there will always be a problem.
%Utilization = Requirement / Capacity << 100% -- you should avoid over-allocation or saturation of the disks.
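A quick sketch of that utilization arithmetic. The per-spindle IOPS figure and the workload numbers are assumptions for illustration only; plug in your own measurements (from AWR, iostat, or the array's own stats).

```python
# Minimal sketch of the requirement-vs-capacity check described above.
# All inputs are assumed example values, not measurements.

DISK_IOPS_7200 = 75      # rough rule of thumb for one 7.2K SATA spindle
n_disks = 12             # spindles in the new 7.2K tier

required_iops = 400      # peak read+write IOPS the workload sends to this tier

capacity_iops = DISK_IOPS_7200 * n_disks
utilization = required_iops / capacity_iops

print(f"capacity: {capacity_iops} IOPS, utilization: {utilization:.0%}")
# Keep utilization well below 100%: response times climb steeply as a
# spindle-bound tier approaches saturation.
```

The same shape of calculation works for MB/s throughput; whichever of the two ratios is worse is the one that will bite first.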
"db file sequential read" is one of the key metric where Oracle complains whether storage is good or having some issue. "db file sequential read" > 20 ms indicates storage is really having some issue and requires further investigation.
Thanks for the link. And on page 30: "The weakest link determines the IO throughput."
Of course, this ballpark, back-of-the-envelope approximation can be blown apart by a couple of plan changes in the SQL. And the SQL is the most common cause of performance problems.
The speed of slower disks can make a difference, and that difference varies with how close the I/O is to saturation. In a lightly loaded system, reports and such may simply take twice as long. As either CPU run-queue lengths or I/O wait times increase, things can go south fast. If a development system is not a very close copy of the production system, it may be difficult to replicate performance problems there. Of course, any little difference can cause that too. In my experience it's rare to have a close hardware copy; YMMV.
Craig Shallahamer and Cary Millsap have written some books you may be interested in if you haven't seen them. Find Cary's "Thinking Clearly About Performance" paper.