MPasha, in order to notice an increase, you need to have two results. There is only one result in your post.
Your post states that ASM is causing the increase in the log file sync wait. Did you run two otherwise identical tests, one with the redo log files in ASM and one with the redo log files outside ASM? That is the only way to prove ASM is indeed slower.
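To make that comparison concrete, one rough way to reduce each run to a single number is the average "log file sync" wait, computed from cumulative wait counters of the kind v$system_event exposes. A minimal sketch, with made-up snapshot numbers:

```python
# Hypothetical cumulative counters for "log file sync", as you would
# snapshot them from v$system_event at the start and end of each run.
def avg_wait_ms(start, end):
    """Average wait in ms over the interval.

    start/end are (total_waits, time_waited_in_centiseconds) snapshots.
    """
    waits = end[0] - start[0]
    centisecs = end[1] - start[1]
    if waits == 0:
        return 0.0
    return centisecs * 10.0 / waits  # 1 centisecond = 10 ms

# Test A: redo logs in ASM (numbers invented for illustration)
asm = avg_wait_ms((1_000, 5_000), (51_000, 30_000))  # 5.0 ms
# Test B: redo logs on a cio-mounted filesystem
fs = avg_wait_ms((1_000, 5_000), (51_000, 15_000))   # 2.0 ms
print(asm, fs)
```

Only the delta between the start and end snapshots of each run is meaningful, since the counters are cumulative since instance startup.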
Please mind that RAID5 is not the optimal solution for write-intensive I/O, which is exactly what Oracle does with the online redo log files in a normal situation (see http://www.baarf.com for information about RAID levels and performance).
Currently the same database is running in production on a single node, using the same DS8100 storage unit.
Previously we had the log file sync issue on the single node as well; after investigation we moved the redo logs to a separate mount point mounted with the cio option, which resolved the problem.
Now we are planning to move this database to a two-node RAC for high availability, and we are performing stress tests on the cluster to check for performance issues.
If someone can shed light on this, it would be very helpful in resolving the log file sync issue in RAC.
The "cio" (concurrent I/O) mount option makes I/O calls bypass the operating system buffer cache, just as "dio" (direct I/O) does (cio behaves differently for writes).
This prevents the operating system from doing "double buffering": a normal (buffered) read first reads the disk contents into the operating system buffer cache, and only after that finishes is the same data copied from the buffer cache and "given" to the requesting process, which (of course) costs extra context switches and CPU power.
"cio" is more efficient than "dio" when multiple writers access the same file (hence "concurrent I/O"), but in a normal configuration there is only a single process writing to the online redo log files (the log writer).
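The double-buffering cost described above can be sketched as a toy copy count. This is a conceptual illustration only, not real OS code, and the function names are invented:

```python
# Toy model of the data copies in each read path (illustration only).
def buffered_read_copies():
    copies = 0
    copies += 1  # 1st copy: disk -> OS buffer cache
    copies += 1  # 2nd copy: OS buffer cache -> process buffer
    return copies

def direct_read_copies():
    # dio/cio: data is transferred straight into the process buffer,
    # so the intermediate buffer-cache copy (and the context switches
    # and CPU time that go with it) disappears.
    return 1

print(buffered_read_copies(), direct_read_copies())  # prints "2 1"
```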
ALL the parity-based RAID levels (3, 4, and 5, for example) can have a significant write penalty. It is generally a bad idea to place the online redo log files on such storage.
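That penalty is easy to quantify: a small (partial-stripe) RAID5 write must read the old data block and the old parity, then write the new data and the new parity, i.e. four physical I/Os per logical write, versus two for mirroring. A back-of-envelope sketch (the log writer rate below is a made-up number):

```python
# I/O amplification for small writes (read-modify-write arithmetic).
def raid5_small_write_ios(logical_writes):
    # read old data + read old parity + write new data + write new parity
    return logical_writes * 4

def raid10_small_write_ios(logical_writes):
    return logical_writes * 2  # one write per mirror side

lgwr_writes_per_sec = 500  # hypothetical log writer rate
print(raid5_small_write_ios(lgwr_writes_per_sec))   # 2000 physical I/Os
print(raid10_small_write_ios(lgwr_writes_per_sec))  # 1000 physical I/Os
```

A large write-back cache on the array can hide much of this, which is why the penalty is not always visible in practice.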
Do you know what you want to achieve? There always will be waits!
Hmmm... RAID5 is bad... probably not!
Rather than blaming RAID5, why don't you look at where the waits actually are, i.e. in the box/SCSI driver or in the array? Service time vs. time waiting for service will tell you that. My feeling is that you're loading the server with too many random write I/Os (inefficient commit processing).
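That split is simple iostat-style arithmetic: response time minus service time is the time the I/O spent queued in the host before the array ever saw it. The numbers below are hypothetical:

```python
# Split I/O response time into host-side queue wait and array service
# time, iostat-style. All numbers here are hypothetical.
def split_response_time(await_ms, svctm_ms):
    queue_wait = max(await_ms - svctm_ms, 0.0)
    return queue_wait, svctm_ms

queue, service = split_response_time(await_ms=25.0, svctm_ms=4.0)
print(queue, service)  # 21.0 4.0
```

In this example the I/O waits 21 ms in the host for a 4 ms service: the bottleneck sits in front of the array (driver, HBA, bus), and RAID5 is off the hook.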
I dealt with a similar issue recently: the customer was running great for over a year (on RAID5), then I/O performance dropped, and everybody blamed RAID5. Heck, that hadn't changed. After over a week of triaging the thing, it turned out that a sysadmin had plugged too many HBAs into the PCI-X bus, causing I/Os to wait in the host. Bottom line: don't believe all the FUD; do what best fits your application requirements.
Not sure how your DS8xxx is configured, but if you're doing normal redundancy on top of parity redundancy, that may not be the best config. You'd get better performance deploying external redundancy with ASM disks from the two different DS8xxx units. Or your storage admin could let you configure the two DS8xxx units as semi-JBOD and then do normal redundancy (but I doubt it).
Hope that helps!
Putting redo logs on RAID5 is not the best way to gain performance.
RAID5 is not the fastest way to write data; this is undeniable.
Nevertheless, you are right: if you have been running on RAID5 for a long time and your users find the application response time good, and then suddenly response time becomes really poor, you need to investigate what has changed in the application or outside the RAID5 array.
But you cannot say RAID5 is the fastest storage method for write-intensive files (such as redo logs).
I agree RAID5 is not the best for any write-intensive system, but I wanted folks to think openly about their choices when deploying solutions.
You have three areas to deal with: cost, reliability, and performance... pick one; you can't have it all.
If your cache controller can keep up with the I/O rate (specifically the write rate), i.e. the write destaging is not impacted, then you have several options, one of which could be RAID5.
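That destaging condition reduces to simple arithmetic: the write cache only buys time when writes arrive faster than destaging drains them. A sketch with a hypothetical cache size and made-up rates:

```python
# How long until the controller's write cache fills, if incoming writes
# outpace destaging. Cache size and rates below are hypothetical.
def seconds_until_cache_full(cache_mb, incoming_mb_s, destage_mb_s):
    net = incoming_mb_s - destage_mb_s
    if net <= 0:
        return float("inf")  # destaging keeps up; cache never fills
    return cache_mb / net

print(seconds_until_cache_full(4096, 120.0, 100.0))  # 204.8 seconds
```

Once the cache fills, writes hit raw RAID5 latency, which is typically the moment a configuration that "ran great for a year" suddenly falls over.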