


      In our quest to reduce operating costs we are consolidating databases and eliminating RAC in favor of standalone servers. This is a business decision that is a certainty.  Our SAN has been upgraded, and the new database servers are newer, faster, etc.

      Our database runs with Grid Infrastructure.  Our data diskgroup is RAID-5 and our FRA is RAID-1+0.  ASM uses external redundancy.  All disks are of equal size with equal storage performance and availability.

      Previously our databases were on separate clusters by function: OLTP, REPORTING and ENTERPRISE CONTENT MANAGEMENT. Development/Acceptance shared a cluster, while production was separate.

      The new architecture combines different functions onto one server for dev/acc, and another for production.  This means they will all be using the same ASM instance.  We have typically followed Oracle’s recommendation to have two disk groups, one for data and the other for FRA.  That worked well when a single database was the only one using the data diskgroup.  Now that we are combining databases, is the best practice still to have one data diskgroup and one FRA diskgroup?  For example, production will house 3 databases.  OLTP is 500 GB, Reporting is 1.3 TB, and Enterprise Content Management is 6 TB and growing.

      My concern is that if all 3 databases access the same data diskgroup, the smaller OLTP database must traverse the 6 TB of content management data.  Or is this thinking flawed?

      Does this warrant separate diskgroups?  Are there pros and cons to this?

      Any insights are appreciated.

      Best Regards,



          What matters for performance and redundancy is not the number of ASM disk groups, but how the disk groups are built.  ASM disk groups are a logical construct; performance and redundancy depend on the underlying physical aspects, such as the storage controller and the number of devices.



          What do you mean by "...must traverse through the 6 TB of content management"? I don't understand that.



          Whether you have different physical disk storage behind your data and fra disk groups depends on your SAN configuration, but different physical devices for the two are a good idea.  Performance of RAID-5, however, depends on the total number of drives involved; it can provide excellent read performance, but shows relatively poor write performance and requires a long time to rebuild after a disk failure.  I'm not sure RAID-5 is what you really want.


          If the space/cost ratio is important to you, I would rather suggest creating 8 devices on your SAN and using them to build 2 ASM failure groups of 4 devices each.  Then build your DATA disk group from those 2 failure groups with normal ASM redundancy.  I think this will give you much better performance, recovery and space management.  Keep in mind that ASM is not RAID and works differently.  Do the same for fra.
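
          A minimal sketch of what that might look like; the disk group name, failure group names and device paths are placeholders, not from this thread:

          ```sql
          -- Hypothetical example: 8 SAN devices split into two ASM failure
          -- groups.  With NORMAL redundancy, ASM mirrors each extent across
          -- the failure groups, so losing all disks in one group survives.
          CREATE DISKGROUP data NORMAL REDUNDANCY
            FAILGROUP fg1 DISK
              '/dev/mapper/san_disk1',
              '/dev/mapper/san_disk2',
              '/dev/mapper/san_disk3',
              '/dev/mapper/san_disk4'
            FAILGROUP fg2 DISK
              '/dev/mapper/san_disk5',
              '/dev/mapper/san_disk6',
              '/dev/mapper/san_disk7',
              '/dev/mapper/san_disk8'
            ATTRIBUTE 'compatible.asm' = '11.2';
          ```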


          Your fra should actually be at least twice the size of your data volume if you use it for backups, so I'm surprised you chose RAID-5 for data and RAID 10 for fra - if space is the concern it should rather be the other way around.  But again, I'd rather use ASM redundancy, which is a lot more flexible than fixed RAID levels, which you cannot modify without destroying the whole enchilada.


            Thank you for the reply, I appreciate it.


            We have good SAN storage underneath, and we're comfortable with the performance and redundancy of ASM.  Our cost savings will come from paring down the number of Oracle server licenses; to accomplish that we must combine databases.  One small, one medium, one large, all sharing the same ASM instance.  Content Management is the large database.  So if I have only one data diskgroup and share it among the small, medium and large databases, am I hurting the small guy?  If the small database's datafiles are dispersed throughout the now really big data diskgroup, is it hurt by that?  Would it be better to have three data diskgroups, one for each database?


              I cannot see how this would matter, since all access goes through one and the same ASM instance anyway.  The same goes for disk partitions, for instance: if you mirror 4 partitions that reside on one and the same LUN, the performance will be the same as accessing one partition.  The size of a volume does not affect I/O performance.  If you create separate disk groups you will eventually only waste disk space, since one disk group cannot access the space of another, and you may have to resize your disk groups later if one database demands more space than you anticipated.


                SherrieK wrote:


                We have good SAN storage underneath, the performance and redundancy of ASM we're comfortable with.  Our cost savings is going to be in paring down the number of Oracle server licenses, to accomplish that we must combine databases.

                There is a basic issue with multiple db instances on the same server - paying instance overheads in terms of server memory and system processes for each and every instance. 5 instances on the same server means 5 db writers, 5 log writers, etc. This DOES NOT scale.


                And for this exact reason RAC exists - you cannot scale by adding more database instances to a single server.


                At least not with 11g. 12c has an answer for this - a multitenant architecture.


                So if you do want to consolidate, and do this properly, where scalability and performance are part of the solution (and not lost because of the solution), consider using Oracle 12c's multitenancy - a single container database into which each of your other databases can be plugged.
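
                As a rough sketch of what plugging a database in looks like (the PDB name and XML path here are made up for illustration):

                ```sql
                -- On the source non-CDB (opened read-only), generate a
                -- manifest describing its datafiles:
                BEGIN
                  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/oltp.xml');
                END;
                /

                -- On the 12c container database, plug it in as a PDB,
                -- reusing the existing datafiles in place:
                CREATE PLUGGABLE DATABASE oltp USING '/tmp/oltp.xml'
                  NOCOPY TEMPFILE REUSE;
                ALTER PLUGGABLE DATABASE oltp OPEN;
                ```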


                  RAID 5 on modern SAN systems has very little additional write overhead - according to various technical and white papers from storage vendors.


                  It seems to me that the RAID 5 overheads are now asynchronous rather than synchronous, to put it in basic terms. The write is offloaded to the storage server, which responds immediately to the I/O caller. Parity calculations and writes to the actual media are then done from the storage cache during/after that response.

                  • 7. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                    Well that's also part of the advertising. I have a storage controller in my computer (Apple RAID card) which I read somewhere has a special chip to optimize RAID 5. I don't know exactly how this is accomplished, but the battery support seems very important. However RAID 5 is still RAID 5, and fully recovering from a device failure takes much longer than with other RAID levels. But whatever, I don't know about the OP's SAN. I just think it is a bit unusual to use RAID 10 for fra and RAID 5 for data. If I can afford RAID 10 for fra, why would I use RAID 5 for data?


                    Anyway, I think saving on hardware is generally not as compelling anymore as it used to be many years ago, especially for storage and memory, or hardware in general - and certainly if the computer equipment has already been amortized or written off. If you want to save on energy and physical space, that's a different question. If you want to spend triple the money for a 10 % performance gain, or spend lots of money on EMC or Cisco equipment and support, that's also another question.


                    Based on my experience, the whole idea of computer and system consolidation is usually suspicious or questionable. I don't know or remember any time when consolidation as the primary reason actually delivered the anticipated numbers; it often has quite the opposite effect in the longer run or bigger picture. Unfortunately, IT often has its own dynamics, which are not necessarily in line with what a company's business or staff really requires.

                    • 8. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                      There is a significant cost factor when using a high-end SAN when it comes down to $ cost per GB of storage and RAID5 versus RAID10. Hundreds of thousands of dollars, as I understand.


                      And in today's global economic climate, costs are everything.


                      So in some (many?) cases, there is not much of a choice - due to the lower costs of RAID5 combined with white papers from the SAN vendor that basically state that performance does not degrade. How do you counter that when dealing with management? Complex technical articles? 3rd party comments that RAID5 is bad?


                      RAID5 for databases today is a reality. Like it or not. (personally I don't - but personal opinion does not substitute for evidence in the boardroom when deciding on serious expenditure)

                      • 9. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                        Well, I would always question a high-end SAN solution as such for that reason. Unfortunately many decision makers in IT don't do enough research and are too biased by market share, since they don't really have a clue. And many don't really understand the technology and buy expensive licensing for features that are useless, or don't realize the support requirements.


                        I have some experience with EMC, which all in all cost nearly a million dollars after several years, and I can only say that it provided more trouble than it was worth. Unfortunately, once such systems were purchased, departments were forced to use them to make the investment worthwhile, causing conflicts, central failures, more outages and more maintenance in the end.


                        I can imagine that there are good uses for such systems. However, in my opinion and experience, there are plenty of alternatives available today that are much cheaper and provide the same scalability, performance and reliability, at least for most common cases and uses. I certainly don't need to spend a hundred thousand bucks for 20 TB of redundant disk space and 2 Gbit/s performance.

                        • 10. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                          Agree with your sentiments.


                          These decisions are often based on other reasons than pure technical ones.

                          • 11. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                            I have many issues to deal with in this 'consolidation', but budget reduction is happening in state and regional government.  Our SAN storage is for our enterprise infrastructure and not part of my money-saving directive.  We are also migrating to UCS blades for the infrastructure, also not part of my budget reduction contribution.  Oracle licensing is our biggest software cost; this is where my directive lies.  We've always been conservative and done more with less; now we will do with less, but differently, because the storage and hardware are awesome.


                            We've been consolidating databases onto RAC clusters and standalones since we started doing Oracle.  For the last 7 years we've supported ASM, 6 databases and 2 passive standby instances (with Data Guard) on a 2-node cluster totalling 64 GB of memory.  The new UCS blades have 256 GB of memory.  I get that each database must support its background processes.  If I add up the SGA and PGA allocations and the background processes, they take up about 130 GB of memory, but consider also that there is an overhead to RAC.  In all the years we've had Oracle, most of our failures, outages or downtime were because of RAC.  On the plus side, the seamless failover saved us most times (not all times), but required administrative time for troubleshooting.


                            I would love to go to Oracle 12c and use its multitenant architecture, but I have 3rd party applications that don't yet support it.  11.2 might be our last release unless I can reduce costs.  Consolidation is real and much needed, which I believe is why Oracle responded to the market with multitenancy.


                            But back to my first question about how many diskgroups should service a group of databases.  What I'm hearing, and think I agree with, is that one data diskgroup will suffice, because the ASM instance knows where to retrieve the data, and both waste and management will be reduced.


                            I still need to do some ciphering and by no means have a final plan, but thank you all for your insights and contributions.

                            • 12. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                              From what I understand, besides availability, RAC makes sense if you cannot get a single system that is big enough to provide the capacity and power you need. It all depends on your requirements.


                              Regarding your ASM question, the following from the Oracle documentation might be useful:

                              Configuring Storage Oracle ASM Strategic Best Practices

                              • 13. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                                Absolutely, RAC is all about high availability and scalability, and when combined with Data Guard, a maximum availability architecture (MAA).  Total cost of ownership aligned with business requirements, and finally what we can live with, drove our decision to move away from RAC.  We will keep our Data Guard standby databases, but they are passive and only used in a disaster event.


                                Things are cyclic, you know how that goes?  We were standalone, then needed to become MAA, causing us to become more complex; now we've come full circle and are moving back to standalone.


                                I think Oracle is a solid product and has done us well for many years, now I've got to make it cost effective enough to make it a continued option for us. 


                                Thanks for your insights.

                                • 14. Re: ASM BEST PRACTICES FOR ‘DATA’ DISKGROUP(S)

                                  RAID5 for databases today is a reality. Like it or not. (personally I don't - but personal opinion does not substitute for evidence in the boardroom when deciding on serious expenditure)


                                  I understand, I guess, but I wonder... who invents such phrases and why do people accept them? If personal opinion does not matter anymore - and I mean the opinion of experts; whose else would? - then what does, and why?


                                  Anyway, I've done a bit more reading on what's new about RAID 5 performance. And yes, not that it was really surprising, some things have changed. In addition to special ASIC chips, which are designed to perform application-specific tasks, apparently the way some systems overcome the write penalty is by using a huge cache. The system can write the data and continue, while the parity is calculated from the data in the cache asynchronously.


                                  Well, I guess one has to hope that such storage systems never crash or screw up. If the sun shines and everything is nice and dandy it should be fine, but I'm not convinced it is a very good solution for critical data when the SHTF. My suggestion, if RAID 5 needs to be done: use it for data which is mostly read-only or has low access, and put everything that is busy and critical, such as control files, redo, etc., on RAID 10, or use ASM redundancy.
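
                                  For example, moving the online redo logs onto a faster disk group might look like this; the +REDO disk group name and sizes are made up for the sketch, and it assumes such a group was already created on the RAID 10 (or ASM-mirrored) LUNs:

                                  ```sql
                                  -- Add new redo log groups on the fast disk group:
                                  ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 512M;
                                  ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 512M;
                                  ALTER DATABASE ADD LOGFILE GROUP 6 ('+REDO') SIZE 512M;

                                  -- Once the old groups on the slow storage are no longer
                                  -- CURRENT or ACTIVE (check V$LOG), drop them:
                                  ALTER DATABASE DROP LOGFILE GROUP 1;
                                  ```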


                                  Although I'm always suspicious of ASM because it adds another layer of software complexity. Understanding and managing ASM can be a problem when recovering from a failure and time is of the essence, especially if you don't deal with it on a daily basis and forget how things work after a couple of months.
