This discussion is archived
12 Replies Latest reply: Jan 14, 2013 10:24 PM by BillyVerreynne

NFS vs ASM as storage: which is better? Considerations.

870623 Newbie
Some months ago I planned to deploy two RAC installations purely for test purposes, just to play with them. I assumed that the two would differ in their architecture.

first one:
--------------
OEL5.5
11.2.0.3
2 nodes
external storage based on iSCSI + ASM and ASMLib
separate Oracle homes on both nodes

second one:
--------------
OEL4.5
10.2.0.1
2 nodes
NFS from a third server as external storage
common Oracle home stored on shared storage

To tell you the truth, I assumed that the second installation would be much more difficult to deploy than the first one, but I was wrong.
NFS as shared storage is much easier to deploy and maintain; that is my conclusion after physically deploying these two RACs.

Let's not talk about redundancy and performance between these two installations; let's instead consider ease of deployment and ease of maintenance.

In my opinion, using NFS as shared storage is much easier and less effort-demanding than deploying RAC on ASM. It doesn't require an ASM installation, the files are stored on a separate NFS share where we can see them from the operating-system level, and even the installation itself is much simpler.
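For context, the datafile share in the second cluster was mounted with options along these lines. This is a hypothetical /etc/fstab entry: the server name and paths are illustrative, and the option list follows the commonly documented Oracle-on-Linux NFS pattern, so verify it against your platform's documentation:

```shell
# Hypothetical /etc/fstab entry for an NFS-mounted Oracle datafile area.
# hard,nointr : retry I/O forever rather than returning errors to the database
# actimeo=0   : disable attribute caching so all nodes see consistent metadata
nfsserver:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```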
What do you think about that?

How many RAC installations are deployed using ASM + iSCSI/FC compared to NFS as external storage?
Is it fair to consider a RAC installation with NFS as external storage as something less professional than storing datafiles in ASM disk groups?

What do you think are the pros and cons of these two architectures?
  • 1. Re: NFS vs ASM as storage - which is better. considerations.
    damorgan Oracle ACE Director
    My preference in most situations is NFS. It is far simpler, has acceptable performance, excellent stability, and is substantially lower in cost to implement.

    Too many people are steered by sales forces and internal UNIX admins toward using the corporate SAN. Few realize any actual benefit.
  • 2. Re: NFS vs ASM as storage - which is better. considerations.
    Svetoslav Gyurov Explorer
    Hi,

    Personally I prefer ASM on SAN. All of the companies we work with already have existing SAN infrastructure and mid-range to high-end storage systems. Adding new hosts is painless, and the configuration is easy and secure. Using the IP protocol you can lose packets (bad); FC is lossless, and the FC protocol also has lower latency than IP. ASM has many benefits; for example, with ASM you can add/drop disks online at any time, and we have also migrated whole databases online from one storage system to another using ASM.



    Regards,
    Sve
  • 3. Re: NFS vs ASM as storage - which is better. considerations.
    onedbguru Pro
    I prefer ASM on SAN (more often than not using EXTERNAL REDUNDANCY, relying on the SAN/RAID to keep the spindles spinning). I have had way too many NFS servers suffer latency and disconnect issues, even on dedicated NFS networks. I think ASM/RAC on NFS is an oxymoron (high availability?). I worked for the company when we did the first storage-vendor-to-new-storage-vendor migration of a VVLDB (250TB) using ASM. Oracle said it was "theoretically" possible, but no one had ever attempted it at that scale. It took only 27 days.
  • 4. Re: NFS vs ASM as storage - which is better. considerations.
    BillyVerreynne Oracle ACE
    piotrtal wrote:

    lets not talk about redudundancy and performance between this two installations, but lets take under consideration its easy deployment and easy maintanance.
    How can you not? Why does it not make sense to consider performance and redundancy?

    Architecture-wise, NFS has an inherent problem: it uses Ethernet as the I/O fabric layer. Fact: Ethernet is a poor choice for a storage protocol.

    What do you have between the storage server and the database server? 1Gb Ethernet? Shared? Come on!! Pathetic is the word that comes to mind...

    Fibre Channel links are dual. Even 8+ years ago, with old FC technology, that would have been dual 2Gb channels. Today it is dual 4Gb or 8Gb.

    The exact same type of NFS storage architecture can be built using SCST, which runs over InfiniBand. Old InfiniBand is 10Gb; current QDR (Quad Data Rate) InfiniBand is 40Gb. This means you share your storage's drives as raw devices using the very fast SRP (SCSI RDMA Protocol; RDMA = Remote Direct Memory Access), over dual 40Gb/s channels, as HCA cards also typically have 2 ports. (Btw, Ethernet is so poor at storage protocols that an SRP implementation on Ethernet has been announced in order to try to address the issues that IP was never designed for.)

    Setting up SCST is about as "easy" as setting up NFS.

    As for ease of deployment using SANs and FC: I have several clusters using SANs as cluster storage, and I fail to see NFS as easier or less complex in this regard. The SAN GUI tool is used for LUN masking and zoning; multipath is configured on the database server to provide LUN name consistency. It is not complex.
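    To illustrate the multipath point, a minimal /etc/multipath.conf fragment is all it takes to give a LUN the same stable name on every node. The WWID and alias below are made up for illustration:

```shell
# Hypothetical fragment of /etc/multipath.conf on each RAC node.
# The alias makes the LUN appear as /dev/mapper/asm_data01 on all nodes,
# regardless of device discovery order. The WWID is illustrative only.
multipaths {
    multipath {
        wwid  360000000000000000e00000000010001
        alias asm_data01
    }
}
```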

    ASM is a storage manager. Not the storage system itself. And ASM is IMO mandatory for Oracle RAC as it provides a set of critical features that you do not get using a cooked cluster file system as shared storage, or another 3rd party storage manager (like Veritas or whatever).

    Based on the last 7 years of actively using RAC and building a number of production RACs: I would not want to build and use an NFS-based RAC, and I would not recommend one.
  • 5. Re: NFS vs ASM as storage - which is better. considerations.
    870623 Newbie
    Thank you all very much for your answers; they are very valuable to me.
    But as I noticed, only one person acknowledged NFS as cheaper/easier/not so much worse in performance compared to ASM. :)

    This discussion is only to help me understand some facts about NFS/OCFS vs ASM, so don't take it personally; this is not an attack on anyone.
    I just need to understand some facts that got muddled in my mind after I installed some RACs to play with, so please, everyone, straighten out my improper reasoning and don't be angry with me.

    I didn't want to talk about redundancy and performance because:

    - we can always have external redundancy on the physical storage device, which should be easier to maintain and better performing. I could agree with the statement "ASM redundancy is better than external" in only one case: if the company has no storage administrator, only a DBA.

    About NFS vs ASM performance I won't argue, because I don't have data on the performance difference. NFS is probably worse, but the main role here is played by the IP protocol, as one of my predecessors said. But what if a company has no FC infrastructure, only IP (Ethernet)?

    I could only presume that ASM performs better, but the question is: is it so much better that it is worth installing an ASM instance on each node to maintain disk groups, diminishing the capacity of each node at the same time? Each ASM instance uses resources (memory/CPU) on its node, and that could be considered a disadvantage.

    You mentioned many features that ASM provides over NFS/OCFS, and I would agree with only one: maybe performance. Redundancy and striping we can do at the physical level.
    If I have datafiles stored at the OS level instead of in ASM, I would consider that an advantage, because (in my opinion) it is easier to maintain the files when we have them at the OS level.

    Even if we consider an extended-distance RAC installation (one node far from the other), with the preferred-read storage parameter set for each ASM instance, I wouldn't agree. The same can be achieved at the hardware level: EMC storage can also replicate between sites, and I don't believe that this hardware replication is worse than software ASM replication.

    So I would say that I don't see the pros of ASM at this moment (except performance, but let's set that aside for now). Performance is not the only thing we should take into consideration during deployment; there are many more factors. If a company has Unix/Linux and storage specialists, it is unreasonable to do duplication at the ASM level and store datafiles in disk groups; and if they have no FC infrastructure, it is not possible to use its advantages.
  • 6. Re: NFS vs ASM as storage - which is better. considerations.
    damorgan Oracle ACE Director
    I'm not sure what you wrote computes ... it doesn't for me.

    The choice of SAN or ASM is not a choice one needs to make. The choice is NAS, or ASM on the SAN.

    And from my experience everyone is pretty much in your bucket ... they have an existing corporate SAN so they use it. What they often don't realize is that that corporate SAN, almost always EMC, is giving poorer performance than they would get with ZFS or NetApp with a substantial financial savings to the organization. ASM is great ... but shared SANs leave a lot to be desired in most, not all, cases.
  • 7. Re: NFS vs ASM as storage - which is better. considerations.
    damorgan Oracle ACE Director
    I rarely disagree with you Billy but on this one I must ... NFS was a problem a decade ago ... not today.

    If you are coming to OpenWorld, and I hope you can make it this year, attend session CON 5101 and Hans Forbrich and I will be talking about ZFS as well as ODAs.
    http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-appliance-software/features/index.html

    I've been running production loads on everything from EMC VMAX to NFS mounted NetApps. I've yet to see the oft referenced issues with respect to NFS and storage. As I said above ... a decade or more ago there were issues but I've not seen them in my consulting practice since before 9.2.0.4 and that was a very long time ago.
  • 8. Re: NFS vs ASM as storage - which is better. considerations.
    BillyVerreynne Oracle ACE
    Daniel, the problem is IP. It is a very slow and very clunky wire protocol suite for use as a storage protocol.

    RFC 5661 (NFS version 4.1) specifically states that UDP must not be used and that TCP is mandatory. TCP has inherent latency and overheads. UDP is faster (less overhead), but stateless and without error correction.

    The IP MTU for standard Ethernet is 1500 bytes, which is tiny. If your disk block size is 2K, it means 2 Ethernet packets are required per data block, with space wasted in the 2nd packet.
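    The packet arithmetic is easy to sketch. A small bash sketch, assuming 20-byte IP and 20-byte TCP headers and ignoring Ethernet framing and TCP options:

```shell
#!/bin/bash
# Ethernet frames needed to carry one database block over TCP/IP.
frames_per_block() {
  local block=$1 mtu=$2
  local payload=$((mtu - 40))                  # usable bytes per frame (IP+TCP headers removed)
  echo $(( (block + payload - 1) / payload ))  # ceiling division
}

frames_per_block 2048 1500   # 2K block, standard MTU -> 2 frames
frames_per_block 8192 1500   # 8K block, standard MTU -> 6 frames
frames_per_block 8192 9000   # 8K block, jumbo frames -> 1 frame
```

    Jumbo frames (MTU 9000) cut the per-block packet count, which is one reason dedicated storage networks usually enable them.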

    Add to this that Ethernet is typically deployed as a shared medium, so storage packets will vie for bandwidth with the packets from applications, mail, web surfing, etc.

    Yes, this can be addressed in some respects by using dedicated Ethernet infrastructure, or by configuring QoS classes on routers to run storage-protocol packets via the RT (real-time) DiffServ queues. But these do not address the root problem: Ethernet and IP were never designed to serve as an I/O fabric layer.

    FCoE (running Fibre Channel protocol over Ethernet) is another way that the industry is trying to address this. But 10Gb Ethernet itself fails to compare with the Infiniband roadmap, with existing Quad Data Rate (QDR) speeds of 40Gb.

    InfiniBand is designed as an I/O fabric layer, with protocols like RDMA (Remote Direct Memory Access) and SRP (SCSI RDMA Protocol). Port-to-port latency from one InfiniBand QDR port to another QDR port can be below 70ns (as used by NYSE, for example). In Exadata, it is likely between 100 and 300ns.

    I/O scalability is critical with RAC.

    For example, we run SRP with 65Kb MTU frames over an old SDR (Single Data Rate/10Gb) Infiniband infrastructure for a 9 node development RAC, to 2 storage servers that are physically mirrored via ASM.

    I honestly do not see how (Ethernet-based) NFS can compete in this respect. But then we also process huge volumes of data on our production RACs. Tuesday, for example, was the busiest day this week so far: we saw a 42,138 rows/sec insert rate of base data into a RAC for that day (rows that in turn need to be processed using selects, with merges and more inserts into other tables). Thus my views are based on really pushing the I/O fabric layer pretty hard.
  • 9. Re: NFS vs ASM as storage - which is better. considerations.
    BillyVerreynne Oracle ACE
    damorgan wrote:

    And from my experience everyone is pretty much in your bucket ... they have an existing corporate SAN so they use it. What they often don't realize is that that corporate SAN, almost always EMC, is giving poorer performance than they would get with ZFS or NetApp with a substantial financial savings to the organization. ASM is great ... but shared SANs leave a lot to be desired in most, not all, cases.
    Agree with the basic sentiment on SANs. Overpriced. With performance issues.

    What often happens is that FC ports are "oversubscribed": 6 x 4Gb FC connections from servers could go into a single 8Gb FC port on the SAN switch. Which means that despite your 24Gb combined I/O pipe from your servers into the host switch, it is effectively scaled down to a single 8Gb pipe into the SAN.
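    The oversubscription arithmetic from that scenario, as a quick sanity check (the numbers come from the example above, not from any particular SAN):

```shell
#!/bin/bash
# 6 servers x 4Gb FC each, funnelled into one 8Gb switch port.
ports=6
gb_per_port=4
uplink_gb=8

combined=$((ports * gb_per_port))   # 24Gb offered by the servers
ratio=$((combined / uplink_gb))     # 3:1 oversubscription
echo "${combined}Gb offered over a ${uplink_gb}Gb uplink: ${ratio}:1 oversubscription"
```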

    Part of the problem is FC technology, IMO. There are better SAN-type solutions based on InfiniBand instead. But vendors with hidden agendas are creating a lot of FUD around InfiniBand and its use as an I/O fabric layer.

    However, one only has to look at Exadata (a fairly cheap solution to put together in terms of h/w infrastructure) to see that InfiniBand-based storage systems are very capable, and can be significantly faster and more scalable than an FC-based SAN (or IP-based NAS).
  • 10. Re: NFS vs ASM as storage - which is better. considerations.
    BillyVerreynne Oracle ACE
    piotrtal wrote:

    i didn't want to talk about redundancy and performance because
    - we can always have external redundancy on physical storage device which should be easier to maintain and performance better.
    Not true. Performance and scalability are directly related to the architecture you use. You cannot fix I/O performance by, for example, getting faster disks, as the problem could be the latency in getting a data block from the disk shipped to the database server's memory.

    There are a number of moving parts in the I/O fabric layer, and you need all of them to be scalable. It is of little use if some of the moving parts are neither performant nor scalable.
    about NFS vs ASM performance i won't discuss, because i don't possess data about it's difference with performance. nfs is probably worse but main role in this plays IP protocol like one of my precedesor said. but what if company doesn't poses FC infrastructure but only IP (x-Ethernet)?
    It is not NFS vs ASM. It is NFS (a storage protocol) versus the FC storage protocol, or FCoE, or SRP. It is Ethernet vs Fibre Channel or InfiniBand. ASM DOES NOT HAVE A PERFORMANCE ROLE. ASM does not write to disk on behalf of the database. ASM is a management layer.
    i could only presume that ASM is better but question is: "it is so much performance better that it is worth to install ASM instance on each node to maintain disgroups, and diminish capacity of each node at the same time?" each of ASM instance utilizes ressources (memory/cpu) on node and it could be considered as disadvantage.
    ASM provides superior management. A simple example: over the past years (and as recently as 2 weeks ago) I have migrated database storage from one storage system (SAN) to another without a single second of database downtime.

    Not possible using NFS/NAS as far as I know.

    And the server overhead of using an ASM instance is small, and does not impact the database instance on the servers of today.
    even if we consider extensive remote RAC installation (one node is far /distance/ from the other) and we have read-preffered storage parameter set for each ASM instance i wouldn't agree. the same we can obtain on harware level. EMC storages could also replicate between sites, and i don't believe that this hardware replication is worse than software ASM replication.
    Sorry - I simply do not believe in using RAC where RAC nodes are long distances from one another, or from the clustered storage. That is IMO a nonsensical approach to RAC.

    If this type of architecture is needed, look at Hadoop and related technologies. Not Oracle RAC in its current (10g/11g) form.
    if they don't have FC infrastrucre ther is not possible to use it's advantages.
    ASM does not need FC infrastructure. ASM manages raw devices for database storage. ASM does not care whether that raw device is a SAN LUN via FC, a local raw disk, a raw device via an SRP LUN, a raw device via an SCST target, etc.

    There is no ASM vs NFS issue.
  • 11. Re: NFS vs ASM as storage - which is better. considerations.
    userTK421 Newbie
    Billy,

    Based on your statements above, would dNFS make a difference to your stance on NFS vs ASM? Considering a ZFS appliance and 10Gb Ethernet, is ASM still a better alternative than dNFS?
  • 12. Re: NFS vs ASM as storage - which is better. considerations.
    BillyVerreynne Oracle ACE
    Please ask your questions in your own thread (refer to another if needed). That way you not only "own" the discussion thread, but also have a bigger audience. Not many members will actually look at an old thread with a single recent update; many members will look at a brand-new thread.

    Then there are also the issues of hijacking someone else's thread to ask your question, and of resurrecting old, done-and-dusted threads, both of which are frowned upon.

    To answer your question: it is not, and never has been, ASM vs NFS or anything else. ASM is a management layer. If you want to compare something like NFS or dNFS, you are comparing storage technologies and I/O fabric layers. ASM works on top of this.

    As for dNFS, my issue is and has been the storage protocol. The SCSI command set (supported by the physical disk) is sent to that disk via a protocol layer, so a fast protocol with minimal latency is needed.

    What is the fast protocol with minimal latency in IP? UDP. Not TCP.

    What does NFS and dNFS use? TCP. Not UDP. (for understandable reasons)

    What is the typical speed of the Ethernet layer that runs NFS or dNFS? 1Gb, maybe 10Gb. What is the speed of old and new Fibre Channel technology? Dual 2Gb, and now dual 8Gb. What is the speed of old and new InfiniBand technology? Dual 10Gb, and now dual 40Gb.

    What is the typical nature of the Ethernet layer? Shared. Not dedicated. And for Fibre Channel and Infiniband? Dedicated. Not shared.

    So running storage via TCP over shared 1Gb Ethernet... that is what makes absolutely no sense to me. If Ethernet is your choice of I/O fabric, then it needs to be 10Gb (preferably dual 10Gb via a bonded interface) and dedicated (storage protocol only; no mail, no web, nothing else running over it). IMO.
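    For completeness on the dNFS question: enabling Direct NFS on 11g is a relink plus an oranfstab entry. The paths and server names below are illustrative; verify the exact procedure for your release:

```shell
# Relink the Oracle binary with the Direct NFS ODM library (11.2-style).
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Hypothetical $ORACLE_HOME/dbs/oranfstab entry describing the NFS server:
#   server: nfsserver
#   path:   192.168.1.10
#   export: /export/oradata  mount: /u02/oradata
```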
