I don't have any experience with OmniOS w/ZFS, but I will explain the difference between iSCSI and NFS from the Oracle VM Server perspective.
- NFS: is a commonly used file-based storage system that is very suitable for the installation of Oracle VM storage repositories.
Since most of these resources are rarely written to but are read frequently, NFS is ideal for storing these types of resources.
Since mounting an NFS share can be done on any server in the network segment to which NFS is exposed, it is possible not only to share NFS storage between servers of the same pool but also across different server pools.
=> In terms of performance, NFS is slower for virtual disk I/O compared to a logical volume or a raw disk. This is due mostly to its file-based nature.
For better disk performance you should consider using block-based storage, which is supported in Oracle VM in the form of iSCSI or Fibre Channel SANs.
- iSCSI: With Internet SCSI, or iSCSI, you can connect storage entities to client machines, making the disks behave as if they are locally attached disks.
=> Performance-wise an iSCSI SAN is better than file-based storage like NFS and it is often comparable to direct local disk access. Because iSCSI storage is attached from a remote server it is perfectly suited for a clustered server pool configuration where high availability of storage and the possibility to live migrate virtual machines are important factors.
So, to create a backup/snapshot of your VM, it is recommended to use the OCFS2 file system (with iSCSI) and take a snapshot with the OCFS2 reflink feature, then move the snapshot to another repository based on NFS.
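For reference, the reflink snapshot plus the copy to an NFS repository boils down to something like this. The paths and image names below are made-up examples, not your actual repository layout; the `reflink` tool ships with ocfs2-tools:

```shell
# Hypothetical paths -- adjust to your actual Oracle VM repository layout.
SRC=/OVS/Repositories/myrepo/VirtualDisks/myvm.img
SNAP=/OVS/Repositories/myrepo/VirtualDisks/myvm-snap.img

# reflink creates an instant copy-on-write clone on OCFS2 (no full data copy)
reflink "$SRC" "$SNAP"

# then move the snapshot off to the NFS-backed backup repository
mv "$SNAP" /mnt/nfs-backup/
```

The reflink itself is near-instant because only metadata is written; the expensive part is the `mv` to NFS, which you can schedule off-peak.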
I hope this can help you
Thank you for the detailed reply. The OmniOS = Solaris 10+; but I'll admit, using the same distro across VM to FS, i.e. OCFS2, would eliminate some administration tasks.
No, it is not Solaris.
It is a fork of illumos, and is only vaguely related to OpenSolaris, OpenIndiana, and Solaris.
If you've chosen Oracle's VM technologies, it seems prudent to me to also use Oracle's Solaris instead of trying to shoehorn some other OS into the mix, in the hopes that it will work.
You're actually asking two different things here. The first question is about the FS to use for your server and storage repositories. Since you need a shared FS, you can go with NFS or OCFS2. Choosing OCFS2 also means that you need some kind of shared block storage like iSCSI or FC, while NFS might be easier to set up initially.
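To make that trade-off concrete, here is a rough sketch of what each option involves on the server side. Host names, device names and mount points are invented for illustration:

```shell
# Option A: NFS -- just mount the same export on every server in the pool
mount -t nfs filer:/export/ovs /OVS/Repositories/myrepo

# Option B: OCFS2 -- requires shared block storage (an iSCSI or FC LUN) and
# the o2cb cluster stack configured on every node beforehand
mkfs.ocfs2 -L ovsrepo /dev/mapper/mpatha   # run once, on one node only
mount -t ocfs2 /dev/mapper/mpatha /OVS/Repositories/myrepo
```

Option A is two lines of setup per host; option B buys you reflink snapshots and block-level performance at the cost of running a cluster stack.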
The second question is about the backend system. I love Solaris-based systems, be it Solaris or OmniOS. You should go with the backend that you already know best, so that you can focus on the other tasks when setting up the VM cluster. I personally prefer iSCSI, since it has always delivered excellent performance for me. You should also note that people have complained about NFS in illumos-based distros - at least that is what I have noticed on the ZFS mailing lists.
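If you do go with a Solaris-based backend, exporting a ZFS volume over iSCSI is done through COMSTAR; roughly like this (pool and volume names are examples):

```shell
# create a 100G ZFS volume to serve as the LUN backing store
zfs create -V 100G tank/vmstore

# register it as a SCSI logical unit; note the LU GUID this prints
stmfadm create-lu /dev/zvol/rdsk/tank/vmstore

# make the LU visible to initiators (use the GUID from the previous step)
stmfadm add-view <lu-guid>

# create an iSCSI target (an IQN is generated) and enable the target service
itadm create-target
svcadm enable -r svc:/network/iscsi/target:default
```

From there the Oracle VM servers just discover the target like any other iSCSI SAN.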
I actually run two server pools and three different storage pools on my VM cluster:
- one server/VM pool on FC LUNs, running off a single RAID box
- one server / two VM pools on iSCSI LUNs, where one VM pool runs off an ASM cluster iSCSI target, while the other runs off a DRBD/Pacemaker iSCSI target
What I have learned is that I/O (IOPS) is far more relevant than raw throughput, so whatever type of storage you'll be setting up, you should keep that in mind.
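A quick way to see that difference yourself is to benchmark both patterns with fio on a candidate repository; the file path and sizes below are examples:

```shell
# small random I/O -- what mixed VM workloads mostly generate (look at IOPS)
fio --name=randio --filename=/mnt/repo/fio.test --size=1G --runtime=60 \
    --time_based --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio --direct=1

# large sequential reads -- where "raw throughput" numbers usually come from
fio --name=seqread --filename=/mnt/repo/fio.test --size=1G --runtime=60 \
    --time_based --rw=read --bs=1M --iodepth=4 --ioengine=libaio --direct=1
```

Storage that posts impressive MB/s on the second job can still fall over on the first, and the first is what your VMs will actually feel.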
Hmm… so you're on a constrained budget. But if you're planning to run VMs for customers, you surely have to implement some degree of HA, don't you?
The highest costs will come from the redundant storage you will need, so choose wisely how you set up that storage. You can run either NFS or iSCSI with virtually no additional cost and get it HA-ready, if you have the skills to do so. What capacity are you planning to deploy for your VM storage repo?
HA is a concern and required before I'd offer the solution. We have redundant firewalls, switches and ISP uplinks.
Our storage SAN hardware is a basic SuperMicro box with 2x PSUs, (2) SAS controllers and multiple NICs. We planned on using OmniOS or Nexenta for ZFS and creating multiple zpools/vdevs. While I can't afford another SAN (yet), we're trying to build in enough hardware/software redundancy.
For iSCSI multipathing, do you recommend we VLAN the switch or just IP-address each vNIC?
I can't see how you are getting your storage redundant with only one box. Please keep in mind that you will need some synchronization between the two storage boxes once you deploy the 2nd SAN. I think I saw a primer on ZFS/HA the other day on the illumos ZFS mailing list, written by Saso Kiselkov. He developed a stmf-ha script that may be of interest to you. You can find it here: https://github.com/skiselkov/stmf-ha
Regarding the network setup: I am running my iSCSI multipathing on the same subnet and I can't see any advantage in creating VLANs or different subnets for that. Since iSCSI multipathing with multipathd resolves the paths via the SCSI IDs of the LUNs it scans, there's no need to put the targets into different subnets.
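For what it's worth, a single-subnet multipath setup is just two logins to the same target via different portal IPs; multipathd then collapses the paths by WWID. The IQN and addresses below are examples:

```shell
# discover the target and log in through both portals
iscsiadm -m discovery -t sendtargets -p 192.168.10.11
iscsiadm -m node -T iqn.2010-09.org.example:vmstore -p 192.168.10.11 --login
iscsiadm -m node -T iqn.2010-09.org.example:vmstore -p 192.168.10.12 --login

# both paths should show up under a single dm-multipath device
multipath -ll
```

No VLANs or separate subnets needed - the path separation happens at the SCSI layer, not the IP layer.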