I am currently in the middle of an evaluation. We are planning to run an OpenVZ cluster (full hypervisors have too much latency/overhead for the required scenario) with shared storage for all participating nodes.
The OpenVZ containers will be stored on the shared filesystem. The use case is mostly reads (executing one binary in each container, which may also load bulk data into memory from the shared FS), with practically no writes. Since every process will run in its own OpenVZ container directory, there will be no concurrent reads/writes on the same files (i.e. locking should not be much of an issue). There might be more than 200 client nodes, each with up to 20 OpenVZ containers (though since these use the node's FS, this should not matter much). We want to use central, cheap storage (no distributed storage).
Since OpenVZ is RHEL based, I am considering Oracle Linux (I assume the OpenVZ RHEL 6.x based kernel will run without issues on Oracle Linux). As for the shared filesystem, I could use NFS, or OCFS2 on top of iSCSI (Linux software target) over Ethernet. I have read a few performance benchmarks in which OCFS2 comes out on top thanks to a more scalable design. However, I am not sure those benchmarks are all that relevant to my use case, since they model more "normal" filesystem usage. NFS is a lot easier to set up and maintain, but if OCFS2 performs and scales significantly better for my use case, I would give it a try.
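For what it's worth, the NFS side of this comparison is genuinely small: for a read-mostly container store it is little more than an export plus a mount. A minimal sketch (server name, paths, and subnet are all made-up examples, not anything from your setup):

```shell
# --- On the storage server ---
# /etc/exports might contain a single line such as:
#   /srv/vz  10.0.0.0/16(rw,async,no_subtree_check)
exportfs -ra                       # (re)publish the export table

# --- On each OpenVZ node ---
mkdir -p /vz/private               # where the container private areas live
mount -t nfs -o noatime,nfsvers=3 storage:/srv/vz /vz/private
```

With a workload that is almost all reads, `noatime` avoids turning every read into a metadata write on the server.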
I would appreciate any input, since I assume there are quite a few people out there who are more knowledgeable about such use cases, and about OCFS2 in particular.
We are using OCFS2 on a 4-node RHEL cluster to store hundreds of thousands of files, mostly small ones (containing config data of the h/w devices we poll). Around 600 GB used in total.
A file listing (the ls command) is a bad idea in such a directory, but that is true of almost any filesystem with that many files. ;-)
No performance issues have been reported by development. BTW, we are running it over IPoIB (IP over DDR/10 Gb InfiniBand; we still need to wire up the QDR/40 Gb IB switches and move the OCFS2 heartbeat/interconnect to the faster InfiniBand).
And I disagree with the statement that it is more complex than NFS; it is very simple to configure and use.
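To give a feel for "simple": a two-node OCFS2 setup is roughly the following (node names, IPs, cluster name, and block device are all assumptions for the sketch, not our actual configuration):

```shell
# /etc/ocfs2/cluster.conf, identical on every node:
#   cluster:
#       node_count = 2
#       name = vzcluster
#   node:
#       ip_port = 7777
#       ip_address = 10.0.0.1
#       number = 0
#       name = node1
#       cluster = vzcluster
#   node:
#       ip_port = 7777
#       ip_address = 10.0.0.2
#       number = 1
#       name = node2
#       cluster = vzcluster

service o2cb online vzcluster           # bring the cluster stack up
mkfs.ocfs2 -L vzstore -N 4 /dev/sdb1    # format once; -N reserves node slots
mount -t ocfs2 /dev/sdb1 /vz/private    # then mount on each node
```

The `-N` slot count caps how many nodes can mount the volume simultaneously, so with 200+ nodes planned it would need to be sized (and the heartbeat traffic considered) accordingly.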
NFS is not a cluster filesystem and was designed for a very different purpose. A clustered filesystem arbitrates and controls each node's direct access to a shared storage device, preventing nodes from corrupting the filesystem with simultaneous writes. Under NFS, the filesystem is not local: every access goes through the NFS server, which mediates concurrent access on behalf of all clients. Comparing NFS with OCFS2 therefore, to my understanding, does not make much sense.