We are using OCFS on a 4-node RHEL cluster to store hundreds of thousands of files, mostly small ones containing config data for the hardware devices we poll. Around 600 GB used in total.
A file listing (the ls command) is a bad idea in such a directory, but that is true of almost any file system holding that many files. ;-)
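For what it's worth, most of the pain with ls in huge directories is the sorting (and the per-entry stat calls for columns/colors); an unsorted listing or a streamed find is much kinder. A quick sketch, using a made-up demo directory in place of our real config tree:

```shell
# Demo directory standing in for the real config tree (placeholder path)
d=/tmp/cfg-demo
mkdir -p "$d" && touch "$d"/dev{1..5}.cfg

# -f disables sorting (and implies -a), so ls can stream entries
# instead of buffering and sorting the whole directory first
ls -1f "$d" | head

# find streams entries too, and lets you count/filter without a full listing
find "$d" -maxdepth 1 -type f -name '*.cfg' | wc -l
```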
No performance issues reported by development. BTW, we are running it over IPoIB (IP over DDR/10Gb InfiniBand; we still need to wire up the QDR/40Gb IB switches and move the OCFS heartbeat/interconnect to the faster InfiniBand).
And I disagree with the statement that it is more complex than NFS. It is very simple to configure and use.
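To back that up: the whole cluster definition lives in one small file, /etc/ocfs2/cluster.conf, managed by the o2cb service. A minimal sketch for a 4-node cluster; the cluster name, node names, and IP addresses below are made-up placeholders, and only two of the four node stanzas are shown:

```
cluster:
	node_count = 4
	name = prodcluster

node:
	ip_port = 7777
	ip_address = 192.168.10.1
	number = 0
	name = rac1
	cluster = prodcluster

node:
	ip_port = 7777
	ip_address = 192.168.10.2
	number = 1
	name = rac2
	cluster = prodcluster
```

The remaining nodes just repeat the node stanza with their own number, name, and address. After that it is basically `service o2cb configure`, a `mkfs.ocfs2`, and a normal `mount -t ocfs2` on each node.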
NFS is not a filesystem for a cluster and was designed for a very different purpose. A clustered file system arbitrates each node's access to the shared storage, typically through a distributed lock manager, so that several nodes can write to the same devices without corrupting data. Under NFS the filesystem is not local to the clients, and there is no such arbitration: nothing coordinates two NFS clients writing to the same file, and there is no cluster-wide cache coherency to make concurrent writes safe. Comparing NFS with OCFS, to my understanding, does not make much sense.
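You can see the arbitration at work with ordinary advisory locks: on OCFS2 a flock() taken on one node is honored cluster-wide through the DLM, so a second writer on any node blocks until the first releases the lock. A sketch, assuming a cluster-aware flock-capable kernel; the file path here is a local stand-in for a file on the shared mount:

```shell
# Placeholder path; on the real cluster this would live on the OCFS2 mount
f=/tmp/device42.cfg
: > "$f"

# Hold an exclusive lock while appending a record; a concurrent flock on
# the same file (from any node, on a cluster filesystem) waits its turn,
# so writers never interleave mid-record
flock "$f" -c "echo polled-at-\$(date +%s) >> $f"
cat "$f"
```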