The local filesystems on each DB node come from local storage that is only connected to that DB node (it is literally internal to the machine). The only way to "share" that space is by some cross-mounted NFS scheme. I strongly recommend against that approach since it is riddled with HA issues and is likely to cause system hangs if implemented incorrectly. Also, whenever systems are rebooted, the mounts will be stale (at least for a while) and if you're depending on those files, you'll certainly see issues.
If you need a cluster filesystem, I'd suggest evaluating DBFS on Exadata (MOS 1054431.1) or some external NFS server/appliance like the ZFSSA.
But DBFS has some restrictions (no executable files, no Unix pipe files, ...).
We have considered an external NFS server, but it's a pity not to use the unused space on the DB nodes.
DBFS doesn't have those restrictions inherently; they come from the mount options you use when mounting it on Linux via FUSE. If you mount it without the direct_io option, you can execute files from DBFS just fine. See the notes in MOS 1054431.1 related to multiple filesystems, as I think I've included some discussion about that there.
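As a rough sketch of what that looks like (the database user, connect string, and mount point below are placeholders, and the password is read from stdin as the dbfs_client tool expects):

```shell
# Mount DBFS via the FUSE-based dbfs_client. Omitting direct_io
# allows executing files stored on DBFS (see MOS 1054431.1).
# "dbfs_user", "ORCL", and "/mnt/dbfs" are placeholder values.
nohup dbfs_client dbfs_user@ORCL -o allow_other /mnt/dbfs < passwd.txt &

# The same mount with direct_io would disallow executing files:
# nohup dbfs_client dbfs_user@ORCL -o allow_other,direct_io /mnt/dbfs < passwd.txt &
```

direct_io bypasses the OS page cache, which is usually what you want for database-style I/O, but it is exactly what blocks execution of binaries from the mount.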
But DBFS uses Storage Server space, not DB server space.
I thought you were looking for a cluster filesystem, so I'm providing you with an option for that.
The only way to (safely, in my opinion) use the local space on DB nodes is for local storage on that DB node. Are you really worried about 300GB? If space is that tight, there are likely other things to consider :)
:D Thanks, we'll keep our NFS solution.