Before tuning NFS you need to find out where the bottlenecks are. Adjusting NFS parameters blindly is most likely not going to have the effect you expect. 99 GB in 8 hours is about 200 MB/minute. That's not exciting, but without knowing what type of network you have, what else is using it, and how busy the server is, it's not possible to say whether this is bad performance.
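For reference, the arithmetic behind that estimate (taking 1 GB = 1024 MB):

```shell
# Throughput of 99 GB transferred in 8 hours, in MB/min and MB/s.
awk 'BEGIN { gb=99; hours=8;
  printf "%.0f MB/min, %.1f MB/s\n",
         gb*1024/(hours*60), gb*1024/(hours*3600) }'
# prints: 211 MB/min, 3.5 MB/s
```

About 3.5 MB/s is well below what even a 100 Mbit link can sustain, which is why it is worth isolating where the time goes before changing parameters.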
Perhaps you can try backing up to a local directory and then copying the file, to see if that improves performance or shows where the bottleneck is. Maybe you can back up to a USB drive. What I wonder, though, is why you have 99 GB of data; a standard new system installation is about 3 GB.
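A minimal sketch of that split measurement. The paths are hypothetical stand-ins: point SRC at the data to back up and NFS at the NFS mount. Here they default to throwaway temp directories so the commands can be tried safely first.

```shell
# Stage 1: tar to local disk. Stage 2: copy the archive to NFS.
# Comparing the two times shows whether tar or the NFS transfer
# is the slow part.
SRC=${SRC:-$(mktemp -d)}; echo "sample data" > "$SRC/file1"
LOCAL=${LOCAL:-$(mktemp -d)}
NFS=${NFS:-$(mktemp -d)}

start=$(date +%s)
tar -cf "$LOCAL/backup.tar" -C "$SRC" .
echo "tar to local disk: $(( $(date +%s) - start ))s"

start=$(date +%s)
cp "$LOCAL/backup.tar" "$NFS/"
echo "copy to NFS mount: $(( $(date +%s) - start ))s"
```

If the tar stage is fast and the copy stage is slow, the problem is in the network or NFS configuration rather than in tar itself.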
Thank you Dude,
I don't think our network or NFS appliance is the bottleneck: with a backup agent the throughput is fine, and the same server backs up in an hour. That is why I want to look closer at tar and the NFS options. There is an Oracle home and some export files that I am including in the tar, which accounts for the size.
Was your backup agent backing up the same amount of data, or was it perhaps doing an incremental backup, or copying disk blocks based on a volume snapshot? It can make a huge difference to throughput whether you copy many small files or a few large files. Options tuned for large files may add overhead for small files, and vice versa. For instance, is your network using jumbo frames? That's why I suggested running tar to a local disk and then copying the tar archive, to see if there is a huge difference.
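One quick check for the jumbo-frames question, using the Linux iproute2 `ip` tool (interface names vary per system):

```shell
# List each interface with its MTU. Standard Ethernet is mtu 1500;
# jumbo frames typically show mtu 9000, and they only help if every
# hop (NIC, switch, NFS server) is configured for them.
ip -o link show | awk '{print $2, $4, $5}'
```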
I did the tar to local disk and it only took 45 minutes for the same data. Looking further into the options for the NFS mount, I was able to determine that the noac option, which is required for Oracle, seemed to be the one causing the slow performance.
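That fits: `noac` disables client-side attribute caching, so the client revalidates attributes with the server on every operation. That is what Oracle needs for coherent access to shared datafiles, but it is very slow for streaming one big tar archive. A hedged sketch of the two kinds of mount (server name and export paths are hypothetical):

```shell
# Oracle datafile mount: noac keeps attributes coherent across clients,
# as Oracle requires.
mount -t nfs -o rw,hard,noac nfssrv:/export/oracle /mnt/oracle

# Backup mount: no noac, so the client can cache attributes and
# stream the tar archive at full speed.
mount -t nfs -o rw,hard nfssrv:/export/backup /mnt/backup
```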
I have created a second NFS mount to use with my tar backup.
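For anyone following along, a second mount like that would usually be persisted in `/etc/fstab`; the entries below are a sketch with hypothetical server and path names, not the poster's actual configuration:

```shell
# /etc/fstab excerpt (hypothetical names)
# Oracle datafiles: keep noac as Oracle requires.
nfssrv:/export/oracle  /mnt/oracle  nfs  rw,hard,noac  0 0
# Backup target for tar: default attribute caching restored.
nfssrv:/export/backup  /mnt/backup  nfs  rw,hard       0 0
```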