It's not just the total size of the catalog but also the way the individual host catalog files are manipulated: during a catalog update, OSB holds two copies of the host's catalog file.
What sort of size are you looking at? I have an admin server with 220 clients and the largest backup client (about 5TB) has a 12GB index file. In total my admin server is using 120GB disk space and 65GB of that is catalog data.
Because the catalog operation copies individual index files, it is important to make sure you have the fastest possible disk configuration.
So for the largest client you mention, once its backup has completed and the catalog is being updated, OSB needs 24GB of space to update that host's own catalog file?
Of the 5TB of data, do you have an idea roughly how many files that might include?
Going back to my original sizing question: if I look at the Veritas NetBackup Backup Planning and Performance Tuning Guide, for example, there are two methods described there (pages 28-31) that help size the amount of space needed for the NetBackup catalog. Both methods use as their basis a catalog size of 3180 bytes for each file retained. Do you know of a similar figure for OSB?
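For illustration, a back-of-the-envelope estimate along the lines of the NetBackup method might look like the sketch below. The 50-million-file count is a made-up example value, and whether the 3180-byte per-file figure applies to OSB at all is exactly the open question here.

```shell
#!/bin/bash
# Rough NetBackup-style catalog estimate: ~3180 bytes per file retained.
# FILES_RETAINED is a hypothetical example count; substitute your own total
# across all retained backups.
FILES_RETAINED=50000000
BYTES_PER_FILE=3180
total_bytes=$((FILES_RETAINED * BYTES_PER_FILE))
echo "Estimated catalog size: $((total_bytes / 1024 / 1024 / 1024)) GB"
# prints "Estimated catalog size: 148 GB"
```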
To your point about hosting the catalog on fast disks, are there any recommendations as to particular IO characteristics for them? For example, would hosting the catalog on an NFS share be a good idea (if it was provided on a 10 GbE connection, for example)? Or would internal disks or even LUNs from FC-SAN attached arrays be better?
I've never tried having /usr/local/oracle/backup on an NFS mount. Even with 10GbE the NFS overhead and locking could cause issues. I've used both local SCSI drives (RAID) and FC attached volumes.
It will always work, but when you have a significant catalog, such as this one, copying a 12GB file is going to take a little time, and the faster your disk access, the faster it will finish. Yes, it creates a copy, so the update will use 24GB of space while it runs.
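Given that behaviour, it can be worth checking free space before a large catalog update. Here's a minimal sketch that verifies the filesystem has at least twice the index file's size available; the index file path is passed in by the admin, as nothing here is a documented OSB layout.

```shell
#!/bin/bash
# Sketch: warn if the catalog filesystem does not have at least 2x the size
# of a host's index file free, since the update keeps two copies side by side.
# Usage: ./check_space.sh /path/to/host/index/file   (path is illustrative)
INDEX_FILE="$1"
need_kb=$(( $(du -k "$INDEX_FILE" | cut -f1) * 2 ))
avail_kb=$(df -k "$(dirname "$INDEX_FILE")" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "WARNING: need ${need_kb} KB free for catalog update, only ${avail_kb} KB available"
fi
```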
I generally spec 1TB of space for /usr/local/oracle/backup and find that is plenty for most situations.
Remember, though, that your media servers (if you have some as well as the admin server) will also hold log files from the backup jobs, so there is a disk space consideration there too.
To your comment about the backup job log files on the admin/media servers, what kind of space do you normally reserve for this, and what factors affect its size and growth rates?
Can they be automatically managed by OSB, for example by setting some parameters for log retention after successful backup completion, or is their management (and removal) a purely manual task?
Are they only needed to debug a failed backup, for example, or are there other reasons to keep them for any length of time?
The length of time the job transcripts are kept is controlled by the policy, and you can set that to as many days as you want. Personally, I keep the job transcripts for the entire duration of the retention policy (3 months in my case) so I can always review the full backup details; without the transcript you can only really see whether a job succeeded or failed.
So with debug turned off, the job transcripts aren't that large: my largest environment, as detailed above, has a 1.2GB transcript folder (/usr/etc/ob/xcr).
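If you want to keep an eye on that yourself, a quick check of the transcript directory size (using the /usr/etc/ob/xcr path mentioned above) is just:

```shell
# Report total space used by job transcripts on an admin or media server.
du -sh /usr/etc/ob/xcr
```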
Hope that helps. As I said, the media servers will also contain logs in /usr/etc/ob/xcr, just something to keep in mind.