5 Replies Latest reply on May 19, 2011 12:36 PM by rdoogan-Oracle

    OSB Catalog Sizing & Other Catalog Questions

    trickster - oracle
I'm new to OSB, so I was trying to find sizing information on how much disk capacity I need to allow for the OSB catalog on my Admin/Media server.

      Obviously typical metrics like:

      ■ The number of files being backed up
      ■ The frequency and the retention period of the backups

have a direct influence on the size and growth of the catalog, but are there any formulas that can be used to estimate the required capacity based on these (or other) metrics?

Does the catalog ever get compressed (or even deduplicated) on disk during its lifetime?

      Also, are catalog backups written to tape in the same way that normal OSB backups are? And what happens if the size of the catalog exceeds a single piece of media?
        • 1. Re: OSB Catalog Sizing & Other Catalog Questions
It's not just the total size of the catalog but also the way the individual hosts' catalog files are manipulated. During a catalog update there are two copies of the host's catalog file on disk.

          What sort of size are you looking at? I have an admin server with 220 clients and the largest backup client (about 5TB) has a 12GB index file. In total my admin server is using 120GB disk space and 65GB of that is catalog data.

Because the catalog operation copies individual index files, it is important to make sure you have the fastest possible configuration for the disks.

          Hope that helps.


          • 2. Re: OSB Catalog Sizing & Other Catalog Questions
            trickster - oracle
            Thanks Rich,

so for the largest client you mention, once its backup has completed and the catalog is being updated, it needs 24GB of space for OSB to update that host's catalog file?

            Of the 5TB of data, do you have an idea roughly how many files that might include?

Going back to my original sizing question: if I look at the Veritas NetBackup Backup Planning and Performance Tuning Guide, for example, there are two methods described there that assist in sizing the amount of space needed for the NetBackup catalog (pages 28 - 31). Both methods use as their basis a catalog size of 3,180 bytes for each file retained. Do you know of a similar figure for OSB?
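For what it's worth, a formula of that kind is simple arithmetic: total retained file entries times a per-file constant. Below is a minimal sketch using the NetBackup 3,180-byte figure purely as a placeholder constant — it is not an OSB-documented value, and the 2% daily change rate is an assumption to measure in your own environment:

```python
# Rough catalog-size estimator. BYTES_PER_FILE borrows NetBackup's
# ~3180-byte per-file figure as a stand-in; OSB's real per-file
# overhead should be measured, not assumed.

BYTES_PER_FILE = 3180  # assumed per-file catalog entry size


def catalog_size_gb(num_files, fulls_retained, incrementals_retained,
                    change_rate=0.02):
    """Estimate catalog size in GB: every retained full indexes all
    files; each retained incremental indexes only the changed fraction."""
    full_entries = num_files * fulls_retained
    incr_entries = num_files * change_rate * incrementals_retained
    return (full_entries + incr_entries) * BYTES_PER_FILE / 1024**3


# e.g. 10M files, 12 weekly fulls and 90 daily incrementals retained
print(f"{catalog_size_gb(10_000_000, 12, 90):.1f} GB")
```

Swapping in a measured per-file figure for your own clients would make the estimate far more trustworthy than any borrowed constant.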

To your point about hosting the catalog on fast disks, are there any recommendations on particular I/O characteristics for them? For example, would hosting the catalog on an NFS share be a good idea (if it were served over a 10GbE connection, say)? Or would internal disks, or even LUNs from FC-SAN attached arrays, be better?
            • 3. Re: OSB Catalog Sizing & Other Catalog Questions
              The 5TB backup has over 81 million files.

              I've never tried having /usr/local/oracle/backup on an NFS mount. Even with 10GbE the NFS overhead and locking could cause issues. I've used both local SCSI drives (RAID) and FC attached volumes.

It will always work, but when you have a significant catalog such as this one, copying a 12GB file is going to take some time. The faster your disk access, the sooner that finishes. Yes, it creates a copy, so the update will use 24GB of space while it runs.
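A quick back-of-the-envelope check on those numbers (all figures are from this thread; the 2x factor simply reflects the copy made during the update):

```python
# Worked numbers from the thread: a ~5TB client with 81 million files
# and a 12GB index file. Two copies of the index exist during a
# catalog update, so peak space is double the index size.

index_bytes = 12 * 1024**3      # 12 GB index file
num_files = 81_000_000

bytes_per_file = index_bytes / num_files
peak_update_gb = 2 * index_bytes / 1024**3

print(f"~{bytes_per_file:.0f} bytes of index per file")
print(f"{peak_update_gb:.0f} GB peak during catalog update")
```

That works out to roughly 159 bytes of index per file for this particular client — considerably leaner than NetBackup's published 3,180-byte figure, which is a reminder that per-file overhead varies by product and workload.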

              I generally spec 1TB space for /usr/local/oracle/backup and know that is plenty for most situations.

              Remember though that your media servers, if you have some as well as the admin server, will also hold log files of the backup jobs, so there is a disk space consideration there too.


              • 4. Re: OSB Catalog Sizing & Other Catalog Questions
                trickster - oracle
                Hi Rich,

                back again.

                To your comment about the backup job log files on the admin/media servers, what kind of space do you normally reserve for this, and what factors affect its size and growth rates?

Can they be automatically managed by OSB, for example by setting parameters for log retention after successful backup completion, or is the management (and removal) of them a purely manual task?

Are they only required to debug a failed backup, for example, or are there other reasons to keep them for any length of time?
                • 5. Re: OSB Catalog Sizing & Other Catalog Questions
The length of time job transcripts are kept is controlled by the policy, and you can set that to as many days as you want. Personally I keep the job transcripts for the entire duration of the retention policy (3 months in my case) so I can always review the full backup details. Without the transcript you can only really see whether a job was successful or not.

So with debug turned off, the job transcripts aren't that large. My largest environment, as detailed above, has a 1.2GB transcript folder (/usr/etc/ob/xcr).

Hope that helps. As I said, the media servers will also contain logs in /usr/etc/ob/xcr, just something to keep in mind.