I am running 18.104.22.168 on Solaris 10.
My database is about 560 GB total, of which 400+ GB are CLOBs.
My full backup takes approx. 1hr and 40 minutes "as compressed backupset".
My database is on the SAN, and my backups write to an NFS mount to another server which is mounted on the SAN.
Hope this helps...
Thanks for the info. Do you see the same as me regarding the size of the backupsets, i.e. not much smaller than the overall DB size? I guess in your case you may get some compression on the 160 GB of non-CLOB data.
Have you tried (or would you try) the backup without compression to get some timings?
You have some options:
-> Change the RMAN compression algorithm; it will be faster but compress less.
-> Use incremental backups with a block change tracking file, to back up only the blocks that recently changed.
-> Allocate multiple channels to use parallelism.
-> Turn on the backup optimization parameter.
I hope it helps you.
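The options above can be sketched as commands; the file path, channel count, and compression level here are illustrative assumptions, and the available algorithm names vary by release (LOW/MEDIUM/HIGH require the Advanced Compression option, BASIC does not):

```
-- In SQL*Plus: enable block change tracking so level-1 incrementals
-- read only changed blocks (file location is an example)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.chg';

-- In RMAN: pick a faster, lighter compression level (11gR2 syntax)
CONFIGURE COMPRESSION ALGORITHM 'LOW';

-- Parallelism via multiple disk channels, plus backup optimization
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
CONFIGURE BACKUP OPTIMIZATION ON;

-- Incremental backup picks up only changed blocks via the tracking file
BACKUP INCREMENTAL LEVEL 1 AS COMPRESSED BACKUPSET DATABASE;
```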
We are having the exact same issue running in a windows environment. We have one bigfile tablespace holding LOB data. Our database size is 178 GB and the bigfile tablespace is 113 GB. Our compressed backup takes 4 1/2 hours to run. We've played with the parallelization and cut about 20 minutes off the time but that's still way too long. We're still looking for a good option. You're not alone.
Which version of the DB?
In previous versions of RMAN, a bigfile tablespace took a long time to back up because RMAN could only process the bigfile tablespace serially.
In Oracle Database 11g, RMAN can back up large datafiles as a multisection backup, leveraging multiple output devices (multiple channels, either to disk or tape) to dramatically reduce the time it takes to back up the datafile.
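A multisection backup might look like the sketch below; the section size, channel count, and tablespace name are assumptions to illustrate the 11g syntax, not values from this thread:

```
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  -- Each channel works on 25 GB sections of the bigfile datafile in parallel
  BACKUP SECTION SIZE 25G TABLESPACE lob_bigfile_ts;
}
```

(lob_bigfile_ts is a placeholder tablespace name.)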
If it's like my scenario, you're probably not reducing the backup size by much. Have you tried running the backup without compression to see what time you get? In my case I only ended up using a little more disk space but went from 75 minutes to under 10 minutes. I've now implemented this strategy across all of our databases that mainly hold LOBs, and they have all gone from over 60 minutes to under 10 minutes without a large increase in disk usage, so it's a win-win for us.
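The like-for-like comparison is just the same command with and without AS COMPRESSED; the timing comments reflect the numbers reported in this thread for a LOB-heavy database, not general guarantees:

```
BACKUP AS COMPRESSED BACKUPSET DATABASE;  -- ~75 minutes in the case above
BACKUP AS BACKUPSET DATABASE;             -- under 10 minutes, slightly larger files
```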
As a side note, I tested a backup against a database that does compress well and found that with compression off it took twice as long. I assume in this case throttling back the I/O by placing more load on the CPU benefits the backup time; this is something I had observed a while back too. It's possible I may need to change the channel allocations when switching between compression and none, but as a like-for-like comparison, compression was faster. The DB was about 300 GB and compressed to 40 GB.
I would like to add that my reason for raising this thread was not that I thought I had an issue, but that it was an observation and I wanted to see if others see the same regarding LOB database backup times. It sort of makes sense that RMAN can't reduce LOBs much, as you could be storing images, PDFs, binary objects, etc. that may not compress well, so RMAN is just wasting CPU time trying to compress a block only to write it back out at a similar size. It may as well just pass it straight through to the I/O in this case.
In your bigfile case you can, as mentioned, use SECTION SIZE, although I'm sure I read about restore issues on Solaris, so check it out.
Other things that have helped us in the past.
Backups run faster with filesystemio_options=setall; not sure if this applies to Windows.
Another DB not using setall had an issue with file fragmentation (even on Linux). We defragged the files (moved them out and back) and the backups ran much faster.
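To check and change that parameter (it is static, so it needs an SPFILE and an instance restart to take effect):

```
SHOW PARAMETER filesystemio_options;
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
-- SETALL enables both asynchronous and direct I/O
-- where the platform and filesystem support them
```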
I posted below to another thread I had open. It didn't change the size of the backup but made a huge difference on the amount of time.
To take advantage of our processors, we configured the default:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 22;
The new backup command, with the above configuration:
backup as compressed backupset section size 2500M database plus archivelog;
This took our compressed backups of a 175 GB database, with a 113 GB tablespace holding LOB data, down from 4 1/2 hours to around 20 minutes.