I need some help getting past a problem situation. I have an Oracle 10g database on Solaris, with a good RMAN backup from a few days ago. We lost one datafile due to storage errors. No problem - I restored it from RMAN and that went fine. Then I started rolling forward with our archive logs and ran into an issue with one log: it's also corrupted and can't be read. So the datafile recovery is stuck at that point (a couple of days ago). The database is up and only this one file is offline; everything else is fine. All activity in the db is suspended while we sort this out.
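For reference, the single-datafile restore path described above looks roughly like this from RMAN. This is a sketch only; the file number (7) is a placeholder, not from the original post:

```sql
-- RMAN, connected to the target database. File number 7 is a placeholder.
SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
RESTORE DATAFILE 7;
RECOVER DATAFILE 7;                       -- roll-forward stops when it requests the corrupt archive log
SQL 'ALTER DATABASE DATAFILE 7 ONLINE';   -- fails until media recovery completes
```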
Mostly the datafile contained indexes, which of course can easily be recreated; I dropped those. A small portion of the datafile was actual data which I need to retain, but it's old data, and the changes in the pending archive logs should not matter much for this data, if at all. So what I'd like to do is bring the datafile online at the point up to which we were able to roll forward with the archive logs. Can I do an incomplete recovery for just this one file? If so, how?
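To see exactly which file is stuck and the SCN/time that roll-forward reached, the standard dynamic performance views can help. A diagnostic sketch (none of these queries appear in the original post):

```sql
-- Which datafiles still need media recovery, and how far recovery got:
SELECT file#, online_status, error, change#, time
  FROM v$recover_file;

-- Map those file numbers back to names and statuses:
SELECT file#, name, status
  FROM v$datafile
 WHERE file# IN (SELECT file# FROM v$recover_file);
```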
It seems like this should be a straightforward procedure, similar to 'open resetlogs', but I can't find the answer anywhere. Again, the database is up and all the other files, tablespaces, etc. are fine, so I don't want to touch them at all. If I can lose the pending changes for just this one file, I can live with that.
Any insights / suggestions would be GREATLY appreciated. Thanks in advance.
Set _ALLOW_RESETLOGS_CORRUPTION=TRUE. It is a hidden parameter which allows you to open the database in an inconsistent state. I hope you are doing this on a different server. Recover it there, export the content you want, and import that into your production environment. This link will help.
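On a throwaway clone only, the approach above would look roughly like this from SQL*Plus. This is a hedged sketch: the hidden parameter is unsupported, the exact steps vary, and this should never be run against production:

```sql
-- Clone instance only; _allow_resetlogs_corruption is an unsupported hidden parameter.
-- Assumes the full backup has been restored on the clone.
ALTER SYSTEM SET "_allow_resetlogs_corruption" = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL;   -- apply logs, then cancel when the corrupt log is requested
ALTER DATABASE OPEN RESETLOGS;   -- opens inconsistently; export the needed data, then discard the clone
```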
Thanks for the tip. That's an interesting idea, and maybe something we might be able to try as a last resort. I could do a complete restore of the db on a temporary system, open that one with the hidden parameter, and grab the data I need. Unfortunately it's a 5 terabyte database and we probably don't have storage available to make another copy of it ... so it's looking like the most efficient option may be to lose the little bit of data left in the file (it will be dropped anyway once it's a few months old). I'll post an update with the outcome here ... if we have one soon ... Thanks!