I am on Solaris 5.10 and just upgraded the database to 10g R2.
In my case I use an NFS-mounted filesystem as an Oracle directory object, which is used
by Data Pump. None of the mount options mentioned in this thread have worked
so far on my Solaris 5.10 system. I have opened a thread in Metalink to
get this problem nuked once and for all. I will keep you informed.
The "noac" option is a huge performance hit over NFS.
E.g. applying CPU2006Apr to 9i and 10g on identical servers: one finished in 40 minutes, the other took 2.5 hours. This was just to back out the previous CPU patch, load the new one, and reset the file permissions (patches 4516865 & 4533592).
I suggest someone tries: event="10298 trace name context forever, level 32" in 10g to disable NFS mount point checking. I would still use "noac" and "forcedirectio" (Solaris) where datafiles, redo logs, etc. are mounted, but for the plain Oracle binaries, no "noac" and no "forcedirectio". Each server has its own copy of the binaries.
Complaints about NFS options during the 10g install?
Try "./runInstaller -ignoreSysPrereqs".
Use these at your own risk.
I ran into the same issue when creating a 10gR2 database (10.2.0.3).
The CREATE DATABASE statement failed when trying to create a controlfile on an NFS-mounted disk.
Then I found this suggestion:
"I suggest someone tries: event="10298 trace name context forever, level 32" in 10G to disable NFS mount point checking."
I was curious whether this would help me out.
I changed the init file and ran the create scripts, and indeed with the event set the NFS options are not checked, and the database was created successfully.
Oracle does not recommend using this workaround (of course), but it allowed me to create the database and wait until the system administrator changed the NFS mount settings during maintenance hours.
The error is documented in Metalink Doc ID: Note 387700.1
ORA-27054 ERRORS WHEN RUNNING RMAN WITH NFS
To implement the solution, please execute the following steps:
From the errors that we see in the RMAN stack, this looks like Bug 5146667.
This behaviour has been observed on the Solaris and AIX platforms.
As suggested in the bug, the recommended workaround is to use Event 10298.
1) Set Event 10298 in the init file:
event="10298 trace name context forever, level 32"
If you are using an spfile, the following can be done instead:
SQL> alter system set event='10298 trace name context forever, level 32' scope=spfile;
Once you have set the above parameter, restart the instance and check as follows:
SQL> select name, value from v$parameter where name = 'event';
event    10298 trace name context forever, level 32
1 row selected.
Then try the backups again.
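After setting the event and restarting, a quick smoke test is to run a small RMAN backup to the NFS location that previously raised ORA-27054. A minimal sketch; the backup path below is hypothetical and should be replaced with your actual NFS mount point:

```
$ rman target /
-- Back up just the controlfile to the NFS destination as a cheap test;
-- if this succeeds, the event is suppressing the mount-option check.
RMAN> backup current controlfile format '/oradata/backup/cf_%U';
```

If the controlfile piece is written without error, a full database backup to the same destination should behave the same way.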
Check if a one-off patch for 10.2.0.1 is available for your platform.
Please follow these steps to download and test it:
1. Log in to Metalink
2. Go to Patches and Updates -> Simple Search
3. Enter patch number 5146667 and the platform you are on (your version of the OS)
4. Download the patch
5. Read the README file for installation instructions and test whether it fixes your problem
Once the patch is shown as applied, you can do away with the event.
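How you remove the event depends on how it was set. A minimal sketch, assuming the spfile route from step 1 was used (for a plain init.ora, just delete the event= line and restart):

```sql
-- Reset the event parameter in the spfile; the SID clause is required
-- on 10g for ALTER SYSTEM ... RESET.
ALTER SYSTEM RESET event SCOPE=SPFILE SID='*';
-- Bounce the instance so the default NFS mount checking is back in force.
SHUTDOWN IMMEDIATE
STARTUP
```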
How HUGELY irritated am I? Been using NFS for Oracle since, well, before they let you. Here are my current mount options; the vfstab entries at the bottom are just an FYI. My current 'pisser' is Data Pump and the destination directory object.
Why does Oracle require specific NFS mount options for a directory object? Or, more specifically, is there an option to bypass the 'special' test? I want to export to a floating, automounted location (/home/oracle in this case). Yes, I realise the importance of directory objects, you know, being there and what have you; however, I would think it just as simple to throw an exception when the path is missing as when Oracle doesn't like the mount specifics, ...or not?
Is there an official way around this before I make up my own? To use Data Pump pointing to whatever location I want without having to architecturally juggle what should be pretty darn simple? I should think, if they were serious about it, that I wouldn't have been able to create the directory object at all if it didn't like where it was, yeah? Dump the pump and go back to 'exp' then?
Thanks for your time in reading my rant.
Any insight/help/factoids will be very much appreciated.
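For what it's worth, the behaviour complained about above is easy to reproduce: CREATE DIRECTORY only records the path string and never validates it, so any check (existence, NFS mount options) is deferred until Data Pump actually opens a file there. A minimal sketch, with a hypothetical path:

```sql
-- Succeeds even if /home/oracle/dp does not exist, or is an
-- automounted NFS path that Oracle will later reject:
CREATE DIRECTORY dp_test AS '/home/oracle/dp';

-- The error (e.g. ORA-27054 on an unsupported NFS mount) only
-- appears when expdp/impdp tries to write the dump file there:
--   expdp system/... directory=dp_test dumpfile=test.dmp
```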
## Oracle Binaries
bt-na01-stor:/vol/oracle/product/sol_64/10203 - /opt/oracle/product/10 nfs - yes rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid
bt-na01-stor:/vol/oracle/product/sol_64/9206 - /opt/oracle/product/9 nfs - no rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid
## Oracle datafiles
bt-na01-stor:/vol/oradata - /oradata nfs - yes rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid,forcedirectio
## Oracle Admin
bt-na01-stor:/vol/oracle/admin - /opt/oracle/admin nfs - yes rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid
I'm now using my $ORACLE_BASE/admin mount for the DATA_PUMP_DIR directory objects, significantly less irritated now.
I use Nagios to schedule, drive, and rotate my Data Pump jobs. Things I've found that suck: a JOB_NAME in (or containing) lower case is not something you can attach to. Messing in any way with the schema's Data Pump master table or the dump file renders everything clueless and all the pieces get stuck. I have yet to discover how to remove the pieces of Data Pump jobs that have failed (I've run several different scenarios). Data Pump seems a pretty fragile three-tiered setup, at least as far as cleanup and recovery go.
Does anybody know how to clean up Data Pump jobs that have lost their heads? By heads I mean the dump file was removed and/or the master table was renamed or dropped. Oh, and when I drop the master table, the DBA_DATAPUMP_JOBS view thinks the JOB_NAME is the recycle-bin object (i.e. BIN$Qtb+amPsJ5zgRAgAINH2ow==$0); try attaching to that.
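Regarding cleanup: a sketch of the usual approach (not official guidance) is that a Data Pump job is essentially its master table plus any server processes, so dropping the orphaned master table and purging the recyclebin makes the job disappear from DBA_DATAPUMP_JOBS. The owner and table name below are examples; the master table normally has the same name as the job, in the job owner's schema:

```sql
-- Find jobs stuck in a NOT RUNNING or otherwise dead state.
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs;

-- Drop the orphaned master table (example name), then purge the
-- recyclebin so no BIN$... object lingers in the view.
DROP TABLE scott.SYS_EXPORT_SCHEMA_01;
PURGE DBA_RECYCLEBIN;
```

This also explains the BIN$... symptom above: DROP TABLE alone moves the master table into the recyclebin, and the view keeps reporting it under its recycle-bin name until it is purged.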