We are upgrading our Solaris 10gR2 (10.2.0.5.0) 2-node RAC databases to 11gR2, using the Oracle DB Upgrade Assistant (DBUA) as opposed to doing the upgrade manually. The upgrades have been finishing without error, but during the portion of the DBUA where OEM is being upgraded, the progress sits at 87% for 2-2.5 hours. We are tight on our upgrade window and are wondering how, or if, we can shorten this step. I realize that the DBUA is automated, but I am puzzled as to why the LONGEST portion of the upgrade is focused on OEM. I wonder if it is related to these DBs being RAC, since we have not noticed this issue much on our single-node Linux DB upgrades. It may not be related, however.
We always have all OEM services down prior to running the upgrade. We have both Grid Control agents and DB Control agents installed on these RAC DBs (the latter as a backup in case we were to lose Grid Control). However, we never run both agents simultaneously, and we primarily use the Grid Control agents for our DB monitoring. In all cases, as stated previously, all OEM resources are DOWN prior to running the DBUA.
Thank you for any guidance.
This is why we prefer the manual upgrade path, where you have more control over the process. I suggest using the manual approach and doing the OEM upgrade separately, after the DB upgrade. Then you could enable tracing of EMCA and have enough time to investigate why it is failing.
How To Manually Upgrade DBConsole Configuration After Database Upgrade [ID 1293264.1]
How to Trace / Debug the EMCA Tool in 10g and 11g [ID 330689.1]
Thanks for taking the time to answer my initial inquiry. We have to use the DBUA, as it has been the approved and, thus far, adhered-to approach for all prior upgrades. Prior to this last set of upgrades, we had no significant delay in the OEM upgrade step of the DBUA. In reality, the OEM upgrade portion is NOT failing; it completes correctly. We are just trying to see if there are a few actions we could take up front to reduce the OEM upgrade time. Thanks again.
You would agree that in order to propose actions to decrease the time of the OEM upgrade, we first need to identify and understand why it is taking so long, especially since, as you said, it is not something you have seen in other environments.
So, have you got more detail on where this time is spent? The upgrade process updates the SYSMAN schema. Does it get stuck on something running in the schema, or is it blocked by some other process?
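One way to answer this while DBUA sits at 87% is to look at what the SYSMAN sessions are waiting on and whether any long-running operation is in flight. This is a hedged sketch, not a verified diagnostic: it assumes OS authentication as SYSDBA on the DB node and uses only standard views (`v$session`, `v$session_longops`); the function name is our own.

```shell
#!/bin/sh
# Sketch: while the OEM step of DBUA is running, check what the SYSMAN
# schema upgrade sessions are doing (wait events, blockers, long ops).
check_sysman_activity() {
    sqlplus -s / as sysdba <<'SQL'
set linesize 200 pagesize 100
-- Sessions working as/in SYSMAN and what they are currently waiting on
select sid, serial#, username, event, blocking_session, seconds_in_wait
from   v$session
where  username = 'SYSMAN' or schemaname = 'SYSMAN';

-- Long-running operations still in progress (e.g. index rebuilds)
select sid, opname, sofar, totalwork, time_remaining
from   v$session_longops
where  totalwork > 0 and sofar < totalwork;
exit
SQL
}

# Usage (on a node of the RAC database being upgraded):
#   check_sysman_activity
```

If `blocking_session` is populated, something else holds a lock the upgrade needs; if `v$session_longops` shows steady progress, the step is simply slow rather than stuck, which matches the poster's report that it eventually completes.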