Please check this:
Things to check before upgrade to 11.2.0.3 (Doc ID 1363369.1)
Things to Consider Before Upgrading to 11.2.0.2 Grid Infrastructure (Doc ID 1312225.1)
Oracle Clusterware (formerly CRS) Rolling Upgrades (Doc ID 338706.1)
11.2.0.3 Patchset Location For Grid Infrastructure And RDBMS (Doc ID 1362393.1)
RACcheck 11.2.0.3 Upgrade Readiness Assessment (Doc ID 1457357.1)
Going from CRS 10.2 to Grid Infrastructure 11.2 is not the simplest of upgrades. It isn't the worst of upgrades either. But I had a number of issues that I had to sort out. The biggest thing was that 11.2 introduces the SCAN Listener. See Note 887522.1 SCAN Explained.
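If you want a quick way to look over the SCAN setup once the upgrade is done, roughly the following should do it. This is just a sketch; "mycluster-scan" is a placeholder for whatever SCAN name you registered in DNS:

    srvctl config scan            # shows the SCAN name and its VIP addresses
    srvctl status scan_listener   # confirms the SCAN listeners are running
    nslookup mycluster-scan       # DNS should cycle through the SCAN IPs

Running the nslookup repeatedly should return the SCAN addresses in round-robin order, which is how client connections get spread across the cluster.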
The multicasting requirement tripped me up for a short bit: Note 1212703.1 GI Startup May Fail Due to Multicasting Requirement. I think this was fixed in one of the 11.2 patchsets, so this may not apply to you.
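If you want to test multicast yourself before the upgrade, that note ships a small Perl script for it. As I recall, the usage is roughly this (the node list and private interface name are placeholders for your environment):

    perl mcasttest.pl -n node1,node2 -i eth1

It tries the 230.0.1.0 and 224.0.0.251 multicast groups over the interconnect and tells you which one works.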
The other thing is that after you upgrade your cluster software, you won't be able to bring up all instances of your 10.2 database until you pin the nodes. See Note 948456.1 Pre 11.2 Database Issues in 11gR2 GI Environment.
It's an easy thing to do; a quick sketch follows.
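For reference, pinning is just one crsctl call per node, run as root out of the new Grid home (the node names here are placeholders):

    # as root, from the 11.2 Grid Infrastructure home
    crsctl pin css -n node1
    crsctl pin css -n node2

    # verify: olsnodes should now report each node as Pinned
    olsnodes -t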
Some other reading that may be beneficial:
Note 1053147.1 11gR2 Clusterware and Grid Home - What You Need to Know
Note 969254.1 How to Proceed from Failed Upgrade to 11gR2 GI on Linux/Unix
Note 810394.1 RAC Starter Kit (Platform Independent)
I want to emphasize what I consider the most important point last in my reply: Test, Test, and Test. With today's VMs at your disposal, there is no reason you can't set up a 2-node cluster on the old version and practice the upgrade. Figure out what goes wrong, fix it, and then *document it*. Your documentation should be a step-by-step instruction guide. When it comes time to do this in production during your downtime window, that guide should contain everything you need so that the upgrade goes smoothly. Adequate testing is the only way to upgrade production without issues.
Another nice thing about VMs is that you can take a snapshot of the old version. When things go wrong, revert to the snapshot and try again. When I started doing this, a failed upgrade often meant manually removing all the software and reinstalling the old version. The benefit was that I got pretty good at cleaning things up and reinstalling; the big downside was all the time it cost, which is exactly the time snapshots save you. At one point, I realized that even if I botched the cluster upgrade, I could wipe the cluster software, install anew, and then get the database up and running. I blogged about the idea here: Installing RAC for a Database with Datafiles » Peasland Database Blog
It was nice to know that even if I screwed it up I had a fallback plan.
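To make that snapshot workflow concrete: if your test cluster runs on VirtualBox, say, the take/revert cycle is just a few commands per VM (the VM name "rac1" and the snapshot label are whatever you chose):

    VBoxManage snapshot rac1 take pre-upgrade    # before touching the cluster software
    # ...attempt the upgrade; if it goes sideways:
    VBoxManage controlvm rac1 poweroff
    VBoxManage snapshot rac1 restore pre-upgrade
    VBoxManage startvm rac1

Do the same for every node in the test cluster so the whole thing reverts to a consistent state.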
Agree with Brian
You can test this approach and use it; I used the same technique at my previous company.