I've just checked the support matrix, and Solaris Cluster 3.2 11/09 does not support 11gR2; if you want to install that, you'll need to get 3.3.
However, one question: what is the driver behind using guest LDoms? Did you consider simply using zone clusters instead? A zone cluster will outperform a guest LDom simply because there is lower I/O overhead.
The motive behind using LDom guest domains as RAC nodes is to have better control over resource allocation, since I will have more than one guest domain, each performing a different function. The customer wants Oracle RAC alone (without Sun Cluster).
I will have two T5120s and one 2540 array as shared storage.
My plan is to configure, on each physical machine:
- a control/I/O domain with 8 vCPUs and 6 GB of memory
- one guest domain with 8 vCPUs and 8 GB of memory, with shared network and disks, participating as a RAC node (I don't know yet whether I will use Solaris Cluster)
- one guest domain with 12 vCPUs and 14 GB of memory, with shared network and disks, participating as a BEA WebLogic cluster node (not on Solaris Cluster)
- one guest domain with 4 vCPUs and 4 GB of memory, with shared network and disks, participating in an Apache web cluster (on Solaris Cluster)
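For what it's worth, a plan like this maps onto the LDoms Manager CLI roughly as follows. This is only a sketch for one of the RAC guest domains; the domain name (racnode1), vswitch (primary-vsw0), and virtual disk volume (racvol1) are made-up names, and the control domain is assumed to be the default "primary":

```shell
# Shrink the control/I/O domain to the planned footprint (8 vCPUs, 6 GB)
ldm set-vcpu 8 primary
ldm set-memory 6G primary

# Create the RAC guest domain with 8 vCPUs and 8 GB of memory
ldm add-domain racnode1
ldm set-vcpu 8 racnode1
ldm set-memory 8G racnode1

# Attach a virtual network device and a virtual disk backed by a 2540 LUN
# ("primary-vsw0" and "racvol1@primary-vds0" are hypothetical names)
ldm add-vnet vnet0 primary-vsw0 racnode1
ldm add-vdisk vdisk0 racvol1@primary-vds0 racnode1

# Bind resources and boot the guest
ldm bind-domain racnode1
ldm start-domain racnode1
```

The other guest domains would follow the same pattern with their own vCPU and memory sizes.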
Now, my question is: is it a supported configuration to have guest domains as Oracle RAC participants for 11gR2 (either with or without Solaris Cluster)?
If I need to configure the RAC nodes on Solaris Cluster, is it possible to have two independent clusters on LDoms: one two-node cluster for RAC and another two-node cluster for the Apache web tier?
You can do resource control with zone clusters too, that is, limit CPU and memory.
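For example, CPU and memory caps can be set directly in the zone cluster configuration. A sketch of the interactive session (the zone cluster name "rac-zc" and the cap values are illustrative, not a recommendation):

```shell
# clzonecluster uses zonecfg-style resources; capped-cpu and
# capped-memory limit the zone cluster's footprint on each node
clzonecluster configure rac-zc
clzc:rac-zc> add capped-cpu
clzc:rac-zc:capped-cpu> set ncpus=8
clzc:rac-zc:capped-cpu> end
clzc:rac-zc> add capped-memory
clzc:rac-zc:capped-memory> set physical=8G
clzc:rac-zc:capped-memory> end
clzc:rac-zc> commit
```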
While I believe your configuration is viable (one pair of guest LDoms in a Solaris Cluster running scalable Apache web server, and one pair of guest LDoms without Solaris Cluster running Oracle 11gR2 RAC), I still feel that running it all under a single cluster with multiple zone clusters is simpler.

For a start, you have only two O/S instances to manage. You also get a higher degree of memory sharing, whereas with multiple LDoms each O/S and its associated binaries has a separate memory footprint. Furthermore, guest LDoms have a longer I/O path through the virtualization layer, and if you have only one control domain per system and it fails, all the guest LDoms on that system hang.

On the other hand, the separate O/S instances do give you separate points of control, that is, the ability to run separate O/S versions.
Anyway, it just shows there are always many ways to achieve the same goal :-)
Yes, that is possible, though you've doubled your OS instance count for that eventuality. The other option could be to consider use of some Guest LDom clusters and other zone clusters. Really, all I'm concerned with is that you make your decision when you have all the options in view.
Sun Blueprint 820-7931-10 discusses the configuration in detail: nxge2 and nxge3 are placed in an IPMP group, assigned to a vswitch, and routing is used to build the private interconnect for Oracle RAC. In my case I need another guest domain that will participate as a Solaris Cluster node. Can you help me understand how to assign the cluster heartbeat for Solaris Cluster? Because nxge2 and nxge3 are already used by the Oracle cluster interconnect for the RAC nodes, I am not sure whether I can use the same interfaces for the Solaris Cluster interconnect of the other guest domain, or whether I need to add another dual-port Gigabit Ethernet card.
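If I do end up adding another dual-port card, I imagine the configuration would look roughly like this. All names here are my guesses: nxge4/nxge5 assume the second card, and the vswitch and domain names are hypothetical:

```shell
# In the control domain: two vswitches on the new ports, dedicated
# to the Solaris Cluster private interconnect and kept separate from
# the RAC interconnect on nxge2/nxge3
ldm add-vsw net-dev=nxge4 priv-vsw0 primary
ldm add-vsw net-dev=nxge5 priv-vsw1 primary

# Give the Apache-cluster guest domain one vnet on each vswitch,
# to be offered to scinstall as the two cluster transport adapters
ldm add-vnet priv-vnet0 priv-vsw0 apache1
ldm add-vnet priv-vnet1 priv-vsw1 apache1
```

Please correct me if sharing the existing interfaces is in fact supported and this extra hardware is unnecessary.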