You've read the installation guides?
(the correct answer is "yes" :) )
You need to decide whether you want to install on the physical machines, or on virtual machines within each physical machine. There are many options for either approach.
Mahmoud_Rabie wrote:
I have a lab with OEL 5.4 Linux installed on all 26 PCs (nodes). I am a newbie to RAC. I hope to install/deploy Oracle 11gR2 RAC over these nodes.
1) What are the clear/explained steps to do that?
2) Can I prepare/setup an automatic/remote installation to simplify the process?
RAC requires two basic components that together determine the robustness, performance and scalability of the cluster:
- Interconnect
- Shared Cluster Storage
So the 26 nodes are just one component - you still need to address your architecture in terms of what you are going to use for the Interconnect and cluster storage.
The Interconnect is used for cache fusion - basically this means that it provides a faster means for one node to read a data block already read by another node, before hitting the storage layer with a physical read to get that data block.
If your Interconnect is slow (running 1Gb Ethernet for example), then it will likely be slower than the I/O fabric layer - making a logical read via cache fusion slower than a physical read from disk. In such a scenario, your cluster will perform quite badly and not scale at all.
The Interconnect must be (as per Oracle) a private and dedicated network. For 26 nodes, you need as a minimum a dedicated switch with 26 ports. For redundancy purposes, you need dual connections per node - 52 ports in total, thus a 64 port switch. On each RAC node, you need a dedicated dual port NIC for the Interconnect (used as a bonded interface by the Interconnect).
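As an illustration, on OEL 5 a bonded interface for the Interconnect is defined via the network-scripts files. A minimal sketch, assuming the two interconnect ports show up as eth1 and eth2 (device names, addresses and bonding options below are placeholders - verify the supported settings against the Grid installation guide for your release):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- private interconnect
DEVICE=bond0
IPADDR=10.0.0.1          # this node's private address (placeholder)
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- first slave (same for eth2)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

On older OEL/RHEL 5 releases the bonding driver may also need an `alias bond0 bonding` line in /etc/modprobe.conf instead of BONDING_OPTS.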
The two basic technologies to use for the Interconnect are 10Gb Ethernet and 40Gb InfiniBand. The physical infrastructure will be the same. A dual port 10Gb Ethernet NIC or dual port HCA card per server. 2 cables per server. And a 64 port Ethernet or InfiniBand switch. Or a 32 port switch if you want to forego Interconnect redundancy in order to decrease costs. The PCI NIC/HCA card needed for the Interconnect by default comes with dual ports - so there is no real cost saving in cutting dual port connectivity down to a single port.
Next item is the storage layer. What are you going to use for that? Are you going to use a fibre channel based storage array? In that case you also need another set of 26 dual port HBA PCI cards and cables for connecting to the fibre channel switch of the storage array.
Alternatives are FCoE (Fibre Channel over Ethernet), SCST (the generic SCSI target subsystem) or Direct NFS. In these cases, your Interconnect can also serve as the I/O fabric layer - assuming it has the capacity.
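If you go the NFS route, the shared datafile volumes need Oracle's prescribed mount options. A sketch of an /etc/fstab entry (server name and paths are placeholders, and the exact option list should be verified in the 11gR2 installation guide, as it differs per file type and platform):

```
# shared Oracle datafile volume over NFS (placeholder server/paths)
nfs-server:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```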
So 26 PCs actually means very little in building a cluster. A cluster is only as good as its Interconnect and Cluster Storage. Get that wrong and it does not matter what you use for cluster nodes - the cluster will not be robust, will not perform, and will be unable to scale.
Thank you for giving the best practices for building a RAC.
I think the first question can be answered by these guidelines and best practices, besides the installation guide.
Now, let me re-ask the second question in another way:
Can I (remotely) set up or prepare all nodes from one node (let me call it the coordinator)?
As John said - the Grid Infrastructure installer is run on the 1st node only and it installs the s/w on all nodes. The same with the RAC RDBMS installer.
Also, when DBCA is used to create a database, it is done on one node only and instances are created on all selected/relevant nodes.
The only manual interaction required on all nodes is prepping the servers - i.e. creating the grid and oracle o/s users, setting up trusted ssh connectivity for these users between the nodes, setting up shared device storage (via multipath, for example), configuring the bonded interfaces, and so on.
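Part of that per-node prep can be scripted from the coordinator node. A minimal sketch that generates the /etc/hosts entries (public, interconnect and VIP address per node) and prints the user/group creation commands for review rather than running them - all hostnames, IP ranges and uid/gid values are made-up placeholders, adjust to your own network and standards:

```shell
#!/bin/sh
# Sketch: generate name resolution entries and o/s prep commands for a
# 26-node RAC lab. Addresses and names are illustrative placeholders.

NODES=26

# Each RAC node needs three addresses in /etc/hosts (or DNS):
# public, private (interconnect) and VIP.
gen_hosts() {
    i=1
    while [ "$i" -le "$NODES" ]; do
        printf '192.168.1.%d racnode%02d\n' "$i" "$i"         # public
        printf '10.0.0.%d racnode%02d-priv\n' "$i" "$i"       # interconnect
        printf '192.168.1.1%02d racnode%02d-vip\n' "$i" "$i"  # VIP
        i=$((i + 1))
    done
}

HOSTS_ENTRIES=$(gen_hosts)
echo "$HOSTS_ENTRIES"

# O/S groups and users needed on every node (run as root on each node);
# printed here for review, not executed:
PREP_CMDS='groupadd -g 1000 oinstall
groupadd -g 1001 dba
useradd -u 1100 -g oinstall -G dba grid
useradd -u 1101 -g oinstall -G dba oracle'
echo "$PREP_CMDS"
```

The generated host list can then be appended to /etc/hosts on every node, after which the trusted ssh setup for the grid and oracle users can be done between the nodes.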
Refer to the installation manual for the Grid, RAC and o/s versions used.