GI Version: 22.214.171.124
Platform OEL 6.3
We have two extra NICs on our RAC machines
We are planning to
bond eth0 and eth1 to form bond0
bond eth2 and eth3 to form bond1
bond0 will be used for the public interface
bond1 will be used for the private interconnect.
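On OEL 6, that plan would typically be implemented with the kernel bonding driver via ifcfg files. A minimal sketch of the bond0 (public) side, assuming mode 1 (active-backup); the IP address, netmask, and device names are placeholders, not values from this thread:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (public interface)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

bond1 for the private interconnect follows the same pattern with eth2/eth3 on the private subnet. Once the bonds are up, the installer sees bond0 and bond1 as ordinary interfaces.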
But will the grid installer (GUI) recognize this bonding? I have a feeling that the installer will only see eth0, eth1, etc.
You can use bonded NICs for the public and/or private networks, no problem - if you want to. With the current release, though, it is probably better to expose the physical NICs to Grid Infrastructure for the interconnect, at least: HAIP will load-balance across them.
I always demonstrate both techniques when running training courses; we have some scheduled if you are interested.
Oracle Certified Master DBA
As JohnWatson said, yes, you can use bonded NICs for the public and private networks. Bonding two network interfaces for failover is, moreover, best practice for RAC network configuration.
Mahir M. Quluzade
The installer does not recognise specific types of interfaces - it selects what it thinks are the most appropriate public and private (Interconnect) interfaces based on things like subnet mask (and hopefully the routing table too).
As installer/sysadmin, you need to make the final decision as to which interface should be used as the public interface, and which as the private interface for the Interconnect.
In my view, the Interconnect interface must always be a bonded interface.
A broken cable on the public interface will not take down that node. A broken cable on the Interconnect interface will - unless there are dual cables into a bonded Interconnect interface and the second cable is still fine (which is usually the case). A node eviction due to a single broken cable or failed switch port is a silly reason for losing that node, when it could easily have been prevented via proper redundancy.
I'm not going to say that I disagree with this - BV knows RAC inside out and backwards - but I do think it is debatable.
GI 11.2.x can manage up to 4 interconnects with its HAIP protocol. As far as I have been able to tell, it spreads traffic across them all, and handles failures with no problems - though of course the interconnect will be running at reduced bandwidth. Why would you not let Uncle Oracle handle this? Or of course one could present multiple bonded NICs to GI, and let HAIP balance its traffic across them.
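For the multiple-physical-NIC approach, the additional interfaces can be classified as interconnects with oifcfg so HAIP picks them up. A sketch, assuming eth2 and eth3 on a 192.168.10.0 private subnet (the interface names and subnet are placeholders):

```
# run as the GI owner from one node; -global applies cluster-wide
oifcfg getif                                   # show current classifications
oifcfg setif -global eth2/192.168.10.0:cluster_interconnect
oifcfg setif -global eth3/192.168.10.0:cluster_interconnect
```

HAIP then brings up a 169.254.x.x virtual IP on each registered interface and spreads interconnect traffic across them, relocating the virtual IPs if an interface fails.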
I have heard (no, I can't give references) that in some circumstances, particularly when using mode 1 (active/backup) bonding rather than mode 0 (balance round robin), the bonding module may not react fast enough in the event of failure, so GI detects the error first. If this is likely, then bonding would not be as reliable.
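The detection-speed concern comes down to the bonding driver's link-monitoring settings. A sketch of the relevant mode 1 options, assuming MII link monitoring (the values shown are illustrative defaults, not recommendations):

```
# BONDING_OPTS line in ifcfg-bond1
# miimon=100 -> poll link state every 100 ms; failover takes roughly
#               miimon + updelay ms, during which GI may see errors first
BONDING_OPTS="mode=1 miimon=100 updelay=0 downdelay=0 primary=eth2"
```

If the polling interval plus any up/down delay exceeds GI's tolerance, the cluster may react to the failure before the bond does, which is the scenario described above.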
Well, I do have a reason for my view John.
The reason I prefer bonding is that it is done at the o/s level. This means any IP software I run can run over that bonded interface without even having to know what bonding is. I do not need a separate (and complex) software stack for that.
The reason why I (or the suspicious pessimist in me) see HAIP as having been developed by Oracle is that many(?) build clusters using Gigabit Ethernet as the Interconnect architecture and then run into scalability issues. So instead of using a better architecture (called InfiniBand), more Gigabit ports are thrown at the problem. And certain vendors just love this approach to scalability...