This is not supported. The private network has to be private: it may only share switches with other networks via VLANs if a Quality of Service can be guaranteed.
But even then, best practice is to keep it physically separate.
I also doubt that you will be able to install GI this way, since it does not allow you to specify a :X (alias) interface. It only accepts base devices such as bond0 or eth0.
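To make the point about alias interfaces concrete, here is a purely illustrative sketch (this is my own toy validator, not Oracle installer code) of the rule described above: base device names pass, :X alias names do not.

```python
# Illustrative sketch only: mimics the described installer behavior of
# rejecting alias ("virtual") interfaces like eth0:1 and accepting base
# devices like eth0 or bond0. This is NOT Oracle code.

def is_acceptable_interface(name: str) -> bool:
    """Reject alias interfaces (names containing ':'); accept base devices."""
    return name != "" and ":" not in name

print(is_acceptable_interface("bond0"))   # base bonded device -> True
print(is_acceptable_interface("eth0"))    # base physical device -> True
print(is_acceptable_interface("eth0:1"))  # alias interface -> False
```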
Here is a short explanation of why this is not a good idea. You think that this kind of bonding will give you higher availability, but the opposite will be the case: if the public network comes under high load (e.g. from heavy inserts), it will probably saturate your network bandwidth. As a result the private interconnect can no longer communicate, leading to (a) slow performance and, especially, (b) node evictions.
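A back-of-the-envelope sketch of that saturation argument (all numbers are assumed for illustration; a real 1 Gbps link and real workloads will differ):

```python
# Assumed figures: one shared 1 Gbps bonded link carrying both public
# traffic and the private interconnect. If public load eats most of the
# link, almost nothing is left for Cache Fusion / heartbeat messages.

LINK_GBPS = 1.0            # shared bonded link capacity (assumed)
public_load_gbps = 0.95    # heavy insert/batch traffic on the public side

interconnect_left = max(LINK_GBPS - public_load_gbps, 0.0)
print(f"Bandwidth left for the interconnect: {interconnect_left:.2f} Gbps")
# With so little headroom, interconnect messages queue up -- which is what
# surfaces first as slow performance and eventually as node eviction.
```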
So even though best practice says to bond the network interfaces for both public and private, if you cannot guarantee a truly "private" network it is better to go with a single card for public and a single card for the interconnect.
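One simple sanity check for that recommended layout is that the public and private networks sit on separate, non-overlapping subnets. A small sketch using Python's standard ipaddress module (the example addresses are made up):

```python
# Sketch: verify that the public network and the dedicated interconnect
# are on distinct, non-overlapping subnets. Example CIDRs are assumptions.
import ipaddress

def on_separate_subnets(public_cidr: str, private_cidr: str) -> bool:
    pub = ipaddress.ip_network(public_cidr, strict=False)
    priv = ipaddress.ip_network(private_cidr, strict=False)
    return not pub.overlaps(priv)

# e.g. eth0 = public, eth1 = dedicated interconnect
print(on_separate_subnets("192.168.10.0/24", "10.0.0.0/24"))      # True
print(on_separate_subnets("192.168.10.0/24", "192.168.10.0/25"))  # False
```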
Thanks for the response,
We have two NICs of the same speed (1 Gbps) and implemented bonding on them, so we got a single bond interface for the RAC installation.
We installed and deployed Oracle 11g R2 RAC/Grid using this single bonded NIC, and it is installed and working, but this site is now going live as a production site.
So we have to confirm whether this is supported and will work in a production environment.
Can you please provide the advantages and disadvantages, with detailed examples?
Thanks & Regards
Edited by: hitgon on Sep 20, 2011 11:15 AM
This is one case where I would say that you are playing a very dangerous game with your production system. You asked for expert opinions and you have been informed that this is a VERY BAD IDEA! While you think that it works, don't come asking about node evictions when your bonded NICs get saturated. I can say that I am an expert, having installed, configured and spent time troubleshooting more than 75 clusters (2-6 nodes) on some very impressive hardware. The "big one" was 250 TB on a 3-node RAC on Sun 6900s (48 dual-core CPUs x 192 GB main memory, with 8 NICs using Sun IPMP and 8 HBAs for SAN connectivity). When you start having "weird" issues, Oracle will not support your configuration; you will need to fix it before they even begin troubleshooting it. Tell your manager that unless they spring for the appropriate configuration they should execute the following command: "Alter manager update Resume;" because it is not "IF" it will fail, but "WHEN" it will fail. Trust me, you and your managers have put your system in a very precarious position.
Very similar background as onedbguru here, and the same response: you are asking for trouble with this setup. A single bonded pair of NICs can be prone to failure under heavy load (I have seen many instances of flapping NICs due to dropped packets on busy networks, or from switch problems during load).
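The "flapping" failure mode mentioned above can be pictured as a link that bounces between up and down repeatedly. A minimal sketch of detecting that from a chronological list of link states (real monitoring would parse kernel or bonding-driver logs; the threshold here is an arbitrary assumption):

```python
# Illustrative sketch: flag a "flapping" NIC from a sequence of observed
# link states. The threshold of 3 transitions is an assumed value.

def count_transitions(states):
    """Count up<->down transitions in a chronological list of link states."""
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

def is_flapping(states, threshold=3):
    return count_transitions(states) >= threshold

stable   = ["up", "up", "up", "down", "up"]            # one blip
flapping = ["up", "down", "up", "down", "up", "down"]  # repeated bouncing

print(is_flapping(stable))    # False
print(is_flapping(flapping))  # True
```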