I have serious doubts whether this will improve performance at all. A couple of issues come to mind.
Can you configure an I/O load balancing algorithm for a bonded network interface, like you can with multipath and dual fibre channel HBA ports? I have not seen any such option for bonded network interfaces.
TCP communication is serial, not parallel. Packets are sequenced at the destination and processed in sequence. So even if you send 500 packets near-simultaneously, at the destination packet 1 must be processed before packet 2, packet 2 before packet 3, and so on. So actual speed and latency are critical.
Network infrastructure determines basic network performance and speed. Having 10 x 1GigE interfaces as a single bonded interface does not change the 1GigE network into a 10GigE network.
QoS classes on routers determine routing priority and bandwidth. If enabled, 100 interfaces bonded for performance will mean diddly-squat.
Networks are shared media. It may be 10Gb/s - but that does not mean your network application process is able to claim the entire 10Gb/s for itself.
If you want serious network performance, with low latency, there is only one answer. And it is the same answer as for the NYSE, where microsecond (1,000,000th of a second) latencies are critical.
And the answer is Infiniband. HCA (dual port) cards provide QDR (quad data rate) speeds of 40Gb/s, running into 2 redundantly configured Infiniband switches. You configure the dual HCA ports as network interfaces on Linux, and slap a bonded interface (for redundancy) on top. You can run IP over it (called IPoIB). And to make that really fast, you link your network applications to SDP (Sockets Direct Protocol) - which eliminates all the IP stack call overheads.
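As a rough illustration of the IPoIB-plus-bonding setup described above - a sketch only, assuming a RHEL-style system with the OFED stack installed and the two HCA ports appearing as `ib0` and `ib1` (all names and addresses here are illustrative):

```shell
# Load the IPoIB driver so the HCA ports show up as network interfaces
modprobe ib_ipoib

# Active-backup bond over the two IPoIB interfaces - for redundancy,
# one path per redundant InfiniBand switch
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"
EOF

cat > /etc/sysconfig/network-scripts/ifcfg-ib0 <<'EOF'
DEVICE=ib0
TYPE=InfiniBand
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
# ifcfg-ib1 is identical apart from DEVICE=ib1
```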
For a file server - you can make that an intelligent storage server. So instead of NFS or FTP/SCP, you share the raw disks of the server on the fabric layer of Infiniband. This enables other servers to map these as local SCSI devices and use the SCSI RDMA (Remote Direct Memory Access) Protocol (SRP) to read and write directly to those remote disks.
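To make the SRP idea concrete, here is a hedged sketch of mapping a remote disk on the initiator side - the GUIDs and port identifiers are placeholders that would come from your own fabric:

```shell
# Sketch only - the id_ext/ioc_guid/dgid/service_id values are
# placeholders discovered from your own fabric.
modprobe ib_srp                 # load the SRP initiator module

# Discover SRP targets visible on the fabric
ibsrpdm -c

# Log in to a target by writing its description to the add_target
# attribute of the local HCA port; the remote disk then appears
# as an ordinary local SCSI device (e.g. /dev/sdc)
echo "id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=..." \
    > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target

lsscsi    # the mapped remote disk is listed alongside local disks
```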
- Infiniband is not expensive (may even be cheaper than 10GigE)
- Infiniband is the #1 Interconnect technology amongst the 500 fastest supercomputer clusters in the world
- Oracle uses Infiniband in their Exadata Database Machine
So it is proven, robust and mature technology.
For additional info:
A TCP/IP address is just a logical construct. Similarly, if you have a couple of mailing addresses, your mail won't arrive any faster. To utilize 2 network cards and have them work as a team to increase performance, you have to configure network bonding with link aggregation. You will also need a managed network switch to configure such an option. There are many configuration examples if you Google for RHEL link aggregation.
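For reference, a minimal RHEL-style link-aggregation sketch - assuming interfaces `eth0`/`eth1` and a managed switch with a matching LACP port-channel configured on the other end (addresses are illustrative):

```shell
# 802.3ad (LACP) bond - the switch ports must be in a matching LACP group
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
EOF

# Enslave both NICs to the bond
for dev in eth0 eth1; do
cat > /etc/sysconfig/network-scripts/ifcfg-$dev <<EOF
DEVICE=$dev
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
done

cat /proc/net/bonding/bond0   # after a network restart: verify mode and slaves
```

Note that with 802.3ad a single TCP flow still hashes to one physical link; aggregation helps across multiple flows, not within one.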
Thank you for the information. The key question is whether a 10Gbit stream coming into interface #1 from one DB server and a second 10Gbit stream coming into interface #2 from a second DB server will be additive - or whether the TCP bottleneck you describe is at the kernel level, and the sum will still not exceed 10Gbit, no matter what.
We configured a second 10Gbit bond on the server - but have been unable to get both to work simultaneously - let alone additively. Each has its own IP - but only one seems to serve requests to either IP at a time.
I saw discussions about "arp_filter" and the kernel making dynamic choices as to which interface to use - instead of the one explicitly requested. So at this point we are still researching how to get both interfaces to work concurrently to begin with.
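The arp_filter behaviour mentioned above is a kernel sysctl; a small sketch of the settings commonly suggested for making each NIC answer ARP only for its own addresses (exact values to suit your setup):

```shell
# Each interface answers ARP only for addresses it actually owns,
# so traffic to each IP arrives on the interface that holds it
sysctl -w net.ipv4.conf.all.arp_filter=1

# Stricter ARP behaviour often recommended alongside arp_filter
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# Add the same lines to /etc/sysctl.conf to persist across reboots
```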
Infiniband is definitely in our future - but for now this is just an attempt to squeeze more out of the current hardware.
We'll see what we discover. Thanks again.
Just wanted to report that we got this to work. We are able to Data Pump 20Gbit/sec into a flat file server through 2 x 10Gbit connections with 2 different IPs - from multiple client DBs.
The following were key -
(1) The recently released UEK3 with its mature pNFS
(2) Using different subnets for the 10Gbit interfaces to force Linux to keep transmissions flowing into the "door" they were pointed at
(3) Ensuring that the 10Gbit interface cards are placed on motherboard slots that can handle 10/20 Gbit/sec
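Point (2) above can be sketched as follows - interface names and subnets here are purely illustrative. With each 10Gbit NIC in its own subnet, the kernel has exactly one interface that can answer for each address:

```shell
# First 10Gbit NIC in subnet A, second in subnet B
ip addr add 10.10.1.10/24 dev p1p1
ip addr add 10.10.2.10/24 dev p2p1
ip link set p1p1 up
ip link set p2p1 up

# Clients on subnet A target 10.10.1.10, clients on subnet B target
# 10.10.2.10 - replies cannot leak out the "wrong" door, because only
# one interface has a route back to each client subnet
ip route show
```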
So - until switch bonding and Infiniband (the "best practice" approaches) become an option for us - this is the "simple and manual" approach.
The fundamental feature making this happen is Oracle Data Pump's ability to round-robin-write to multiple output directories.
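To illustrate that round-robin write - a hedged sketch only, where the directory objects, credentials and file names are illustrative. Each directory object points at a mount reached through a different 10Gbit interface, and the `%U` substitution plus `PARALLEL` spreads the dump files (and therefore the writes) across both:

```shell
# In SQL*Plus beforehand (paths are examples):
#   CREATE DIRECTORY dpdir1 AS '/nfs_a/dumps';   -- via first 10Gbit IP
#   CREATE DIRECTORY dpdir2 AS '/nfs_b/dumps';   -- via second 10Gbit IP

# Data Pump export writing alternately to both directory objects;
# the dir:file prefix syntax assigns each dump file set a directory
expdp system/<password> FULL=y PARALLEL=4 \
    DUMPFILE=dpdir1:exp_a_%U.dmp,dpdir2:exp_b_%U.dmp \
    LOGFILE=dpdir1:exp.log
```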
Thanks for everyone's tips and thoughts.