6 Replies Latest reply: Dec 27, 2013 5:27 PM by user5749629

    assigning multiple IPs to multiple 10Gbit interfaces to increase performance

    user5749629

      Hello gurus,

       

      Based on previous advice on this forum (which I can no longer find) - we created a flat file server running UEK2 to use as a destination for Oracle exports.

       

      It has 2 filesystems mounted on 2 folders and a single 10Gbit NIC with a single IP. The client Oracle databases mount both folders over that single IP and run expdp with DUMPFILE=DIR1:FILE1,DIR2:FILE2 to spread the dump files across the 2 filesystems.
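
      Roughly, the client side looks like this (the directory names, paths and connect string below are just illustrative, not our exact setup):

        # On each client database, one Oracle directory object per NFS-mounted folder (run as a DBA):
        #   CREATE DIRECTORY dir1 AS '/mnt/flatfs1';
        #   CREATE DIRECTORY dir2 AS '/mnt/flatfs2';

        # Then export, spreading the dump files across both filesystems:
        expdp system@clientdb DUMPFILE=DIR1:file1.dmp,DIR2:file2.dmp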

       

      This works great - we get 8-9 Gbit incoming on the flat file server.

       

      But we would like to go higher - so we added a second 10Gbit NIC with a second IP - and tried to mount each filesystem on a separate IP - in the hope that the streams from the database servers would now get split across the 2 "doors".
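
      The idea, roughly (the server IPs and paths below are made up for illustration):

        # On the client database servers - point each NFS mount at a different server IP
        mount -t nfs 10.0.1.10:/export/fs1 /mnt/flatfs1    # should come in via 10Gbit NIC #1
        mount -t nfs 10.0.1.11:/export/fs2 /mnt/flatfs2    # hoped this would come in via 10Gbit NIC #2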

       

      Somehow the 2nd IP - while pingable - seems to route all traffic through the first interface.

       

      Maybe we didn't do something right. We are double checking - but thoughts are welcome.

       

      Does this sound like it should work? Allowing us to push up to 20Gbit into the flat file server?

       

      I guess part of the question is whether 8-9Gbit is a "per interface" limit or an "OS overall" limit.

       

      Thanks !

        • 1. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
          Billy~Verreynne

          Have serious doubts whether this will improve performance at all. A couple of issues come to mind.

           

          Can you configure an I/O load balancing algorithm for a bonded network interface, like you can with multipath and dual fibre channel HBA ports? I have not seen any such option for bonded network interfaces.

           

          TCP communication is serial. Not parallel. Packets are sequenced at destination and processed in sequence. So even if you send 500 packets near simultaneously, at the destination packet 1 needs to be processed before packet 2, and packet 2 before packet 3, and so on. So actual speed and latency are critical.

           

          Network infrastructure determines basic network performance and speed. Having 10 x 1GigE interfaces as a single bonded interface does not change the 1GigE network into a 10GigE network.

           

          QoS classes on routers determine routing priority and bandwidth. If enabled, 100 interfaces bonded for performance will mean diddly-squat.

           

          Networks are shared media. It may be 10Gb/s - but that does not mean your network application process is able to claim the entire 10Gb/s for itself.

           

          If you want serious network performance, with low latency, there is only one answer. And it is the same answer as for the NYSE where microsecond (1,000,000th of a second) latencies are critical.

           

          And the answer is Infiniband. HCA (dual port) cards provide QDR (quad data rate) speeds of 40Gb/s. Running into 2 redundantly configured Infiniband switches. You configure the dual HCA ports as network interfaces on Linux, and slap a bonded interface (for redundancy) on top. You can run IP (called IPoIB) over it. And to make that really fast, you link your network applications to SDP (Sockets Direct Protocol) - which eliminates all the IP stack call overheads.
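
          For illustration only - the interface name, addresses and library path below are assumptions, not a recipe - an IPoIB interface is configured like any other Linux network interface, and an existing sockets application can be pointed at SDP by preloading the SDP library:

            # /etc/sysconfig/network-scripts/ifcfg-ib0  (Oracle Linux / RHEL style IPoIB interface)
            DEVICE=ib0
            TYPE=InfiniBand
            BOOTPROTO=static
            IPADDR=192.168.100.10
            NETMASK=255.255.255.0
            ONBOOT=yes

            # Run an existing TCP sockets application over SDP by preloading libsdp
            LD_PRELOAD=/usr/lib64/libsdp.so my_network_app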

           

          For a file server - you can make that an intelligent storage server. So instead of NFS or using FTP/SCP, you share the raw disks of the server on the fabric layer of Infiniband. This enables other servers to map these as local SCSI devices and use the SCSI RDMA (Remote Direct Memory Access) Protocol (SRP) to read and write directly to that remote disk.

           

          Facts:

          - Infiniband is not expensive (may even be cheaper than 10GigE)

          - Infiniband is the #1 Interconnect technology amongst the 500 fastest supercomputer clusters in the world

          - Oracle uses Infiniband in their Exadata Database Machine

           

          So it is proven, robust and mature technology.

           

          For additional info:

          https://nysetechnologies.nyx.com/data-technology/data-fabric-6-0

          http://www.oracle.com/technetwork/server-storage/networking/documentation/o12-020-1653901.pdf

          • 2. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
            Catch-22

            A TCP/IP address is just a logical construct. By analogy, if you have a couple of mailing addresses, your mail won't arrive any faster. In order to utilize 2 network cards and have them work as a team to increase performance, you have to configure network bonding with link aggregation. You will also need a managed network switch to configure such an option. There are many configuration examples if you Google for RHEL link aggregation.
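
            A minimal sketch of what that looks like on RHEL/Oracle Linux, assuming an LACP-capable managed switch and illustrative interface names and addresses:

              # /etc/sysconfig/network-scripts/ifcfg-bond0
              DEVICE=bond0
              IPADDR=10.0.1.10
              NETMASK=255.255.255.0
              BOOTPROTO=none
              ONBOOT=yes
              # 802.3ad = LACP link aggregation; the switch ports must be configured to match
              BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"

              # /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for eth3)
              DEVICE=eth2
              MASTER=bond0
              SLAVE=yes
              BOOTPROTO=none
              ONBOOT=yes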

            • 3. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
              user5749629

              Thank you for the information. The key question is whether a 10Gbit stream coming into interface #1 from one DB server and a second 10Gbit stream coming into interface #2 from a second DB server will be additive - or whether the TCP bottleneck you describe is at the kernel level, so the sum will still not exceed 10Gbit no matter what.

               

              We configured a second 10Gbit bond on the server - but have been unable to get both to work simultaneously - let alone additively. Each has its own IP - but only one of them seems to serve requests to either IP at a time.

               

              I saw discussions about "arp_filter" and the kernel making dynamic choices as to which interface to use - instead of the one explicitly requested. So at this point we are still researching how to get both interfaces to work concurrently to begin with.
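
              For anyone searching later, these are roughly the sysctl knobs involved (a sketch of what we are experimenting with, not final settings):

                # /etc/sysctl.conf - stop a NIC from answering ARP for IPs it does not own
                net.ipv4.conf.all.arp_filter = 1
                net.ipv4.conf.all.arp_ignore = 1
                net.ipv4.conf.all.arp_announce = 2
                # apply with: sysctl -p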

               

              Infiniband is definitely in our future - but for now this is just an attempt to squeeze more out of the current hardware.

               

              We'll see what we discover. Thanks again.

              • 4. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
                user5749629

                Thank you for the suggestion. Still researching - it seems this requires a switch configuration that is not currently permitted for us - but it sounds like this would certainly have been the easiest quick solution. Thanks again.

                • 5. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
                  Catch-22

                  You can still configure network bonding to do load balancing, but this will only increase performance if you have multiple connections from different clients.
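
                  For example, the balance-alb bonding mode (mode 6) spreads traffic across the slave interfaces without requiring any switch configuration, although a single TCP connection still only uses one slave. A sketch with assumed interface names and addresses:

                    # /etc/sysconfig/network-scripts/ifcfg-bond0 - adaptive load balancing, no switch support needed
                    DEVICE=bond0
                    IPADDR=10.0.1.10
                    NETMASK=255.255.255.0
                    BOOTPROTO=none
                    ONBOOT=yes
                    BONDING_OPTS="mode=balance-alb miimon=100"

                    # the slave interfaces (e.g. eth2, eth3) are configured with MASTER=bond0 and SLAVE=yes as usual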

                  • 6. Re: assigning multiple IPs to multiple 10Gbit interfaces to increase performance
                    user5749629

                    Just wanted to report that we got this to work. We are able to Data Pump 20Gbit/sec into a flat file server through 2 x 10Gbit connections with 2 different IPs - from multiple client DBs.

                     

                    The following were key -

                     

                    (1) The recently released UEK3 with its mature pNFS

                    (2) Using different subnets for the 10Gbit interfaces to force Linux to keep transmissions flowing into the "door" they were pointed at (see the sketch after this list)

                    (3) Ensuring that the 10Gbit interface cards are placed on motherboard slots that can handle 10/20 Gbit/sec
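
                    Roughly, what this looks like (the addresses and paths below are illustrative, not our actual config):

                      # Flat file server: each 10Gbit NIC on its own subnet
                      #   eth4 -> 10.0.1.10/24, exporting /export/fs1
                      #   eth5 -> 10.0.2.10/24, exporting /export/fs2

                      # Client database servers: one NFS mount per server IP/subnet (NFS v4.1 for pNFS)
                      mount -t nfs -o vers=4.1 10.0.1.10:/export/fs1 /mnt/flatfs1
                      mount -t nfs -o vers=4.1 10.0.2.10:/export/fs2 /mnt/flatfs2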

                     

                    So - until switch bonding and Infiniband - the "best practice" approaches - become an option for us - this is the "simple and manual" way.

                     

                    The fundamental feature making this happen is Oracle Data Pump's ability to round-robin-write to multiple output directories.

                     

                    Thanks for everyone's tips and thoughts.