
    Oracle Cluster Private Interconnect

    ghd
      What are the different speeds and technologies that we can use for the Oracle private interconnect in RAC 11g?
        • 1. Re: Oracle Cluster Private Interconnect
          Osama_Mustafa
          Hi,

          Check these documents:

          RAC: Frequently Asked Questions [ID 220970.1]
          Cluster Interconnect in Oracle 10g and 11gR1 RAC [ID 787420.1]


          Thank you,
          Osama Mustafa
          • 2. Re: Oracle Cluster Private Interconnect
            Billy~Verreynne
            ghd wrote:
            What are the different speeds and technologies that we can use for the Oracle private interconnect in RAC 11g?
            The recommended technology (looking at what Oracle's Database Machine uses) is QDR (Quad Data Rate, 40Gb/s) InfiniBand, using RDS (Reliable Datagram Sockets) as the wire protocol. According to Oracle's testing, this provides 50% faster cache-to-cache block throughput with 50% less CPU time, in comparison to using UDP as the RAC interconnect wire protocol.
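
            As a quick sanity check, you can confirm which IPC protocol your 11g software is actually linked with, and which interfaces each instance registered as its private interconnect. The following is a minimal sketch of the idea, assuming an 11.2 ORACLE_HOME (where the skgxpinfo utility ships in $ORACLE_HOME/bin) and the cx_Oracle driver; the connect details are placeholders to replace with your own:

                # Sketch: report the RAC IPC protocol and the registered private NICs.
                # Assumes an 11.2 ORACLE_HOME (skgxpinfo ships in $ORACLE_HOME/bin)
                # and cx_Oracle; connection details below are placeholders.
                import os
                import subprocess
                import cx_Oracle

                # skgxpinfo prints the IPC protocol the oracle binary is linked
                # with, e.g. "udp" or "rds". Switching to RDS is a relink:
                #   cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk ipc_rds ioracle
                proto = subprocess.check_output(
                    [os.path.join(os.environ["ORACLE_HOME"], "bin", "skgxpinfo")]
                ).decode().strip()
                print("IPC protocol:", proto)

                # GV$CLUSTER_INTERCONNECTS lists the private interfaces each
                # instance registered at startup.
                conn = cx_Oracle.connect("system", "manager", "rac-scan/orcl")  # placeholders
                cur = conn.cursor()
                cur.execute("SELECT inst_id, name, ip_address, source "
                            "FROM gv$cluster_interconnects")
                for inst_id, name, ip, source in cur:
                    print("instance %s: %s %s (%s)" % (inst_id, name, ip, source))
                conn.close()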

            Oracle presented these results to the InfiniBand/OFED members in a presentation called Oracle’s Next-Generation Interconnect Protocol (PDF).

            The InfiniBand roadmap shows that NDR (Next Data Rate) will scale to 320Gb/s.

            There is absolutely nothing I have seen from the Ethernet vendors that shows GigE matching InfiniBand.

            From the Top500 list of the biggest and fastest 500 clusters on this planet, InfiniBand has a 41.8% market share, compared with GigE's 41.4% share.

            Compare this to 2005 (when we first got InfiniBand for RAC): back then InfiniBand had a 3.2% market share and GigE a 42.8% share. So there has been incredible growth in the use of InfiniBand as interconnect, while GigE has stagnated and now sits in 2nd place among Top500 interconnect family architectures.

            What is needed for using InfiniBand for Oracle RAC? An HCA (Host Channel Adapter) card for each RAC server (high-speed PCIe cards, dual port), an InfiniBand switch (2 ports per RAC server needed), and cables of course. All of these are sold by most server h/w vendors. Costs are quite comparable to 10Gb/s Ethernet (and even cheaper) in my experience.
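
            And once the HCAs and switch are cabled, it is worth confirming that every port is Active at the expected QDR rate before handing the interfaces to the clusterware. A rough sketch of my own, parsing the output of the stock OFED ibstat tool (re-check the field labels against your build):

                # Sketch: verify each InfiniBand HCA port is Active at QDR (40Gb/s).
                # Parses the OFED "ibstat" utility's output; ibstat must be on PATH.
                import subprocess

                EXPECTED_RATE = 40  # QDR; use 20 for DDR, 56 for FDR fabrics

                port = state = None
                for line in subprocess.check_output(["ibstat"]).decode().splitlines():
                    line = line.strip()
                    if line.startswith("Port ") and line.endswith(":"):
                        port = line.rstrip(":")
                    elif line.startswith("State:"):
                        state = line.split(":", 1)[1].strip()
                    elif line.startswith("Rate:"):
                        rate = int(line.split(":", 1)[1].strip())
                        ok = (state == "Active" and rate >= EXPECTED_RATE)
                        print("%s: state=%s rate=%d %s"
                              % (port, state, rate, "OK" if ok else "CHECK"))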