6 Replies Latest reply on Mar 15, 2013 7:01 PM by djwillia-Oracle

    RAC hardware requirements?

      Can anybody tell me what the hardware requirements are for a 2-instance Oracle 11g RAC?

      I have gathered the information below, please correct me if I am wrong.

      - Database server (where the database is installed)
      - Instance one server
      - Instance two server
      - Storage Server.

      Must instance one and instance two be the same?

      Thanks in Advance.
        • 1. Re: RAC hardware requirements?
          Paul M.
          Not sure I understand what you mean, but a 2-instance RAC requires two machines and shared storage, nothing else.
          • 2. Re: RAC hardware requirements?
            • 3. Re: RAC hardware requirements?
              You also need to check the required IP addresses:

              Public, Private, SCAN, and Virtual IPs
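
              As a rough sketch of what that address layout looks like for a 2-node cluster (all hostnames and addresses below are made-up examples; the SCAN name should normally resolve via DNS rather than live in /etc/hosts):

              ```
              # /etc/hosts sketch for a hypothetical 2-node 11g RAC (example addresses only)
              # Public network
              192.168.1.101   racnode1        # node 1 public IP
              192.168.1.102   racnode2        # node 2 public IP
              # Virtual IPs (same subnet as the public network, managed by Clusterware)
              192.168.1.111   racnode1-vip
              192.168.1.112   racnode2-vip
              # Private interconnect (dedicated, non-routed subnet)
              10.0.0.1        racnode1-priv
              10.0.0.2        racnode2-priv
              # The SCAN (e.g. rac-scan) is deliberately NOT listed here - define it in
              # DNS, resolving round-robin to three addresses on the public subnet.
              ```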
              • 4. Re: RAC hardware requirements?
                oracleRaj wrote:
                Can anybody tell me what the hardware requirements are for a 2-instance Oracle 11g RAC?

                I have gathered the information below, please correct me if I am wrong.

                - Database server (where the database is installed)
                - Instance one server
                - Instance two server
                - Storage Server.
                You need a shared storage system, such as a SAN. There are alternatives such as NAS. Storage should be redundant. It does not help to have an awesomely redundant RAC architecture if the data winds up on a single disk on a single storage server - as any storage error/failure will then mean the RAC also fails.
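
                One common way to get that storage-level redundancy with Oracle is ASM mirroring. A hedged sketch (disk paths and diskgroup/failgroup names are made-up; placing each array's LUNs in its own FAILGROUP is what lets ASM survive the loss of a whole array):

                ```shell
                # Run as the Grid/ASM owner against the ASM instance; paths/names are examples.
                sqlplus / as sysasm <<'EOF'
                CREATE DISKGROUP data NORMAL REDUNDANCY
                  FAILGROUP array1 DISK '/dev/mapper/array1_lun1', '/dev/mapper/array1_lun2'
                  FAILGROUP array2 DISK '/dev/mapper/array2_lun1', '/dev/mapper/array2_lun2';
                EOF
                ```
                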

                You need RAC database servers, aka RAC cluster nodes. Each node will run a database engine (called an instance). Each node needs to see the SAME storage used for the database. So RAC is a single physical database (on shared cluster storage), with many database instances (one instance per server node).
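
                You can see that "one database, one instance per node" shape on a running cluster; a sketch, assuming a database named orcl and nodes named racnode1/racnode2 (all names are examples):

                ```shell
                # Clusterware view: one database resource, one instance running per node
                srvctl status database -d orcl
                # Output shape (example): Instance orcl1 is running on node racnode1
                #                         Instance orcl2 is running on node racnode2

                # Database view: every instance that has the single shared database open
                sqlplus / as sysdba <<'EOF'
                SELECT inst_id, instance_name, host_name FROM gv$instance;
                EOF
                ```
                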

                These db server nodes obviously need to connect to the storage system. If it is a SAN, each server needs an HBA (Fibre Channel) PCI card and cables to connect to the SAN (typically to a SAN Fibre Channel switch). HBAs usually have 2 ports - which means each server has 2 I/O paths to the SAN. You would want to wire those to 2 different switches, so that a cable/port/switch failure will only result in losing 1 I/O path, thus providing redundancy.
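
                On Linux you can verify that both I/O paths to the SAN are actually present; a sketch (device names vary per system; this assumes the FC drivers and dm-multipath tools most distributions ship):

                ```shell
                # List the FC HBA ports the kernel sees (one hostN entry per HBA port)
                ls /sys/class/fc_host/
                cat /sys/class/fc_host/host*/port_state   # each active path shows "Online"

                # Show each LUN with its paths; a dual-path setup lists 2 paths per LUN
                multipath -ll
                ```
                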

                If NAS type storage is used, the db servers connect via IP to the storage server. You should have a dedicated and private high-speed network between the db servers and storage server(s). It is a bad idea to use a public/office network - where someone surfing YouTube causes a huge I/O performance drop for your RAC. This private network needs its own private switches. And as with the SAN setup, you would want 2 NICs wired to 2 switches and then to the storage server(s) for redundancy. A cable/port/switch failure and the RAC will not even notice.
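
                The usual Linux way to make those 2 NICs behave as one redundant link is bonding; a config sketch in RHEL-style ifcfg files (interface names and addresses are made-up examples):

                ```
                # /etc/sysconfig/network-scripts/ifcfg-bond0  (storage network, example IP)
                DEVICE=bond0
                IPADDR=10.1.0.1
                NETMASK=255.255.255.0
                ONBOOT=yes
                BONDING_OPTS="mode=active-backup miimon=100"  # failover mode; link check every 100ms

                # /etc/sysconfig/network-scripts/ifcfg-eth2  (first slave, wired to switch A)
                DEVICE=eth2
                MASTER=bond0
                SLAVE=yes
                ONBOOT=yes
                # ifcfg-eth3 is identical but DEVICE=eth3, wired to switch B
                ```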

                The last component needed is a high-speed and low latency private cluster communication backbone, called the Interconnect. Each database server node needs to connect to this Interconnect. And no surprise there, you also want dual port cards per node, and dual switches, for the Interconnect - so that a cable/port/switch failure will not impact the cluster.
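
                Once Grid Infrastructure is installed, you can check which networks the cluster actually registered as public and as Interconnect; a sketch (interface names and subnets are examples):

                ```shell
                # Show the cluster's registered networks (run from the Grid home)
                oifcfg getif
                # Output shape (example): eth0  192.168.1.0  global  public
                #                         eth1  10.0.0.0     global  cluster_interconnect
                ```
                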

                The recommended Interconnect is Infiniband QDR (Quad Data Rate, at 40Gb/s). This is what Oracle uses for the Exadata Database Machine. It uses an HCA PCI card (dual ports) per server, and needs Infiniband cables and switches. This is not very expensive and can be cheaper than high-speed Ethernet.

                The alternative to Infiniband is high-speed Ethernet, aka 10 Gigabit Ethernet (10GbE). This is more common generally - but it is also not the primary choice for clusters. The Interconnect architecture most used by the 500 fastest and biggest computer clusters in the world is Infiniband.

                As you need an Interconnect, plus a second high-speed private network as the I/O fabric layer when using NAS, you can consider combining the two into a single high-speed network.

                Infiniband is ideal for this as it has a number of specialised protocols specifically to support it. RDS (Reliable Datagram Sockets) is an Interconnect protocol developed by Oracle specifically for RAC over Infiniband. RDMA (Remote Direct Memory Access) is a protocol for "sharing server memory" over Infiniband. SRP is the SCSI RDMA Protocol - running a SCSI interface over RDMA.
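
                On Linux you can check whether the RDS modules are even available before planning the Interconnect around them; a sketch (module names are per the kernel/OFED Infiniband stack):

                ```shell
                # Describe the RDS module if the kernel provides it
                modinfo rds

                # Show whether rds / rds_rdma are currently loaded
                lsmod | grep rds
                ```
                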

                RAC is not designed to run on a couple of desktops with a simplistic NFS-type server over a 1Gb public network that also serves as the Interconnect and I/O fabric layers. Yes, this can be done. But do not expect RAC's awesome performance, scalability, redundancy and high availability when running RAC on poor hardware and a poor network architecture.
                • 5. Re: RAC hardware requirements?
                  Please refer to the links below...



                  • 6. Re: RAC hardware requirements?
                    Hi --

                    You don't say what platform you want to install on, so I can't send you directly to it, but maybe that is just as well. There is an answer to all of these questions in the documentation set:


                    Keep in mind that you can't install Oracle RAC before you install Oracle Grid Infrastructure, and you can't install Oracle RAC on some other cluster software - so we assume what you really want to find out is the hardware required for your Oracle Grid Infrastructure installation. To find the answer to your specific question, go to the Oracle Grid Infrastructure install document for your platform. You will find checklists and chapters to help you answer your question.

                    Let us assume that you want to install on Linux. If that is the case, then here is a good place to start:


                    Hope that helps....