2 Replies Latest reply on Jan 16, 2013 6:05 AM by Billy~Verreynne

    RAC configuration advice, please

      We have a .NET app with a peak of maybe 6000 sessions. We currently run EE 11.1 on RHEL5 with 32GB of memory. I've set up Data Guard for a standby, which I think makes more sense as cheap HA. But I'm a RAC newbie, so I'm biased :). My boss is considering going Oracle RAC, which is great, but of course (no surprise) it's a bit expensive. He wants to trim the cluster down to only two nodes with just 1 processor each to keep the costs down. The servers would each have 36GB of memory. Also, my boss wants to keep using SCSI disks; (8) 10k drives. I'm concerned that we're going to have issues.

      Doing some research, it looks like dual-processor systems would possibly be more stable, or maybe 3 single-processor servers in the cluster? I keep hoping we'll get a small storage array, since that looks easier to configure for ASM. I can't see wasting time trying to make a pig fly, and I want this to work, since I'll get the calls. Can someone share what they know/use for a decent "small" RAC config? Or is there a "don't do this" doc somewhere on the net to read?


        • 1. Re: RAC configuration advice, please
          Hemant K Chitale
          A 2-node RAC with 1 processor on each node... I very much doubt that this can support 6000 sessions.

          RAC and Data Guard are not "either/or"; they serve two different purposes. RAC is more about scalability (and less about HA), while Data Guard is about site failure and DR. However, DG can be set up so that a DG site/server is used as a reporting environment.
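          As a sketch of that reporting setup (assuming 11g with the Active Data Guard option, which is licensed separately), a physical standby can be opened read-only while redo apply continues:

          ```sql
          -- On the physical standby: stop redo apply, open read-only,
          -- then restart apply (the Active Data Guard option is required
          -- to keep applying redo while the database is open).
          ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
          ALTER DATABASE OPEN READ ONLY;
          ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
            USING CURRENT LOGFILE DISCONNECT FROM SESSION;
          ```

          Without the Active Data Guard option you can still open the standby read-only, but redo apply must stay cancelled while reports run.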

          You do need to license the processors/users on the DG instance.

          Hemant K Chitale
          • 2. Re: RAC configuration advice, please
            Billy~Verreynne
            The two most critical components for RAC are:
            a) Interconnect
            b) Shared Storage

            These determine fundamental scalability (and performance). It does not matter how many CPUs and GBs of RAM are thrown at the cluster.

            The interconnect should consist of 2 x 40Gb/QDR InfiniBand switches (or 2 x 10Gb Ethernet switches). Cost depends on the number of nodes in the cluster and the number of ports per switch (each cluster node has a dual-port card connecting it to both switches).
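            To illustrate (a sketch; the interface names and subnets below are hypothetical, not from the thread): once Clusterware is installed, you can check which networks it treats as public versus private interconnect with oifcfg:

            ```shell
            # List the interfaces registered with Oracle Clusterware.
            # Output format: <interface> <subnet> global <role>
            oifcfg getif
            # e.g.:
            #   eth0  192.168.1.0  global  public
            #   eth2  10.10.0.0    global  cluster_interconnect
            ```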

            Shared storage is relatively expensive when bought off-the-shelf. Rolling your own is complex, risky, and comes without vendor support.
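            As a sketch of why a small array is the easier route (the device paths below are placeholders, not from the thread): with the LUNs visible on all nodes, ASM-level mirroring takes one statement:

            ```sql
            -- Create a NORMAL redundancy disk group; ASM mirrors extents
            -- across the two failure groups. Paths are hypothetical.
            CREATE DISKGROUP data NORMAL REDUNDANCY
              FAILGROUP fg1 DISK '/dev/oracleasm/disks/DATA1'
              FAILGROUP fg2 DISK '/dev/oracleasm/disks/DATA2';
            ```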

            RAC is not cheap. It is not meant to be cheap. Cheap is a mickey-mouse, share-little-to-nothing MySQL-based cluster. It is like comparing a souped-up production car with an F1 GP car - very little to compare besides the fact that both have an engine, 4 tires, and a steering wheel.

            Going cheap RAC is shooting yourself in both feet. With a 12 gauge shotgun. Until only bloody stumps remain. ;-)

            This however does not mean that RAC is horrible expensive. A good option that compares very favourable to building a custom RAC with the right kit, is Oracle's Database Appliance. If you want to do cluster h/w certified for RAC at an excellent price, use it with little effort and no clustering experience, and basically wheel in the cluster and an hour later start using your database - then Database Appliance is IMO the best choice.