All servers in a cluster should have the same hardware characteristics. Take either 2 servers with 2 sockets or 4 servers with 1 socket each. The most common setup is 2 servers, due to cost.
The more servers you have, the more complex your environment gets. With 4 nodes you will need 4x the RAM instead of 2x, 4-8 HBAs to the SAN instead of 2-4, 4-8 network cards, etc. Add up your hardware cost plus the additional management cost, and then decide if you really need this.
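The scaling of those line items can be made concrete with a rough tally. This is only a sketch: the unit prices below are hypothetical placeholders, not quotes, and the component counts are the ones mentioned above.

```python
# Rough per-cluster component tally for a 2-node vs 4-node SE RAC setup.
# Unit prices are placeholder assumptions -- substitute your vendor quotes.

def cluster_cost(nodes, ram_kits, hbas, nics, ram_price, hba_price, nic_price):
    """Cost of RAM kits, HBAs and NICs across all nodes in the cluster."""
    return nodes * (ram_kits * ram_price + hbas * hba_price + nics * nic_price)

# Hypothetical prices: RAM kit, dual-port HBA, NIC.
RAM, HBA, NIC = 500, 800, 300

two_node  = cluster_cost(2, ram_kits=1, hbas=2, nics=2,
                         ram_price=RAM, hba_price=HBA, nic_price=NIC)
four_node = cluster_cost(4, ram_kits=1, hbas=2, nics=2,
                         ram_price=RAM, hba_price=HBA, nic_price=NIC)

print(two_node, four_node)  # the 4-node layout doubles every line item
```

Doubling the node count doubles every per-node line item before you even price the extra switch ports and cables.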
What business requirement or problem are you trying to address using SE RAC?
Personally, I would not bother. It has some very serious restrictions in today's h/w environment, where a single server typically has 4 cores as the minimum.
Server nodes in a RAC enable you to scale server resources (adding more CPUs and memory for db processing) and to scale the I/O layer (adding additional I/O pipes to the storage network or array).
An artificially imposed cap on server resources (like a max of 4 cores per cluster) basically destroys any and all scalability provided by the RAC architecture.
Performance wise - performance will likely degrade, as it is more expensive (more overheads and moving parts) to run db processing across a 4-node cluster with 1 CPU each than to run the same db processing on a single 4-CPU/core server.
Cost wise - you need a high-speed Interconnect (it should be faster than the I/O fabric layer) as the cluster backbone. This requires dual ports for redundancy, which means a dual-port communication PCI card (Gig Ethernet, or preferably an HCA) per server, together with a switch (Gig Ethernet or InfiniBand), plus cables.
You also need an I/O fabric layer (HBA cards and a switch) and a storage server/SAN. If "cheaper" NAS is used, you still need to physically separate your I/O fabric layer from your Interconnect, as running both over 1Gb Ethernet is kind of stupid. That means another PCI card (HBA or Gig Ether) per server. Plus cables.
Switches are also needed in pairs for redundancy and high availability.
And what about the cost of the storage server(s)? Ideally 2 are needed for h/w redundancy and availability.
All this is not cheap. And for what? A silly max-4-core/cpu cluster that CANNOT scale and will likely perform slower than a single 4-core server!?
RAC is all about redundancy, high availability, and scalability. If these 3 are not ranked at the top of business requirements, I would not bother with RAC. Which makes SE RAC in my opinion a joke.
The sizing of a RAC environment depends entirely on the sizing requirements of your application.
* The number of users connected to your RAC through the various application components, and the data load, must be taken into account when sizing your hardware.
What's the sizing of your application?
The most common setup/deployment of RAC Standard Edition is a cluster with 2 nodes and 2 sockets per node.
Remember that in a cluster with 2 nodes, a single node must be able to support (in CPU, memory and I/O performance) the full load/users of your application, to provide HA in case of node failure.
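That failover rule can be sketched as a small calculation. This is only an illustration under stated assumptions: the peak-load figure is a hypothetical placeholder, and the function simply divides peak demand across surviving nodes.

```python
# Failover sizing sketch: in a 2-node RAC, each node must be able to
# carry the FULL application load alone if the other node fails.
# The peak demand below is a hypothetical example, not a real figure.

def per_node_capacity_needed(peak_load, nodes, tolerated_failures=1):
    """Capacity each node needs so the survivors can absorb peak load."""
    survivors = nodes - tolerated_failures
    if survivors < 1:
        raise ValueError("cluster cannot survive that many failures")
    return peak_load / survivors

peak_cpu_cores = 8  # hypothetical peak CPU demand of the application

print(per_node_capacity_needed(peak_cpu_cores, nodes=2))  # 8.0 -> each node sized for the full load
print(per_node_capacity_needed(peak_cpu_cores, nodes=4))  # ~2.67 -> load spreads over 3 survivors
```

With 2 nodes there is no load-spreading benefit for availability sizing: each node must be bought big enough to run everything by itself.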
Thanks for all replies!
The business requirement is a production line that reads bar codes from the database and writes back information for each product during the manufacturing process. So the need is high availability with minimum downtime, because otherwise the production line stands still.
First we evaluated using our virtualization cluster, which hosts a VM with a single instance. But then we would have to license all sockets within our virtualization cluster (VMware) - and that's a lot...
RAC SE was the second option: just two physical servers + storage.
Our third option is Oracle Fail Safe. As our default server environment is Windows Server 2012, Oracle Fail Safe is an option for us. This would be the cheapest way to get high availability with minimum downtime, because it can also be used with Oracle Standard Edition One, which would be enough for the production line software... Furthermore, only one server (the active node) has to be licensed.