I'm looking for some real-life (dev, test, or production) examples of how a full rack is used. I'm looking for information in terms of number of distinct databases, RAC or not and how many nodes, SGA sizes, admin vs. policy managed, etc.
We need to consolidate upwards of 16 databases onto a full rack. Many are currently 2-node RAC for HA purposes. Some have pretty small resource requirements. The largest SGA is 45G and they shrink from there. Initially we are not looking at any consolidation at the database level (combining databases).
I struggle with the amount of memory available across all the servers. Each compute node has 256G of RAM. If we put our 3 largest databases on a single compute node we would still have over 100G left over (before PGA, non-DB processes, etc.). This leaves an enormous amount of memory available across the rest of the full rack. In practice we wouldn't do this, but let's say we spread them out: then a single compute node might need in the vicinity of 80G at most, and probably a lot less. What do you do with the other 170+ GB? Or the 200+ GB on compute nodes that house smaller databases?
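The headroom arithmetic above can be sketched in a few lines. Only the 45G largest SGA comes from the description here; the other two SGA sizes are hypothetical, chosen just to illustrate the leftover-memory question.

```python
# Back-of-the-envelope headroom check for one 256G compute node.
# Only the 45G figure comes from the post; the other two SGA sizes are assumed.
node_ram_gb = 256
largest_sgas_gb = [45, 35, 25]  # 45G is the largest SGA; 35/25 are hypothetical

leftover_gb = node_ram_gb - sum(largest_sgas_gb)
print(f"{leftover_gb} GB free before PGA and OS overhead")  # 151 GB
```

Even stacking the three biggest databases on one node leaves well over 100G unused before PGA and OS overhead, which is exactly the underutilization being asked about.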
How do you have your databases laid out? Multiple singletons spread out? 2-3-node clusters here and there for the bigger/more important things? All the single DBs as RAC One Node? Or big 8-instance DBs?
Are you actively trying to spread the load across all nodes, or, as long as you meet requirements, are you just leaving some things underutilized?
I'm looking for some real-life (dev, test, or production) examples of how a full rack is used.
>> In real life, we keep separate racks for prod and non-prod.
I'm looking for information in terms of number of distinct databases, RAC or not and how many nodes, SGA sizes, admin vs. policy managed, etc.
>> Can you please specify the model? X2-2/X2-8 or X3-2/X3-8? The reason: an X2-8/X3-8 full rack has 2 compute nodes, whereas an X2-2/X3-2 full rack has 8 compute nodes. Since you mention 256GB of physical memory per compute node, I assume you are referring to an X2-2 or X3-2 full rack.
For the rest of your query:
>> As it is a full rack, total physical memory will be 2TB. You are planning to host 16 databases, and the RAC ones are only 2-node.
In that case, you can plan for the databases that need 45GB of SGA to be paired across compute nodes 01/02, 03/04, and 05/06.
You may use compute nodes 07/08 for your non-RAC databases, and plan the remaining RAC databases across nodes 01-06, as resources will still be free there.
If you can tell us how many RAC and non-RAC databases you are planning on the full rack, along with the SGA size of each, that will give us more clarity.
I can share some general guidelines that I normally use when moving databases to Exadata. Feel free to add yours or suggest any improvements:
1) It's a good idea to consider the full rack as a hardware pool, i.e. a collection of hardware resources to host the databases.
2) Though it is rightly recommended to have different racks for dev, QA, and prod, many companies still use one full rack to host all of them. If that's the management decision, then so be it; Exadata does enable hosting this kind of mixed database estate. But keep in mind that in such a setup, conflicting HA and criticality requirements will always remain.
3) As RAC databases are involved, make sure the voting disks have a sufficient number of failgroups available.
4) It's very helpful and realistic to extrapolate resource requirements such as I/O, CPU, and memory before migrating to Exadata.
5) Always keep room for resource growth and for other requirements such as rolling upgrades or a future conversion of a single instance to RAC.
6) It's also a good idea to keep 25% of resources spare, just in case.
7) Space becomes an important factor if you plan to go from normal to high redundancy in ASM.
8) In a mixed-database consolidation, it's imperative to use DBRM and IORM to ensure that critical production databases always get priority and resources.
9) Oracle also recommends setting the number of shared memory segments (kernel.shmmni) greater than the number of databases. Oracle has recommendations for other parameters in MOS and on its site, including the following for database memory settings:
SUM over databases of (SGA_TARGET + PGA_AGGREGATE_TARGET) + 4 MB * (Maximum PROCESSES) < Physical Memory per Database Node
SUM over databases of (SGA_TARGET + 3 * PGA_AGGREGATE_TARGET) < Physical Memory per Database Node
10) Monitoring and tweaking according to your requirements and load is the key. There is no one-size-fits-all formula.
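The two memory rules in point 9 can be sanity-checked per compute node with a small script. This is just a sketch: the database names, SGA/PGA sizes, and process counts below are hypothetical examples, not taken from the thread.

```python
# Sketch of the two Oracle memory guidelines from point 9, applied to one
# compute node. All database names and sizes here are hypothetical examples.

GB = 1024 ** 3
MB = 1024 ** 2
NODE_MEMORY = 256 * GB  # physical memory per X2-2/X3-2 compute node

# Per database instance on this node: (SGA_TARGET, PGA_AGGREGATE_TARGET, PROCESSES)
databases = {
    "dwh":  (45 * GB, 15 * GB, 1500),
    "oltp": (20 * GB,  8 * GB,  800),
    "app":  ( 8 * GB,  3 * GB,  400),
}

# Rule 1: SUM(SGA_TARGET + PGA_AGGREGATE_TARGET) + 4 MB * (maximum PROCESSES)
#         must stay below the node's physical memory.
rule1 = (sum(sga + pga for sga, pga, _ in databases.values())
         + 4 * MB * max(procs for _, _, procs in databases.values()))

# Rule 2: SUM(SGA_TARGET + 3 * PGA_AGGREGATE_TARGET) below physical memory.
rule2 = sum(sga + 3 * pga for sga, pga, _ in databases.values())

print(f"rule 1: {rule1 / GB:.1f} GB of {NODE_MEMORY // GB} GB, ok={rule1 < NODE_MEMORY}")
print(f"rule 2: {rule2 / GB:.1f} GB of {NODE_MEMORY // GB} GB, ok={rule2 < NODE_MEMORY}")
```

Running this kind of check for each node's planned instance placement makes it easy to see how much headroom each layout leaves before you commit to it.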