It's for an X4-2 1/8 rack machine.
If you are using RAC for your production (across both compute nodes), you can't segregate your Exadata 1/8th rack at the network level.
As far as I know, Oracle doesn't support several clusterware installations running on the same machine.
The "exotic" solution for you is to run production as RAC-"One node" on a one db node and test/uat on another.
This way you'll be able to configure each db node on different public network but you will loose redundancy provided by RAC.
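Just to illustrate the idea - the names and paths below are made up, and the exact srvctl flags depend on your GI/DB release (this is 12c-style syntax):

    # Register prod as RAC One Node, pinned to a single candidate server
    # (db name PROD, the Oracle home path and node name dbnode1 are examples)
    srvctl add database -db PROD \
      -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
      -dbtype RACONENODE -server dbnode1
    # Confirm the registration
    srvctl config database -db PROD

RAC One Node can normally relocate online to another candidate server, but adding the second node as a candidate would undo the network separation, so you'd keep the candidate list to one node.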
Either way, you still need to share the storage layer between prod and test/uat, because 3 cells are the minimum configuration for storage redundancy.
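To make the shared-storage point concrete: each db node's /etc/oracle/cell/network-config/cellip.ora lists the cells its ASM/database instances can reach over InfiniBand, and in a 1/8th rack both nodes have to keep all three cells listed. A simplified sketch - the IPs are examples, and depending on release/IB config each line may carry two addresses separated by a semicolon:

    # /etc/oracle/cell/network-config/cellip.ora -- identical on both db nodes,
    # since all three cells must stay visible to each node
    cell="192.168.10.1"
    cell="192.168.10.2"
    cell="192.168.10.3"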
The approach you are trying to implement is usually used on larger Exadata configurations (full racks), because there are enough db nodes and cells available to build several RAC clusters.
Do you have any MOS note ID that supports the above statement?
This would be a pretty extreme solution for an X4-2 1/8 rack machine. You have only 2 db nodes, and I don't think it is possible to make a stable configuration on those 2 nodes.
There is no way you can do this, as there is only one DBFS_DG where the OCR and voting files will reside, so you cannot set up two CRS stacks following any standard Exadata procedure.
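You can see this on any standard deployment; as the grid infrastructure owner:

    # Voting files -- on a standard Exadata install these sit in +DBFS_DG
    crsctl query css votedisk
    # OCR location and integrity (run as root for the full logical check)
    ocrcheck

Since there is only one DBFS_DG, there is nothing left over to give a second CRS stack its own OCR/vote storage.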
We did this at a company I worked for previously, using an Exadata 1/2 rack. We essentially divided the 1/2 rack into two 1/4 racks - but giving one cluster the extra storage cell. So these were two physically different clusters: the first cluster had 2 compute nodes and 4 storage cells, and the other 2 compute nodes and 3 storage cells were in the second cluster. This also creates completely separate grid disks, i.e. each cluster has its own DATA, RECO and DBFS_DG disk groups, so the point above is not an issue. The network is completely isolated as well, e.g. the private InfiniBand network, client network, etc. are all separate for the two clusters - similar to what you are wanting.
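For anyone wondering what the storage side of such a split looks like, it boils down to creating grid disks with per-cluster prefixes on each cell. A rough sketch - the prefixes and sizes are made up, run from CellCLI on the cells belonging to each cluster:

    # On the 4 cells of cluster 1
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATAC1, size=400G
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=RECOC1, size=100G
    # On the 3 cells of cluster 2
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATAC2, size=400G
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=RECOC2, size=100G

Each cluster's ASM then builds its disk groups only from the grid disks carrying its own prefix, which is why the DBFS_DG objection above doesn't apply to this layout.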
However, I don't see any good way of doing this for a 1/8 rack, because you only have 2 compute nodes and 3 storage cells to begin with - there's not enough hardware to split while keeping the HA and performance features.
I have seen a setup where a 1/2 rack was split into a 3 db node (prd) / 1 db node (tst) configuration with separate CRS setups.
The 7 cells were actually configured with 6 disk groups: three for prd and three for tst.
The tst db node was configured with its own CRS in the same way as the three prd nodes.
So if you don't need HA, only have to segregate at the db node network level, and can share access to the cells on the private network, this could be your solution.
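If you want the shared cells to actually enforce that separation (rather than relying on each cluster only mounting its own disk groups), ASM-scoped security is the usual Exadata mechanism. Rough sketch only - the key value and names are placeholders, and the exact CellCLI grammar varies by release:

    # On each cell: generate a key and restrict the prd grid disks
    CellCLI> CREATE KEY
    CellCLI> ASSIGN KEY FOR '+asm_prd'='66e12adb...'
    CellCLI> ALTER GRIDDISK DATAPRD_CD_00_cell01 availableTo='+asm_prd'
    # ...repeated for every prd grid disk, and likewise for tst

    # On the prd db nodes, a matching key goes into
    # /etc/oracle/cell/network-config/cellkey.ora, so each cluster
    # only sees the grid disks assigned to it.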