Our customer has two T4-4 servers and we want to set up the following configuration:
- we want to use Solaris Cluster 3.3
- on each T4-4 server we want to have 2 guest LDOMs (application and database)
- we want all 4 guest LDOMs to be in one 4-node Solaris Cluster
- Is that configuration supported?
- when we configure quorum devices, do we have to turn off fencing for quorum devices?
I'm not 100% sure it is supported, but then I'm not sure it isn't supported either. It's certainly not a good idea because it doesn't buy you anything. Can the customer explain why they want to do this? What do they think they will achieve with such a configuration? If the customer needs to split the middle tier off from the apps tier, then using zone clusters inside the guest LDom cluster would be a better and supported way of achieving the separation they might be seeking.
One of the requirements is that the App must not be started (or try to start) if the DB is down. The customer plans to solve that by configuring two resource groups (App and DB) and then configuring a resource group dependency.
The customer also wants to avoid using two virtualization technologies; that's why they want to use LDOMs only.
OK, so RG Affinities is what they should use.
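For reference, a minimal sketch of how the dependency could be expressed with the Solaris Cluster 3.3 CLI. The resource group names (db-rg, app-rg) are examples, not from the thread:

```shell
# Create the two failover resource groups (names are hypothetical)
clresourcegroup create db-rg
clresourcegroup create app-rg

# Strong positive affinity with delegation: app-rg is only brought
# online on a node where db-rg is online, and follows it on failover
clresourcegroup set -p RG_affinities=++db-rg app-rg

# Alternatively, a startup-order dependency: db-rg must be brought
# online before app-rg is started
clresourcegroup set -p RG_dependencies=db-rg app-rg
```

Whether the affinity or the dependency (or both) fits best depends on whether the customer also wants the app to be co-located with, or evacuated together with, the database.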
Do they need isolation between the DB and app layers? If not, then simply put them in one single simple cluster. If they need isolation, then use zone clusters. Zones have a lower overhead than guest LDoms.
You also mentioned quorum devices in the first post - bear in mind that it is best practice to configure only one quorum device per cluster. Any more just lowers availability rather than increasing it.
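A sketch of configuring a single shared-disk quorum device, assuming the DID device name (d4) is just an example:

```shell
# Add one shared-disk quorum device for the whole cluster
clquorum add d4

# Verify the quorum configuration and current vote counts
clquorum status
```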
There is a limitation which states (http://docs.oracle.com/cd/E19316-01/820-4677/ggnwx/index.html):
Fencing – Do not export a storage LUN to more than one guest domain on the same physical machine, unless you also disable fencing for that device. Otherwise, if two different guest domains on the same machine both are visible to a device, the device will be fenced whenever one of the guest domains dies. The fencing of the device will panic any other guest domain that subsequently tries to access the device.
So I guess you'd have to disable fencing on all shared devices (not just the quorum) visible to more than one guest on the same machine.
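A sketch of disabling fencing per device, or globally, with the SC 3.3 CLI; the DID device name (d5) is an example:

```shell
# Disable fencing for one specific shared DID device
cldevice set -p default_fencing=nofencing d5

# Or turn fencing off cluster-wide (use with care - this weakens
# data protection for all shared devices)
cluster set -p global_fencing=nofencing

# Verify the fencing setting on the device
cldevice show d5
```

Per-device disabling keeps fencing active for devices that are only visible to one guest per machine, which seems preferable to the global switch here.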
This is one of the reasons why zone clusters might be better, but again, only with a two-node cluster implementation.
Tim, from the doc you quote, I would say the config with 2 guests on the same physical box as nodes of the same cluster is supported. If it were not, that statement would not make sense.
Other than that, I can only second the proposal to use zone clusters in the first place. If that is not what the customer wants, I would probably go with 4 guests, although that needs many more resources.
I agree that fencing should be turned off for all shared devices, not only the quorum device.
But I've also found a document:
That document describes the setup of a two-node LDOM cluster on one physical machine. The document states that we should leave global fencing on, and I also don't see a recommendation to switch off fencing for the quorum device.
Therefore I'm a little confused.
At the bottom of the document you cite, there is a disclaimer that states:
"This content is submitted by a BigAdmin user. It has not been reviewed for technical accuracy by Sun …"
As such, the failure to note the need to disable fencing may be an oversight, or simply a reflection of the state of knowledge at the time of writing.
I will double-check my recommendations with one of the SC engineers, but my previous posting relied on internal mail on the subject, so I believe what I said is correct.
I've just had confirmation from one of the Solaris Cluster engineers who is the expert in this area. He confirmed that you need to disable fencing on any disks shared by more than one guest LDom co-resident on a physical node. So the paper you found is incomplete in that respect.