
T4-4 servers virtual disk service for OVM Server for SPARC

Hi All,

I have to install Solaris Cluster 4.1 in guest domains on two T4-4 servers, with a ZFS Storage 7320C appliance as the shared storage. There is also 10GbE connectivity between the cluster nodes and the shared storage. I will be configuring iSCSI LUNs.
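For reference, this is roughly how I plan to do the iSCSI target discovery on each node; the discovery address 192.0.2.50 is only a placeholder for the 7320C's 10GbE interface:

# 192.0.2.50 is a placeholder for the appliance's iSCSI target address
root@ebsprdb1 # iscsiadm add discovery-address 192.0.2.50:3260
root@ebsprdb1 # iscsiadm modify discovery --sendtargets enable
root@ebsprdb1 # devfsadm -i iscsi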

After configuring the virtual network switches on the 1GbE and 10GbE interfaces, the dladm output is as follows:

LINK    ZONE     MEDIA     STATE     SPEED   DUPLEX    DEVICE
net1    global   Ethernet  unknown   0       unknown   igb1
net9    global   Ethernet  unknown   0       unknown   nxge5
net0    global   Ethernet  up        1000    full      igb0
net7    global   Ethernet  up        10000   full      nxge3
net10   global   Ethernet  unknown   0       unknown   nxge6
net11   global   Ethernet  unknown   0       unknown   nxge7
net3    global   Ethernet  unknown   100     full      igb3
net8    global   Ethernet  unknown   0       unknown   nxge4
net5    global   Ethernet  unknown   0       unknown   nxge1
net4    global   Ethernet  unknown   0       unknown   nxge0
net2    global   Ethernet  up        1000    full      igb2
net6    global   Ethernet  up        10000   full      nxge2
net13   global   Ethernet  up        10      full      usbecm0
net14   global   Ethernet  up        1000    full      vsw0
net29   global   Ethernet  up        1000    full      vsw1
net31   global   Ethernet  up        0       unknown   vsw2
net17   global   Ethernet  up        100     full      vsw3
net18   global   Ethernet  up        10000   full      vsw4
net37   global   Ethernet  up        10000   full      vsw5
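(To confirm which physical interface backs each vsw device, I check the virtual switch definitions in the control domain; output not shown here:)

root@ebsprdb1 # ldm list-services primary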

Please correct me if I am wrong:

1. To configure IPMP groups over the 10GbE interfaces and the 1GbE interfaces in the control domain, I need to use the virtual switch (vsw) devices rather than the physical NICs, as in the sketch below.
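If that is right, the IPMP setup over two of the vsw devices would look something like this (net14/net29 and the address are only placeholders taken from my layout above):

# net14 and net29 are the two 1GbE vsw devices in my output; address is a placeholder
root@ebsprdb1 # ipadm create-ip net14
root@ebsprdb1 # ipadm create-ip net29
root@ebsprdb1 # ipadm create-ipmp -i net14 -i net29 ipmp0
root@ebsprdb1 # ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4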

2. Regarding the virtual disk devices for each LDom, I need to provision them from the ZFS storage. The guest domain on one T4-4 server will be clustered with the guest domain on the other T4-4 server. For OS provisioning of the guest domains, should I create the virtual disk backends from a ZFS volume (zvol), from file containers, or use a disk slice directly? Do the best practices recommend using file containers on each server?

Example:

root@ebsprdb1 # format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA03C4737F8d0 <HITACHI-H106060SDSUN600G-A2B0 cyl 64986 alt 2 hd 27 sec 668>  rootdisk
          /scsi_vhci/disk@g5000cca03c4737f8
          /dev/chassis//SYS/MB/HDD0/disk
       1. c0t5000CCA03C4A0E80d0 <HITACHI-H106060SDSUN600G-A2B0 cyl 64986 alt 2 hd 27 sec 668>  mirrdisk
          /scsi_vhci/disk@g5000cca03c4a0e80
          /dev/chassis//SYS/MB/HDD4/disk
       2. c0t600144F09742CEA900005137410C0001d0 <SUN-ZFS Storage 7320-1.0 cyl 8124 alt 2 hd 254 sec 254>
          /scsi_vhci/ssd@g600144f09742cea900005137410c0001
Specify disk (enter its number):

root@ebsprdb1 # zpool create vmprdb1pool c0t600144F09742CEA900005137410C0001d0
root@ebsprdb1 # zfs create -o mountpoint=/ldoms vmprdb1pool/ldoms

root@ebsprdb1 # mkdir /ldoms/disk-images
root@ebsprdb1 # mkfile 240G /ldoms/disk-images/vmprdb1.img
root@ebsprdb1 # ldm add-vdsdev /ldoms/disk-images/vmprdb1.img vol2@primary-vds0
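I assume the backend then gets attached to the guest with something like the following (the guest domain name vmprdb1 and the vdisk name are just my naming):

# vmprdb1 is the guest domain name in my setup
root@ebsprdb1 # ldm add-vdisk vdisk2 vol2@primary-vds0 vmprdb1

Alternatively, if a zvol backend is preferred over a file container, I suppose it would be (again with placeholder names, and a separate volume name so it does not clash with vol2):

root@ebsprdb1 # zfs create -V 240G vmprdb1pool/vmprdb1-vol
root@ebsprdb1 # ldm add-vdsdev /dev/zvol/dsk/vmprdb1pool/vmprdb1-vol vol3@primary-vds0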

3. I want to allocate a 5 GB LUN from the ZFS storage for the quorum device. I do not intend to use the quorum disk for user data; I hope that size is enough.
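Once that LUN shows up as a shared DID device on both nodes, I expect registering it is roughly this (the DID name d5 is only an example, to be confirmed from cldevice output):

# d5 is a placeholder; the real DID name comes from cldevice list
root@ebsprdb1 # cldevice list -v
root@ebsprdb1 # clquorum add d5
root@ebsprdb1 # clquorum status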

Regards
