Nik is correct, but to give some context around why you're getting the issue: EFI labels are not supported on root pools. If you do not supply the slice number when you create a zpool or attach a vdev, ZFS will use the entire disk and apply an EFI label to the device. By specifying a slice, you're telling ZFS to keep the current SMI/VTOC label (which must be correct before you use the disk). ZFS won't copy the label from other disks within the pool.
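To make the distinction concrete, here is a minimal sketch (using the hypothetical device names c0d0/c0d1 from this thread) of how the device name you pass decides which label ZFS uses:

```shell
# Inspect the current label on the new disk. An SMI/VTOC label reports
# cylinder-based geometry; an EFI-labeled disk instead shows a
# sector-based layout that includes a reserved slice 8.
prtvtoc /dev/rdsk/c0d1s2

# Slice given -> ZFS keeps the existing SMI/VTOC label (valid for rpool):
zpool attach rpool c0d0s0 c0d1s0

# Whole disk given -> ZFS writes an EFI label (rejected on root pools):
zpool attach rpool c0d0 c0d1
```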
I applied all the steps mentioned in this thread and am getting the error below while mirroring the ZFS root pool.
ggntestmirr: /> zpool attach -f rpool c0d0 c0d1
cannot label 'c0d1': EFI labeled devices are not supported on root pools.
ggntestmirr: /> zpool attach -f rpool c0d0s0 c0d1s0
cannot attach c0d1s0 to c0d0s0: new device must be a single disk
  NAME      STATE     READ WRITE CKSUM
  rpool     ONLINE       0     0     0
    c0d0s0  ONLINE       0     0     0
errors: No known data errors
ggntestmirr: /> format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
1. c0d1 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
Specify disk (enter its number):
As per Nik's Step 3 you MUST specify slice 0 (s0). By specifying s0 you tell ZFS to use the SMI label instead of the EFI label that gets applied when you provide the entire disk. Look at your 'zpool status -v' output and you'll see that it shows "c0d0s0".
Because you issued "zpool attach -f rpool c0d0 c0d1" first, ZFS would have put an EFI label on the c0d1 disk which is why the second command failed. Copy the SMI label from c0d0 to c0d1 and use "zpool attach -f rpool c0d0s0 c0d1s0" instead.
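A sketch of that label copy, assuming (as in your format output) that c0d0 is the existing rpool disk and c0d1 the new mirror with identical geometry:

```shell
# c0d1 likely carries an EFI label now (from the failed whole-disk attach),
# so relabel it to SMI first in expert mode:
#   format -e c0d1  ->  label  ->  choose "0. SMI label"

# Replicate c0d0's VTOC/partition table onto c0d1 via the backup slice (s2):
prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c0d1s2

# Attach by slice so ZFS keeps the SMI label:
zpool attach -f rpool c0d0s0 c0d1s0

# After the resilver completes, remember to install the boot loader on the
# new disk as well (installgrub on x86, installboot on SPARC).
```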
As you're using a virtual environment (LDOMs??) this could very well be Bug ID 6852962 - "zpool attach on root pool in a guest LDOM fails with cannot attach new device must be a single disk". The bug is fixed in Kernel Patch 142909-17 or higher. What Kernel rev does your virtual environment have? (uname -a)
Kernel patch 141444-09 is older than 142909-17, where the issue is fixed. Try patching the system using the Solaris Patchset downloadable from My Oracle Support and see whether the issue persists after patching. See Doc ID 1273718.1 for step-by-step instructions on locating the Patchset.
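For reference, on Solaris 10 you can check the running kernel patch revision and whether 142909-17 (or later) is already installed like this:

```shell
# The kernel patch revision is the trailing field of `uname -v`
# (it also appears in `uname -a` output):
uname -v

# List installed patches and look for the fix:
showrev -p | grep 142909
```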