Was running with rpool on a hardware mirror. One of the disks failed and we're unable to obtain a suitable replacement in a timely fashion. I attached another disk to the controller with the plan to add it to rpool, wait for resilvering, and then destroy the original hardware mirror and use the reclaimed disk in rpool, basically going from hardware mirroring to ZFS mirroring.
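In outline, what I have in mind is just the standard attach/wait/detach sequence (device names as they show up on my system):

zpool attach rpool c2t0d0s0 c2t19d0s0
(wait for the resilver to finish, watching zpool status rpool)
zpool detach rpool c2t0d0s0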
So I ran format and added a Solaris partition to the new disk. Then I ran:
zpool attach rpool c2t0d0s0 c2t19d0s0
cannot open '/dev/dsk/c2t19d0s0': I/O error
However, 'zpool attach rpool c2t0d0s0 c2t19d0p0' works fine.
I understand that in order to boot from the new disk, I need to install GRUB. GRUB installs fine if I use c2t19d0s0, but not c2t19d0p0.
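For reference, the GRUB install I'm attempting is the usual installgrub invocation against the raw device, i.e. something like:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t19d0s0

and the same with c2t19d0p0 for the p0 case.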
I guess I'm just trying to understand what I'm working with here, and whether I'll end up with a bootable system this way, as I intend to remove c2t0d0s0 from rpool and recreate it with a different underlying hardware configuration.
root@hermes:~# zpool attach rpool c2t0d0s0 c2t19d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t19d0s0 overlaps with /dev/dsk/c2t19d0s2
root@hermes:~# zpool attach -f rpool c2t0d0s0 c2t19d0s0
Make sure to wait until resilver is done before rebooting.
root@hermes:~# zpool status -v rpool
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Nov 3 16:06:29 2012
11.9M scanned out of 24.0G at 871K/s, 8h0m to go
11.9M resilvered, 0.05% done
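(While that runs I'm just polling the scan line rather than re-running the full status by hand:

while :; do zpool status rpool | grep scan:; sleep 60; done
)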
So, is the message "/dev/dsk/c2t19d0s0 overlaps with /dev/dsk/c2t19d0s2" normal? I looked at the partition table for the current rpool disk, and aside from the disk being a different (smaller) size, the table was the same:
Current partition table (original):
Total disk cylinders available: 19327 + 2 (reserved cylinders)
It's a long-standing convention in Solaris that slice 2 is defined to be the whole disk (or, in the case of x86, the whole Solaris partition), so it's normal to have overlaps between s2 and other slices. If you really wanted to use the entire space as part of the ZFS pool, you could have just used s2 without defining s0 and achieved the same result.
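You can see this directly with prtvtoc: slice 2 carries the "backup" tag (tag 5) and spans the same sectors the data slices sit inside, which is exactly why the attach warned about the overlap:

prtvtoc /dev/rdsk/c2t19d0s2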
With 11.1 and later we recommend using GPT labeling instead, which gets rid of the two layers of partitions and slices.
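With GPT labeling you hand ZFS the whole-disk name and skip the slice bookkeeping entirely, so the new side of an attach has no s0 at all, e.g. (assuming the pool's existing device is still the s0 slice, as in your case):

zpool attach rpool c2t0d0s0 c2t19d0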
The long-standing requirement that you boot from a disk slice makes the disk replacement process more complicated for root pools, and the available disk partitions/slices also complicate just using a whole disk for a ZFS root pool.
The p* devices should not be used in a ZFS storage pool.
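If a p* device has already been attached, back it out and re-attach the slice once the label is sorted out, e.g.:

zpool detach rpool c2t19d0p0
zpool attach rpool c2t0d0s0 c2t19d0s0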
You can read about the root pool requirements here:
Ok, thanks. I have another drive of the same nominal size that luckily is slightly larger. I need to verify that I can boot from the current rpool, but I think I have everything in order... unfortunately the two drives dedicated to rpool are internal to the system and as such not hot-swappable. All the data is on hot-swappable drives, thankfully...
Thanks for the help. Everything is back to optimal. I was able to boot from the replaced disk just fine, and added a second disk of the same size and attached it to rpool.
BTW, I tested a bit in a VM before doing this and noted that I didn't need to install GRUB on the second disk in order to boot from it after adding it to rpool and resilvering. I didn't see anything about this in the Oracle documentation, but other sources found via Google suggested installing GRUB. I tried it both ways and found no difference.
True. If you attach a new second disk to the root pool disk, then the boot info is applied automatically. If you use zpool replace to replace a root pool disk, then you still need to apply the boot blocks manually.
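So a root pool replacement on x86 is really a two-step operation, roughly like this (c2t5d0s0 is just a placeholder name for the new disk):

zpool replace rpool c2t0d0s0 c2t5d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t5d0s0

On SPARC the second step is installboot with the ZFS bootblk instead of installgrub.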
The ZFS Admin Guide describes the zpool attach behavior here, but it's not entirely clear that zpool replace does not do this, so I will add it: