We upgraded a Sun Fire X4150 server from Solaris 10 to 11.1, and everything works perfectly.
During the installation we removed one of the two disks in the RAID 1 mirror to preserve the old Solaris 10 operating system in case the Solaris 11 installation failed.
The RAID controller disk shown by the format command is:
5. c7t0d0 <Sun-STK RAID INT-V1.0-68.25GB>
Now we wish to reinsert the removed disk and mirror the running new disk back onto it.
In short, the disk with the new Solaris 11.1 is fine, and we wish to mirror it onto the old disk that still holds the old copy of Solaris 10.
I'm afraid that when the disk is reinserted, the mirroring could run in the wrong direction and destroy the newly installed disk.
Can you help me, please?
If the system is now running the Solaris 11.1 release, I don't see how it would start using the disk with the Solaris 10 release. As a preventive step in case something goes wrong, make sure the default boot device is the disk with the Solaris 11.1 release (in case the system reboots accidentally).
I would attach the disk with the older Solaris 10 release. Does it contain a ZFS root file system or a UFS file system? I guess it doesn't matter, but it's probably best to wipe it before you add it as a mirrored root pool disk. I would just create a dummy pool on it and then destroy it:
# zpool create -f test c0t0d0
# zpool destroy test
If this is an x4150, then the new root pool disk should have an EFI disk label, like this:
# zpool status rpool
  scan: resilvered 32.1M in 0h0m with 0 errors on Wed May  8 13:13:15 2013

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c8t2d0  ONLINE       0     0     0

errors: No known data errors
Your Solaris 10 disk probably has an SMI (VTOC) label, but you can just attach it and the right disk label will be applied automatically, like this:
# zpool attach rpool c8t2d0 c0t0d0
Make sure to wait until the resilver is done before rebooting.
You will see a (new) FMA message that the disk is DEGRADED, but that is only because it is still resilvering the data.
Yes, you can be sure in this case. The only caution is that if the system BIOS still has the Solaris 10 disk as the default boot device and the system reboots, it will boot back into the Solaris 10 release. Some customers like to have Solaris 11 on one disk and Solaris 10 on another disk for a transition period. As long as each OS points to the same non-root (data) file systems in its /etc/vfstab, for example, all should work well.
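To illustrate that last point, a shared data file system could be mounted from the same /etc/vfstab entry under both releases. A sketch of such an entry (the device names and mount point here are hypothetical, not taken from this system):

```
#device             device              mount   FS    fsck  mount    mount
#to mount           to fsck             point   type  pass  at boot  options
/dev/dsk/c0t1d0s0   /dev/rdsk/c0t1d0s0  /data   ufs   2     yes      -
```

If the entry is identical in both copies of /etc/vfstab, the data file system is reachable no matter which OS disk the system booted from.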
Thanks for your answer. One last question:
If we don't have any utility on the running (Solaris 11) system, how can we check the progress of the re-sync after the disk is reinserted?
I tried the "raidctl" CLI, but it doesn't work.
How can I be sure if and when the re-sync operation has successfully completed?
# zpool status rpool
status: One or more devices is currently being resilvered. The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Tue May 21 08:52:21 2013
        200M scanned out of 41.2G at 11.1M/s, 1h3m to go
        199M resilvered, 0.47% done
Thanks for your answer, but how can I be sure the resilver has completed?
There is no CLI utility on Solaris 11 to inspect Sun-STK RAID INT-V1.0 activity; "raidctl" doesn't seem to work on Solaris 11.
Do you have any idea?
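ZFS does its own resilvering in software, so raidctl (which talks to hardware RAID volumes on the controller) has nothing to report here; "zpool status" is the tool to watch. A minimal sketch of an automated check, matching on the text of the "scan:" line shown earlier in this thread (the sample line is hard-coded below for illustration; on the live system you would take it from `zpool status rpool`):

```shell
# Classify the 'scan:' line of zpool status output.
# Sample line hard-coded from this thread; on a live system use:
#   scan_line=$(zpool status rpool | grep 'scan:')
scan_line="scan: resilvered 32.1M in 0h0m with 0 errors on Wed May  8 13:13:15 2013"

case "$scan_line" in
  *"resilver in progress"*) echo "still resilvering" ;;
  *"resilvered"*)           echo "resilver complete" ;;
  *)                        echo "no resilver recorded" ;;
esac
```

When the line says "resilvered ... with 0 errors on <date>", the re-sync has finished cleanly; while it still says "resilver in progress", keep waiting before rebooting.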