On my system, I have one disk that holds the root UFS file system, and I have inserted a new disk for the migration. Here is the format command output.

 

bash-3.2# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

       0. c0t0d0 <DEFAULT cyl 39159 alt 2 hd 255 sec 63>

          /pci@0,0/pci8086,2829@d/disk@0,0

       1. c0t2d0 <DEFAULT cyl 39160 alt 2 hd 255 sec 63>

          /pci@0,0/pci8086,2829@d/disk@2,0

Specify disk (enter its number):
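
Before going further, it may be worth confirming that the current root really is UFS and lives on c0t0d0s0. A minimal check (just a sketch; on Solaris 10, df -n prints the file system type of a mount point, and vfstab shows the configured root device):

     df -n /
     grep c0t0d0s0 /etc/vfstab

At this point df -n should still report ufs for /.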

 

Step 1 I need to create a partition on the new disk.

 

bash-3.2# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

       0. c0t0d0 <DEFAULT cyl 39159 alt 2 hd 255 sec 63>

          /pci@0,0/pci8086,2829@d/disk@0,0

       1. c0t2d0 <DEFAULT cyl 39160 alt 2 hd 255 sec 63>

          /pci@0,0/pci8086,2829@d/disk@2,0

Specify disk (enter its number): 1

selecting c0t2d0

[disk formatted]

FORMAT MENU:

        disk       - select a disk

        type       - select (define) a disk type

        partition  - select (define) a partition table

        current    - describe the current disk

        format     - format and analyze the disk

        fdisk      - run the fdisk program

        repair     - repair a defective sector

        label      - write label to the disk

        analyze    - surface analysis

        defect     - defect list management

        backup     - search for backup labels

        verify     - read and display labels

        save       - save new disk/partition definitions

        inquiry    - show vendor, product and revision

        volname    - set 8-character volume name

        !<cmd>     - execute <cmd>, then return

        quit

format> fdisk    

No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the

partition table.

y

format> p

PARTITION MENU:

        0 - change `0' partition

        1 - change `1' partition

        2 - change `2' partition

        3 - change `3' partition

        4 - change `4' partition

        5 - change `5' partition

        6 - change `6' partition

        7 - change `7' partition

        select - select a predefined table

        modify - modify a predefined partition table

        name - name the current table

        print - display the current table

        label - write partition map and label to the disk

        !<cmd> - execute <cmd>, then return

        quit

partition> p

Current partition table (original):

Total disk cylinders available: 39159 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 39158      299.97GB    (39159/0/0) 629089335
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0

 

partition> 0

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0

 

Enter partition id tag[unassigned]:

Enter partition permission flags[wm]:

Enter new starting cyl[0]:

Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: $

partition> l

Ready to label disk, continue? y

partition> q

format> q
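
The fdisk step above accepts a 100% Solaris partition, and entering $ for the slice size makes slice 0 span all available cylinders. If you want to double-check the new label without going back into format, prtvtoc can print it (a sketch):

     prtvtoc /dev/rdsk/c0t2d0s2

Slice 0 should now cover essentially the whole disk, with slice 2 (backup) describing the full disk as usual.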

 

Step 2 Now I need to create the rpool on the new disk.

 

bash-3.2# zpool create -f rpool c0t2d0s0

bash-3.2# zpool status

  pool: rpool

state: ONLINE

scan: none requested

config:

 

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t2d0s0  ONLINE       0     0     0

 

errors: No known data errors
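
Note that the pool is created on slice 0 (c0t2d0s0) rather than the whole disk; a ZFS root pool has to live on a slice of an SMI-labeled disk. A quick sanity check of the new pool (a sketch):

     zpool list rpool
     zfs list -r rpool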

 

Step 3 On my system, I don't have any boot environments configured yet, so I am going to create a new boot environment on rpool (the new disk).

 

bash-3.2# lustatus

ERROR: No boot environments are configured on this system

ERROR: cannot determine list of all boot environment names

 

bash-3.2# lucreate -c sol_stage1 -n sol_stage2 -p rpool

Determining types of file systems supported

Validating file system requests

Preparing logical storage devices

Preparing physical storage devices

Configuring physical storage devices

Configuring logical storage devices

Checking GRUB menu...

Analyzing system configuration.

No name for current boot environment.

Current boot environment is named <sol_stage1>.

Creating initial configuration for primary boot environment <sol_stage1>.

INFORMATION: No BEs are configured on this system.

The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.

PBE configuration successful: PBE name <sol_stage1> PBE Boot Device </dev/dsk/c0t0d0s0>.

Updating boot environment description database on all BEs.

Updating system configuration files.

The device </dev/dsk/c0t2d0s0> is not a root device for any boot environment; cannot get BE ID.

Creating configuration for boot environment <sol_stage2>.

Source boot environment is <sol_stage1>.

Creating file systems on boot environment <sol_stage2>.

Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/sol_stage2>.

Populating file systems on boot environment <sol_stage2>.

Analyzing zones.

Mounting ABE <sol_stage2>.

Generating file list.

Copying data from PBE <sol_stage1> to ABE <sol_stage2>.

49% of filenames transferred

100% of filenames transferred

Finalizing ABE.

Fixing zonepaths in ABE.

Unmounting ABE <sol_stage2>.

Fixing properties on ZFS datasets in ABE.

Reverting state of zones in PBE <sol_stage1>.

Making boot environment <sol_stage2> bootable.

Updating bootenv.rc on ABE <sol_stage2>.

File </boot/grub/menu.lst> propagation successful

Copied GRUB menu from PBE to ABE

No entry for BE <sol_stage2> in GRUB menu

Population of boot environment <sol_stage2> successful.

Creation of boot environment <sol_stage2> successful.

bash-3.2#
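
If you want to inspect which file systems the new boot environment is configured with before activating it, lufslist can be used (a sketch; it should list the rpool/ROOT/sol_stage2 dataset for /):

     lufslist sol_stage2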

 

Step 4 The new boot environment has been created, so I am going to activate it. On the next reboot, the OS will boot from the new boot environment.

 

bash-3.2# luactivate sol_stage2

Generating boot-sign, partition and slice information for PBE <sol_stage1>

A Live Upgrade Sync operation will be performed on startup of boot environment <sol_stage2>.

Generating boot-sign for ABE <sol_stage2>

NOTE: File </etc/bootsign> not found in top level dataset for BE <sol_stage2>

Generating partition and slice information for ABE <sol_stage2>

Boot menu exists.

Generating multiboot menu entries for PBE.

Generating multiboot menu entries for ABE.

Disabling splashimage

Re-enabling splashimage

No more bootadm entries. Deletion of bootadm entries is complete.

GRUB menu default setting is unaffected

Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you

reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You

MUST USE either the init or the shutdown command when you reboot. If you

do not use either init or shutdown, the system will not boot using the

target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process

needs to be followed to fallback to the currently working boot environment:

  1. Boot from the Solaris failsafe or boot in Single User mode from Solaris

Install CD or Network.

  2. Mount the Parent boot environment root slice to some directory (like

/mnt). You can use the following command to mount:

 

     mount -Fufs /dev/dsk/c0t0d0s0 /mnt

 

  3. Run the <luactivate> utility without any arguments from the Parent boot

environment root slice, as shown below:

 

     /mnt/sbin/luactivate

 

  4. luactivate activates the previous working boot environment and

indicates the result.

 

  5. Exit Single User mode and reboot the machine.

 

**********************************************************************

 

Modifying boot archive service

Propagating findroot GRUB for menu conversion.

File </etc/lu/installgrub.findroot> propagation successful

File </etc/lu/stage1.findroot> propagation successful

File </etc/lu/stage2.findroot> propagation successful

File </etc/lu/GRUB_capability> propagation successful

Deleting stale GRUB loader from all BEs.

File </etc/lu/installgrub.latest> deletion successful

File </etc/lu/stage1.latest> deletion successful

File </etc/lu/stage2.latest> deletion successful

Activation of boot environment <sol_stage2> successful.

 

Step 5 Check the lustatus output.

bash-3.2# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    no        no     -
sol_stage2                 yes      no     yes       no     -

Step 6 Use the init 6 command to reboot the system.

bash-3.2# init 6

 

Step 7 After the OS has booted up, check the lustatus output again.

 

bash-3.2# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      no     no        yes    -
sol_stage2                 yes      yes    yes       no     -

bash-3.2#
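
The same quick check from the beginning can confirm that the root file system type has changed (a sketch; df -n should now report zfs for /):

     df -n /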

 

Step 8 Now sol_stage2 (on rpool) is the active boot environment. Check the zfs list output.

bash-3.2# zfs list

NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  13.3G   280G  42.5K  /rpool
rpool/ROOT             5.15G   280G    31K  legacy
rpool/ROOT/sol_stage2  5.15G   280G  5.15G  /
rpool/dump             2.00G   280G  2.00G  -
rpool/swap             6.20G   286G    16K  -

bash-3.2#
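
The rpool/dump and rpool/swap volumes above were created by lucreate. That they are actually in use can be confirmed with the usual Solaris tools (a sketch):

     swap -l
     dumpadm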

 

Step 9 Now delete the old UFS boot environment, sol_stage1.

bash-3.2# ludelete -f sol_stage1

System has findroot enabled GRUB

Mar  5 15:46:24 solaris10test sendmail[610]: [ID 702911 mail.alert] unable to qualify my own domain name (solaris10test) -- using short name

Updating GRUB menu default setting

Changing GRUB menu default setting to <0>

Saving existing file </boot/grub/menu.lst> in top level dataset for BE <sol_stage2> as <mount-point>//boot/grub/menu.lst.prev.

File </etc/lu/GRUB_backup_menu> propagation successful

Successfully deleted entry from GRUB menu

Updating boot environment configuration database.

Updating boot environment description database on all BEs.

Updating all boot environment configuration databases.
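
After the deletion, lustatus should show only the remaining ZFS boot environment (a sketch; expect a single sol_stage2 entry):

     lustatus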

 

Step 10 The next challenge is to repartition the old UFS disk and add it to rpool as a mirror. Copy the partition table from the rpool disk to the UFS disk.

 

bash-3.2# prtvtoc /dev/rdsk/c0t2d0s2 |fmthard -s - /dev/rdsk/c0t0d0s2

fmthard:  New volume table of contents now in place.
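
Here prtvtoc reads the VTOC from the rpool disk and fmthard writes the same layout onto the old UFS disk. A quick check that the copy landed (a sketch):

     prtvtoc /dev/rdsk/c0t0d0s2

The slice layout should now match the output of prtvtoc /dev/rdsk/c0t2d0s2.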

 

Step 11 Attach the old UFS disk as a mirror of the rpool disk.

bash-3.2# zpool attach -f rpool c0t2d0s0 c0t0d0s0

Make sure to wait until resilver is done before rebooting.

bash-3.2# zpool status

  pool: rpool

state: ONLINE

status: One or more devices is currently being resilvered. The pool will

        continue to function, possibly in a degraded state.

action: Wait for the resilver to complete.

scan: resilver in progress since Sun Mar  5 15:48:08 2017

    375M scanned out of 7.16G at 15.6M/s, 0h7m to go

    375M resilvered, 5.12% done

config:

 

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t2d0s0  ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0  (resilvering)

 

errors: No known data errors

 

Step 12 After resilvering has finished, zpool status will look like this.

 

bash-3.2# zpool status

  pool: rpool

state: ONLINE

scan: resilvered 7.15G in 0h3m with 0 errors on Sun Mar  5 15:51:34 2017

config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t2d0s0  ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0

 

errors: No known data errors

bash-3.2#
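
One more thing worth doing on x86 once the resilver is complete: put the GRUB boot blocks on the newly attached disk so the system can boot from either side of the mirror. A sketch, assuming the default Solaris 10 x86 GRUB stage file locations:

     installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0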