This discussion is archived
6 Replies Latest reply: Aug 30, 2012 3:03 AM by 959007

S10 x86 ZFS on VMWare - Increase root pool?

954863 Newbie
I'm running Solaris 10 x86 on VMWare.
I need more space in the zfs root pool.
I doubled the provisioned size of Hard disk 1, but the extra space is not visible inside the VM (format still reports the old size).
I tried adding a second virtual disk, but a root pool can't have multiple top-level vdevs.
How can I add space to my root pool without rebuilding it?
  • 1. Re: S10 x86 ZFS on VMWare - Increase root pool?
    bobthesungeek76036 Pro
    I've not tried this before but these are the steps I would try to accomplish this task:

    cfgadm -c configure cX (where X is the controller that your rpool disk is on)

    run format, select the disk, then: type -> auto configure -> label -> quit

    zpool set autoexpand=on rpool

    reboot.

    DISCLAIMER - Like I stated before, I've not tried this but it would seem logical that these steps would work. If anyone sees a flaw in my thinking PLEASE speak up.
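Collected into one session, the proposed (untested) sequence would look roughly like this; the controller and disk names (c1, c1t0d0) are placeholders for your actual rpool device:

```shell
# Rescan the controller holding the rpool disk so Solaris
# notices the larger virtual disk
cfgadm -c configure c1

# Relabel the disk to cover the new size
format c1t0d0        # then: type -> auto configure -> label -> quit

# Allow ZFS to grow the pool onto the enlarged device
zpool set autoexpand=on rpool
reboot
```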
  • 2. Re: S10 x86 ZFS on VMWare - Increase root pool?
    954863 Newbie
    Thanks. I tried; format's Auto Configure doesn't work while the disk is in use.
  • 3. Re: S10 x86 ZFS on VMWare - Increase root pool?
    bobthesungeek76036 Pro
    I bet it would work if you booted off a Solaris DVD ISO image.
  • 4. Re: S10 x86 ZFS on VMWare - Increase root pool?
    954863 Newbie
    The actual objective of this effort was to enable luupgrade in a production environment, but there was insufficient space to create an alternate boot environment (ABE).

    This is how I resolved it:

    1. Create a new, larger, TEMPORARY disk.
    2. Create a TEMPORARY root pool and BE on the TEMPORARY disk, with a different root pool name (I could have done luupgrade here).
    3. Boot from the TEMPORARY BE.
    4. Destroy the original disk. This could have completed the effort, except that we wanted to keep our naming convention.
    5. Create a new disk with the larger size.
    6. Create a new root pool and BE from the active one, but with our standard names.
    7. Upgrade the patch level of the new BE.
    8. Boot from the new disk.
    9. Destroy the old disk.
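With Live Upgrade, this migration can be sketched as below; every pool, BE, and device name (tpool, tempBE, c1t0d0s0, c2t0d0s0) is a placeholder, and on x86 the new disk also needs GRUB boot blocks (installgrub) plus a matching VM boot order, which is omitted here:

```shell
# Steps 1-3: temporary pool and BE on the new, larger disk
zpool create tpool c2t0d0s0
lucreate -n tempBE -p tpool       # copy the current BE into tpool
luactivate tempBE
init 6                            # boot from the temporary BE

# Steps 5-8: recreate the standard pool on a fresh disk and copy back
zpool create rpool c1t0d0s0
lucreate -n newBE -p rpool
luactivate newBE
init 6                            # boot from the new disk
```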
  • 5. Re: S10 x86 ZFS on VMWare - Increase root pool?
    cindys Pro
    True, you can't add a second top-level device to stripe the root pool, but you can mirror it, which
    means you can use the zpool attach and detach features to attach the larger disk and then detach the
    smaller one.

    I'm unfamiliar with VMWare but next time you might consider these easier steps:

    1. Create a larger disk

    2. Attach the larger disk to the existing (smaller) root pool disk

    3. Let the new disk resilver

    4. Detach the smaller disk

    This process is described here:

    http://docs.oracle.com/cd/E23823_01/html/819-5461/ghzvz.html#ghzvx
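In command form, the attach/resilver/detach cycle looks roughly like this; c1t0d0s0 as the small disk and c2t0d0s0 as the larger replacement are assumptions:

```shell
# Mirror the root pool onto the larger disk
zpool attach rpool c1t0d0s0 c2t0d0s0

# Wait until zpool status reports the resilver is complete
zpool status rpool

# Put boot blocks on the new disk (Solaris 10 x86), then drop the old one
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
zpool detach rpool c1t0d0s0

# On releases with the autoexpand property, enable it so the pool grows
zpool set autoexpand=on rpool
```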

    Thanks, Cindy
  • 6. Re: S10 x86 ZFS on VMWare - Increase root pool?
    959007 Newbie
    Hi,

    This is what I did in single-user mode (it may fail in multi-user mode):
    -> format -> partition -> print

    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)

    Part      Tag          Flag     Cylinders       Size        Blocks
      0       root          wm     1 - 1301        9.97GB    (1301/0/0) 20900565
      1       unassigned    wm     0               0         (0/0/0)           0
      2       backup        wm     0 - 1301        9.97GB    (1302/0/0) 20916630
      3       unassigned    wm     0               0         (0/0/0)           0
      4       unassigned    wm     0               0         (0/0/0)           0
      5       unassigned    wm     0               0         (0/0/0)           0
      6       unassigned    wm     0               0         (0/0/0)           0
      7       unassigned    wu     0 - 0           7.84MB    (1/0/0)       16065
      8       boot          wu     0 - 0           7.84MB    (1/0/0)       16065
      9       unassigned    wm     0               0         (0/0/0)           0

    -> format -> fdisk

    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks

                                         Cylinders
    Partition   Status    Type          Start   End    Length    %
    =========   ======    ============  =====   ====   ======   ===
        1       Active    Solaris2        1     1304    1304     83

    -> format -> fdisk -> delete partition 1
    -> format -> fdisk -> create SOLARIS2 partition with 100% of the disk

    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks

                                         Cylinders
    Partition   Status    Type          Start   End    Length    %
    =========   ======    ============  =====   ====   ======   ===
        1       Active    Solaris2        1     1565    1565    100

    -> format -> partition -> print

    Current partition table (original):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)

    Part      Tag          Flag     Cylinders       Size        Blocks
      0       unassigned    wm     0               0         (0/0/0)           0
      1       unassigned    wm     0               0         (0/0/0)           0
      2       backup        wu     0 - 1562       11.97GB   (1563/0/0) 25109595
      3       unassigned    wm     0               0         (0/0/0)           0
      4       unassigned    wm     0               0         (0/0/0)           0
      5       unassigned    wm     0               0         (0/0/0)           0
      6       unassigned    wm     0               0         (0/0/0)           0
      7       unassigned    wm     0               0         (0/0/0)           0
      8       boot          wu     0 - 0           7.84MB   (1/0/0)       16065
      9       unassigned    wm     0               0         (0/0/0)           0

    -> format -> partition -> 0 (slice 0: starting cylinder 1, size 1562e, i.e. ending at cylinder 1562)

    Current partition table (unnamed):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)

    Part      Tag          Flag     Cylinders       Size        Blocks
      0       unassigned    wm     1 - 1562       11.97GB   (1562/0/0) 25093530
      1       unassigned    wm     0               0         (0/0/0)           0
      2       backup        wu     0 - 1562       11.97GB   (1563/0/0) 25109595
      3       unassigned    wm     0               0         (0/0/0)           0
      4       unassigned    wm     0               0         (0/0/0)           0
      5       unassigned    wm     0               0         (0/0/0)           0
      6       unassigned    wm     0               0         (0/0/0)           0
      7       unassigned    wm     0               0         (0/0/0)           0
      8       boot          wu     0 - 0           7.84MB   (1/0/0)       16065
      9       unassigned    wm     0               0         (0/0/0)           0
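As a sanity check, the block counts in these tables follow directly from the geometry format reports (16065 blocks of 512 bytes per cylinder); a portable shell snippet to verify:

```shell
# Geometry reported by format above
cyls=1563               # cylinders in slice 2 (backup)
blocks_per_cyl=16065
bytes_per_block=512

blocks=$((cyls * blocks_per_cyl))
bytes=$((blocks * bytes_per_block))

echo "$blocks blocks"   # matches the backup slice's block count
echo "$bytes bytes"     # ~11.97 GB
```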

    -> format -> partition -> label

    zpool set autoexpand=on rpool

    zpool list

    zpool scrub rpool

    zpool status
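Condensed, the in-place procedure above amounts to the following; note that the data survives only because the recreated fdisk partition and slice 0 start at the same cylinder (cylinder 1) as before, so be careful to keep those offsets:

```shell
# In format (single-user mode), on the rpool disk:
#   fdisk     -> delete partition 1, recreate SOLARIS2 at 100% of the disk
#   partition -> 0 -> starting cyl 1, size 1562e
#   label

format

# Then let ZFS use the new space and verify pool health
zpool set autoexpand=on rpool
zpool list
zpool scrub rpool
zpool status
```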



    Best regards,

    Ibraima
