
S11.1 - Move rpool To New SSD - Some Difficulty [SOLVED]

903009 Newbie
Hello everyone! Thanks for reading.

I'm attempting to move my existing rpool from a mirror to a new SSD. (On a new HBA on the same system, which works a treat.)

I'm attempting to follow Chapter 11, "Archiving Snapshots and Root Pool Recovery".

I'm having a little trouble. I've run Step 4 of "Recreating Your Root Pool And Recovering Root Pool Snapshots":
# zpool create -B rpool2 c2t2d2
which sent a 49G snapshot to the drive: "rpool2/ROOT/solaris-ssd@date-time".
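
For completeness, the send that moved the 49G looked approximately like this (the snapshot name is just what I used, shown for illustration):

# zfs snapshot -r rpool@migrate
# zfs send -Rv rpool@migrate | zfs receive -Fdu rpool2
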
So far, so good.

Now...

Step "5. Mount the file system that contains the snapshots from the remote system."
mount -F nfs ...
Huh? What is this about? What about the 49 gigs of rpool snapshot (and GRUB2) I just sent in Step 4?
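
As far as I can tell, the chapter's flow assumes the recursive snapshots were archived to a file on some other host earlier, so that step 5 would look something like this (server and file names invented for illustration):

# mount -F nfs remote-system:/rpool/snaps /mnt
# zfs receive -Fdu rpool2 < /mnt/rpool.recursive.snapfile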

Help me out here. I now seem to need an NFS server so I can zfs send from a device on my S11.1 box across a network and back to another disk on the same machine?

Where have I lost the script, so to speak?

Edited by: 900006 on Dec 11, 2012 3:00 PM
  • 1. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    cindys Pro
    If you want to move your rpool to a new disk, then just attach the new disk,
    let the new disk resilver, test booting from the newly attached disk, and detach
    the old disk.

    Is this what you want? If so, then follow these much easier steps:

    http://docs.oracle.com/cd/E26502_01/html/E29007/gjtuk.html#ghzvz
    How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC)

    http://docs.oracle.com/cd/E26502_01/html/E29007/gjtuk.html#gmcca
    How to Replace a Disk in a ZFS Root Pool (x86/EFI (GPT))
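
    In outline, that attach/resilver/detach sequence comes down to something like this (c0t0d0s0
    here stands for the current rpool disk and c2t2d2s0 for the new SSD slice; substitute your own
    device names):

    # zpool attach rpool c0t0d0s0 c2t2d2s0
    # zpool status rpool
    # bootadm install-bootloader

    Watch zpool status until the resilver completes, use install-bootloader to put the boot blocks
    on the new disk if attach did not already do so, and once a test boot from the SSD works:

    # zpool detach rpool c0t0d0s0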

    Before you do the SSD replacement, review this bug:

    https://bug.oraclecorp.com/pls/bug/webbug_edit.edit_info_top?rptno=15826358
    Massive write slowdown on random write workloads due to SCSI unmap

    The workaround is this:

    In /etc/system add the following line:

    set zfs:zfs_unmap_ignore_size=0
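
    The setting only takes effect after a reboot. Assuming the tunable is exported as a kernel
    variable of that name, you can check the live value afterwards with something like:

    # echo 'zfs`zfs_unmap_ignore_size/D' | mdb -k

    (adjust /D to /E if the variable turns out to be 64 bits wide).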


    Thanks, Cindy
  • 2. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    903009 Newbie
    Wow. Thank you very much for your very fast and very concise reply. Very cool.

    I'll give it a thorough reading before I continue.

    Again, thank you.
  • 3. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    cindys Pro
    You should only have to follow the archiving and restoring steps when you need to do bare metal recovery.
    The one exception is that you should still do the archiving steps to a remote system as part of basic root pool
    backups, in case bare metal recovery is ever needed, and then hope that you never need to do a full recovery.
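
    For example, a periodic root pool archive to another host can be as simple as something like
    this (the host name, file name, and snapshot name are only placeholders):

    # zfs snapshot -r rpool@backup-20121211
    # zfs send -Rv rpool@backup-20121211 | ssh remote-host 'cat > /backup/rpool-20121211.snapfile'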

    As long as a root pool disk is intact, you can attach, detach, replace, and so on while the system is running and
    everything is online. ZFS should make most management tasks easier.

    Thanks, Cindy
  • 4. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    903009 Newbie
    Thanks Cindy, I think you've already helped a great deal by pointing me to the correct approach(es).

    Although I couldn't open the link to the bug, I added the line to /etc/system, at the bottom of the file, preemptively.


    The existing rpool is on a SAS mirror on an (old and slow) onboard controller.
    The SATA SSD is on a newer fast HBA via a breakout cable.
    There is an SSD holding a dedup pool of non-global zones that run VirtualBox instances, and a RAIDZ1 storage pool cached by another SSD, both currently on the new HBA.


    Since the rpool mirror was upgraded from S11/11, I think I may have to:
    add an SMI label and GRUB before I can bring the SSD into the mirror as bootable,
    then resilver,
    then zpool split the mirror,
    then destroy the old mirror (I'll just shut down and pull the disks at first ;) ),
    then rename the rpool2 on the SSD that resulted from the split to rpool,
    then tell the BIOS (or at least make sure it knows it can boot from the SSD in question),
    then boot and enjoy.

    Is this correct? (A rough sketch of the commands I have in mind is below.)
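
    Roughly what I'm imagining for the first part (c2t2d2 is my SSD; the existing mirror disk name
    below is just a placeholder):

    # format -e c2t2d2
    # zpool attach rpool c0t0d0s0 c2t2d2s0
    # bootadm install-bootloader

    That is: format -e to write the SMI/VTOC label and create slice 0 interactively, attach the SSD
    to the mirror, then make sure a boot loader lands on the new disk (installgrub on the legacy
    loader), with the split/rename/BIOS part to follow after the resilver finishes.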



    On edit:

    Can I just:

    1. Attach the new SSD to the existing mirror and let it resilver,
    2. pull the SAS drives,
    3. destroy the mirror,
    4. profit?


    Or...since I can't read the details of the bug...is it a better use of resources to leave rpool on the SAS mirror and simply use the SSD as a cache device?
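
    If the cache route wins, I gather it's just a matter of this, with "tank" standing in for
    whichever pool gets the boost:

    # zpool add tank cache c2t2d2

    and the cache device can be pulled back out later with zpool remove if plans change.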



    Edited by: 900006 on Dec 10, 2012 4:30 PM
  • 5. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    cindys Pro
    I would agree that the better approach is to leave the SSD for pools that could
    take advantage of a performance enhancement, rather than using it for rpool.

    I think you are also correct that if you have upgraded from Solaris 11, then you
    would need to add a VTOC label to the SSD and create a slice 0 before you
    attach it to the existing rpool.

    Splitting and renaming the root pool is too much trouble. You should be able
    to attach a new disk or replace an existing root pool disk and let the rpool
    contents resilver automatically. We do this all the time when an existing rpool disk
    is too small.
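
    For example, an in-place swap is a one-liner (device names are placeholders for the old
    disk and the new SSD slice):

    # zpool replace rpool c0t0d0s0 c2t2d2s0

    and zpool status shows the resilver progress.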

    Make sure you have done the math with dedup to determine that your
    data is dedupable and that you have enough memory to support it.

    http://docs.oracle.com/cd/E26502_01/html/E29007/gazss.html#gjhav
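
    For a rough estimate (pool name is a placeholder), zdb can simulate dedup on existing data
    and show the dedup table sizes on a pool that already has it enabled:

    # zdb -S tank
    # zdb -DD tank

    The usual rule of thumb is a few hundred bytes of memory per DDT entry (about 320 bytes is
    the figure most often quoted), so multiply the entry count by that to size RAM.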

    Thanks, Cindy
  • 6. Re: S11.1 - Move rpool To New SSD - Some Difficulty Following The Documentation
    903009 Newbie
    Thanks again for the pointers, and the metaphysic.
    I was complicating a thing made to be simple. :D
    You're right - a simple observation of the activity lights shows that my rpool drives aren't hit very often. It will take a long, long time to 'notice' anything other than a fast boot and scrub.

    And thank you for the dedup warning.
    memstat shows 49% of RAM devoted to ZFS after 31 days of uptime.
    zonestat shows only about 16GB phys used of 32GB available.
    Less than 8GB to maintain the current dedup tables (1.58 ratio) seems about right under the "32GB/TB" rubric I've encountered regarding ZFS deduplication RAM allowance for storage.
    Although it's all running from RAM right now, I could do most of what those VMs do much more efficiently in zones, on pools without dedup, using native Solaris applications. That is the plan.
    In the future, the dedup pool will help manipulate LiDAR and other point-cloud data, which may prove to be a much better use of the power of ZFS dedup.
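
    For reference, the figures above came from roughly these commands ("tank" stands in for my
    dedup pool):

    # echo ::memstat | mdb -k
    # zonestat 5
    # zpool get dedupratio tank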

    Solaris really is a brilliant OS.
    Thanks again for your patient help, Cindy. ZFS will solve the rpool question itself at such time as it actually needs solving. :D
    I went ahead and used the SSD in question to properly mirror that dedup pool and run a 60-second scrub. It took less than 4 minutes of my time from start to finish. LOL, I don't touch-type. :D
