This discussion is archived
3 Replies · Latest reply: Jan 11, 2013 6:14 PM by cindys

ZFS Mirror on a slice in an EFI label

933584 Newbie
I have some questions I'm hoping someone can clarify.

I recently did a fresh Solaris 11.1 install. It created an EFI-labeled disk which looks as follows:
Intel 710 SSD 100GB
# prtvtoc /dev/rdsk/c8t0d0
* Dimensions:
*     512 bytes/sector
* 195371568 sectors
* 195371501 accessible sectors
Total disk sectors available: 195355117 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       30.00GB          62914815    
  1  BIOS_boot    wm          62914816      256.00MB          63439103    
  2        usr    wm          63439104       60.00GB          189268223    
  3        usr    wm         189268224        2.90GB          195355135    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         195355136        8.00MB          195371519
I added slices 2 and 3 for use as cache and ZIL devices for the tank pool, which is built from SAS magnetic drives.
I then added another Intel 710 SSD (c8t1d0) and used format -e to create an EFI label and slices identical to those on c8t0d0, so I could mirror the rpool, mirror the ZIL, and stripe the cache.
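
For reference, the commands involved are roughly the following (slice numbers taken from the layout above; consider this a sketch rather than a tested recipe):

# zpool attach rpool c8t0d0s0 c8t1d0s0           (mirror the root pool)
# zpool add tank log mirror c8t0d0s3 c8t1d0s3    (mirrored ZIL for tank)
# zpool add tank cache c8t0d0s2 c8t1d0s2         (striped cache for tank)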

I noticed that the reserved sectors were different on this freshly created EFI label than on the first disk. The disk geometry is the same and the reserved area is the same size; it just uses different sectors.
Intel 710 SSD 100GB
# prtvtoc /dev/rdsk/c8t1d0
* Dimensions:
*     512 bytes/sector
* 195371568 sectors
* 195371501 accessible sectors
Total disk sectors available: 195355117 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       30.00GB          62914815    
  1  BIOS_boot    wm          62914816      256.00MB          63439103    
  2        usr    wm          63439104       60.00GB          189268223    
  3        usr    wm         189268224        2.90GB          195350015    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         195350016        8.00MB          195366399
As you'll notice, the reserved partition is in a different spot, which forces slice 3 to be slightly smaller than on the other disk.

So I have a few questions:

1. Can I safely move the reserved slice on disk c8t1d0 to match c8t0d0?
2. rpool is set to use c8t0d0s0. When I attach c8t1d0s0, it works and resilvers just fine, but does it know to mirror the BIOS_boot slice? I assume this is where GRUB is installed; I would need it if c8t0d0 dies.
3. I'll be adding two more identical 710 drives. If I set them up the same way, can rpool be a RAID 10, or is only a mirror supported?
4. Slice 0 starts at sector 256, which divides evenly into 512, as recommended for performance. Should every slice start on a sector that divides evenly into 512? What about ending sectors?

Thanks!
  • 1. Re: ZFS Mirror on a slice in an EFI label
    cindys Pro
    I'm not sure I'm following everything below, so I have a few questions/comments:

    1. Both of these disks look like rpool disks because of the presence of the
    BIOS_boot partition, although on my S11.1 system, BIOS_boot is partition 0.
    Non-rpool disks with EFI labels shouldn't have this partition.

    Are these your rpool disks? If they are, then I don't see any point in adding cache and
    ZIL slices to a root pool. See 3-4 below.

    2. Yes, I think you can move the reserve partition boundary on c8t1d0 to match
    c8t0d0 as long as there is no data on c8t1d0.
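
    From memory, the rough sequence in format looks something like this (the
    exact prompts may differ, so double-check against your own session):

    # format -e c8t1d0
    format> partition
    partition> 8                  (select the reserved partition)
    ... set the starting sector to 195355136 and the size to 8.00mb ...
    partition> label
    partition> quit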

    3. If you have 2 more identical drives, then consider a mirrored root pool with 2 disks
    and create another mirrored non-root pool for your data with the other 2 disks. Then,
    get some spares.
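
    For example, something like this for the data pool (the pool and device
    names here are just placeholders):

    # zpool create datapool mirror c8t2d0 c8t3d0
    # zpool add datapool spare c8t4d0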

    4. I would recommend keeping your root pool small and isolated from your data.
    Root pools can only be single disks or mirrors. RAIDZ isn't supported for
    root pools yet.

    5. A separate ZIL is good for logging synchronous writes, such as for an NFS server.
    A separate cache device is good for caching reads and improving read performance.
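
    In zpool status output, log and cache devices show up as separate top-level
    sections, something like this (abbreviated):

        NAME          STATE
        tank          ONLINE
          mirror-0    ONLINE
        logs
          mirror-1    ONLINE
        cache
          c8t0d0s2    ONLINE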

    6. There's a bug that is making SSD performance slow but there is a workaround:

    Bug 15826358 - SUNBT7185015 Massive write slowdown on random write workloads due to SCSI unmap

    The workaround is this:

    In /etc/system add the following line:

    set zfs:zfs_unmap_ignore_size=0
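
    If you want to try it before rebooting, you should also be able to set the
    same variable on the live kernel with mdb (this assumes a 32-bit tunable;
    verify that before poking the kernel):

    # echo 'zfs_unmap_ignore_size/W 0' | mdb -kw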

    Carving up disks just makes administration so much harder.

    Thanks, Cindy
  • 2. Re: ZFS Mirror on a slice in an EFI label
    933584 Newbie
    So, I have 8 drives currently in the system: 4 magnetic 300 GB SAS drives (tank pool, RAID 10) and 4 Intel 100 GB SSDs (OS installed on a slice).

    You pretty much answered my question about the rpool mirror. I was hoping I could slice up all 4 SSDs identically and provide the tank pool with a RAID 10 ZIL across slice 3 of all 4 SSDs and a RAID 0 cache (L2ARC) across slice 2 of all 4 SSDs, while using slice 0 of all 4 SSDs as a RAID 10 rpool. But since rpool can only be mirrored, there's no sense wasting space on a 4-way mirror.

    It also sounds like, for the rpool mirror to be bootable on all mirrors, I need to dedicate a whole disk to it.

    Anyhow, instead of describing it, I posted a picture of the new setup. I'd appreciate any feedback; it will be used as a Percona DB server.

    Thank you much!

    edit: link! My ZFS Setup: http://i.imgur.com/M1TG1.jpg

    Edited by: TomS on Jan 11, 2013 4:43 PM
  • 3. Re: ZFS Mirror on a slice in an EFI label
    cindys Pro
    Nice setup.

    You don't need to use the whole disk to mirror a root pool disk, but that is what the installer allocated for your primary root pool disk, so you'll need to do the same for the mirrored root pool disk. Yes, it is one large slice 0, but administratively you mostly refer to the whole disk, such as when you attach the disk to create a mirrored root pool.
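
    Regarding your earlier question about the BIOS_boot slice: after you attach
    the second disk, I believe you can make sure the boot loader is installed on
    all root pool devices with:

    # bootadm install-bootloader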

    A mirrored pool should perform well for a DB product. We don't recommend RAIDZ for database products.

    If you need both a log and a cache device, then your setup looks doable. I know that for an Oracle DB on ZFS, we would recommend a log device for the redo log pool, but no cache device.

    I would consider using whole disks for everything because they are so much easier to manage and this is one of the advantages of using ZFS. Slicing disks is just so 20th century and I hope never to have to explain the 27 steps to partition a disk again.

    In S11.1, a root pool on an x86 system gets an EFI-labeled disk by default, as you observed, which is another step toward the elimination of disk slices. Hooray.

    Lastly, stuff happens. Always have good backups.

    Thanks, Cindy
