9 Replies Latest reply: Jan 17, 2012 5:12 AM by SteveS

    ZFS root mirroring, unable to attach mirror disk

    904579
      Hi,

      I'm getting the error below while attaching the mirror root disk.

      # zpool attach -f rpool c0d0 c0d2
      cannot label 'c0d2': EFI labeled devices are not supported on root pools.

      I also tried format -e on the disks below:


      0. c0d0 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
      /virtual-devices@100/channel-devices@200/disk@0
      1. c0d2 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
      /virtual-devices@100/channel-devices@200/disk@2

      and labelled them with an SMI label, but I'm still getting the same error.
        • 1. Re: ZFS root mirroring, unable to attach mirror disk
          Nik
          Hi.
          Please show the current "zpool status rpool" output.

          Generally, rpool uses a slice, not the whole disk.
          So you need to:
          1. Get the VTOC of c0d0:
          prtvtoc /dev/rdsk/c0d0s0

          2. Create the same slices on c0d2. Use format, or:
          prtvtoc /dev/rdsk/c0d0s0| fmthard -s - /dev/rdsk/c0d2s0

          3. Attach the slice:
          zpool attach -f rpool c0d0s0 c0d2s0

          Regards.
          • 2. Re: ZFS root mirroring, unable to attach mirror disk
            SteveS
            Nik is correct, but to give some context around why you're getting the issue: EFI labels are not supported on root pools. If you do not supply a slice number when you create a zpool or attach a vdev, ZFS uses the entire disk and therefore applies an EFI label to the device. By specifying a slice, you tell ZFS to keep the current SMI/VTOC label (which must be correct before you use the disk). ZFS won't copy the label from the other disks in the pool.
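            Using the device names from this thread, the two forms of the attach command can be sketched as follows; the is_slice helper at the end is hypothetical (a pre-flight check for illustration, not a ZFS tool):

```shell
# Whole-disk name: ZFS writes a new EFI label, which root pools reject.
#   zpool attach -f rpool c0d0 c0d2
#   -> cannot label 'c0d2': EFI labeled devices are not supported on root pools.
#
# Slice name: ZFS keeps the existing SMI/VTOC label on the device.
#   zpool attach -f rpool c0d0s0 c0d2s0

# Hypothetical pre-flight check: a root-pool vdev name should end in a slice.
is_slice() {
  case "$1" in
    *s[0-9]) return 0 ;;   # e.g. c0d2s0 - existing SMI label is kept
    *)       return 1 ;;   # e.g. c0d2   - whole disk, would get an EFI label
  esac
}

is_slice c0d2s0 && echo "c0d2s0: ok, slice name"
is_slice c0d2   || echo "c0d2: whole disk, would be relabeled EFI"
```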

            HTH
            • 3. Re: ZFS root mirroring, unable to attach mirror disk
              904579
              Hello,

              I applied all the steps mentioned in this thread and am getting the errors below while mirroring ZFS.


              ggntestmirr: /\> zpool attach -f rpool c0d0 c0d1
              cannot label 'c0d1': EFI labeled devices are not supported on root pools.
              ggntestmirr: /\> zpool attach -f rpool c0d0s0 c0d1s0
              cannot attach c0d1s0 to c0d0s0: new device must be a single disk
              ggntestmirr: /\>



              ggntestmirr: /\> zpool status
              pool: rpool
              state: ONLINE
              scrub: none requested
              config:

              NAME STATE READ WRITE CKSUM
              rpool ONLINE 0 0 0
              c0d0s0 ONLINE 0 0 0

              errors: No known data errors
              ggntestmirr: /\> format
              Searching for disks...done


              AVAILABLE DISK SELECTIONS:
              0. c0d0 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
              /virtual-devices@100/channel-devices@200/disk@0
              1. c0d1 <DGC-RAID5-0326 cyl 32766 alt 2 hd 64 sec 10>
              /virtual-devices@100/channel-devices@200/disk@1
              Specify disk (enter its number):
              • 4. Re: ZFS root mirroring, unable to attach mirror disk
                SteveS
                As per Nik's step 3, you MUST specify slice 0 (s0). By specifying s0 you tell ZFS to use the SMI label instead of the EFI label that gets applied when you provide the entire disk. Look at your 'zpool status -v' output and you'll see that it shows "c0d0s0".

                Because you issued "zpool attach -f rpool c0d0 c0d1" first, ZFS would have put an EFI label on the c0d1 disk which is why the second command failed. Copy the SMI label from c0d0 to c0d1 and use "zpool attach -f rpool c0d0s0 c0d1s0" instead.
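                A sketch of that sequence, using the device names above (the Solaris commands are shown as comments since they must be run as root on the host; the vtoc_same helper is hypothetical, just a way to verify the copy):

```shell
# 1. Relabel c0d1 with an SMI label: format -e, choose "label", then "0. SMI".
#
# 2. Copy the slice table from the source disk to the new mirror disk:
#      prtvtoc /dev/rdsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d1s0
#
# 3. Attach the slice (not the whole disk) to the pool:
#      zpool attach -f rpool c0d0s0 c0d1s0

# Hypothetical check that two saved prtvtoc dumps describe the same layout
# (prtvtoc prefixes human-readable commentary with '*', so strip it first).
vtoc_same() {
  a=$(grep -v '^\*' "$1")
  b=$(grep -v '^\*' "$2")
  [ "$a" = "$b" ]
}
```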
                • 5. Re: ZFS root mirroring, unable to attach mirror disk
                  Nik
                  Hi.


                  It looks like your system is running in some virtual environment.

                  On x86, you need to create a Solaris fdisk partition on disk d1.

                  1. Use fdisk to view the partition table of c0d0, then create the same table on c0d1:
                  fdisk /dev/rdsk/c0d0p0
                  View the table.

                  fdisk /dev/rdsk/c0d1p0
                  Create the same table.

                  2. Use format to create the same slices on c0d1 as on c0d0.
                  You can use this command for that:

                  prtvtoc /dev/rdsk/c0d0s0 | fmthard -s - /dev/rdsk/c0d1s0

                  3. After that, you can attach the disk to the mirror:
                  zpool attach -f rpool c0d0s0 c0d1s0

                  4. You need to install a boot block on c0d1 to make it available as an alternative boot device.
                  See man installboot.
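                  For step 4, the command depends on the architecture; a sketch using the standard Solaris 10 paths (verify them against man installboot / man installgrub on your system; the bootblk_cmd helper is hypothetical, just to show the choice):

```shell
# On SPARC, the ZFS boot block is installed with installboot:
#   installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0d1s0
#
# On x86, GRUB is installed with installgrub instead:
#   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d1s0

# Hypothetical helper that picks the right command for a given architecture.
bootblk_cmd() {
  case "$1" in
    sparc) echo "installboot -F zfs /usr/platform/\`uname -i\`/lib/fs/zfs/bootblk $2" ;;
    i386)  echo "installgrub /boot/grub/stage1 /boot/grub/stage2 $2" ;;
    *)     return 1 ;;
  esac
}

bootblk_cmd sparc /dev/rdsk/c0d1s0   # prints the SPARC variant
```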


                  Regards.
                  • 6. Re: ZFS root mirroring, unable to attach mirror disk
                    904579
                    Hi,

                    I formatted my mirror disk c0d1 again and labelled it in SMI mode, then issued the command below again:

                    ggntestmirr: $PWD\> zpool attach -f rpool c0d0s0 c0d1s0
                    cannot attach c0d1s0 to c0d0s0: new device must be a single disk


                    I'm getting the same error here as well.

                    This is a SPARC Solaris 10 machine.
                    • 7. Re: ZFS root mirroring, unable to attach mirror disk
                      SteveS
                      As you're using a virtual environment (LDOMs??) this could very well be Bug ID 6852962 - "zpool attach on root pool in a guest LDOM fails with cannot attach new device must be a single disk". The bug is fixed in Kernel Patch 142909-17 or higher. What Kernel rev does your virtual environment have? (uname -a)
                      • 8. Re: ZFS root mirroring, unable to attach mirror disk
                        904579
                        Please find the uname -a output below:

                        ggntestmirr: /\> uname -a
                        SunOS ggntestmirr 5.10 Generic_141444-09 sun4v sparc sun4v
                        • 9. Re: ZFS root mirroring, unable to attach mirror disk
                          SteveS
                          141444-09 is below 142909-17, where the issue is fixed. Try patching the system using the Solaris Patchset, downloadable from My Oracle Support, and see if you have the same issue after patching. See Doc ID 1273718.1 for step-by-step instructions on locating the Patchset.
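                          That comparison can be sketched as a small shell helper (hypothetical, not a Solaris tool; it assumes kernel patch IDs in the Generic_NNNNNN-RR lineage increase numerically, as they do between the two revisions in this thread):

```shell
# Extract the kernel patch from the `uname -v` string, e.g. "Generic_141444-09".
kernel_patch() {
  echo "$1" | sed 's/^Generic_//'
}

# True when patch $1 (form NNNNNN-RR) is older than patch $2.
patch_lt() {
  a_base=${1%-*}; a_rev=${1#*-}
  b_base=${2%-*}; b_rev=${2#*-}
  [ "$a_base" -lt "$b_base" ] ||
    { [ "$a_base" -eq "$b_base" ] && [ "$a_rev" -lt "$b_rev" ]; }
}

if patch_lt "$(kernel_patch Generic_141444-09)" 142909-17; then
  echo "kernel is below 142909-17: affected by bug 6852962, patching required"
fi
```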