This discussion is archived
8 Replies · Latest reply: Nov 12, 2012 3:01 AM by 964794

Solaris 10

964794 Newbie
Hi,

I need to convert my ZFS file systems back to Solaris Volume Manager (SVM). Is there any way or tool to do it?

This is the hardware and OS:

SunOS 5.10 Generic_147440-10 sun4u sparc SUNW,Sun-Fire-V490

Any ideas on how this can be achieved?
Thanks,
  • 1. Re: Solaris 10
    797449 Newbie
    There is no way to convert a ZFS pool into an SVM volume.

    However, if you have a mirrored pool, you may consider:
    - Detaching the mirror drive, initializing it with SVM, and then copying your data manually to the new SVM volume.
    - Testing the result.
    - Destroying the ZFS pool, then initializing the remaining disk and attaching it to the SVM volume.

    Regards,

    Ushas Symon
  • 2. Re: Solaris 10
    964794 Newbie
    I have tried to migrate Solaris 10 from ZFS to SVM by using Live Upgrade to copy the OS file systems from ZFS to a UFS structure (SVM), but Live Upgrade reports this message:

    ERROR: File system template option -m is not allowed when BE has ZFS root

    We need to migrate the OS in rpool (ZFS) to SVM. Is there a procedure?

    Thanks

    Victorio
  • 3. Re: Solaris 10
    797449 Newbie
    You may try creating a flash archive image using flarcreate and restoring it to a new UFS file system.

    Regards
    Ushas Symon
  • 4. Re: Solaris 10
    964794 Newbie
    Hi Ushas.

    Progress...

    I'm trying to run flarcreate:

    command: flarcreate -n solaris10 -c -S -R / /imagen/solaris10.flar

    I had to remove the "-x" option because I get the following:

    "ERROR: The A, f, F, x, x, y and z options are not compatible with archiving to ZFS pool"

    Then I try to boot from DVD and select "Flash Image", and when I select the flar image I created, this appears:

    "ERROR: The file type is ZFS can not be used to install a system"

    I think flarcreate does not work well with ZFS. Is there something I'm doing wrong when I run it?

    Thanks very much for your input.

    Victorio
  • 5. Re: Solaris 10
    797449 Newbie
    Hi Victorio,

    Why do you want to go back to SVM when a ZFS root pool is superior and has many advanced features?

    Regards
    Ushas Symon
  • 6. Re: Solaris 10
    964794 Newbie
    The company prefers to have all servers unified on Solaris 10 with SVM file systems, "for now".
    Any suggestions would be most appreciated

    Thanks
  • 7. Re: Solaris 10
    cindys Pro
    I don't know of any automated tools that can convert a ZFS root file system to a UFS root file system.
    The Solaris tools generally migrate UFS to ZFS, not the other way around.

    You can use tar or cpio to copy the ZFS data into a UFS file system, but this method is probably not
    very helpful for copying a root file system.
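    The tar approach Cindy mentions can be sketched as a create/extract pipe. On the real host the source would be the ZFS dataset mount and the target the mounted UFS file system; this demo substitutes temporary directories so it can be run anywhere.

```shell
# Pipe tar-create into tar-extract to copy a tree while preserving
# permissions (-p). Temp dirs stand in for the real mount points:
srcdir=$(mktemp -d)   # stands in for the ZFS dataset mount
dstdir=$(mktemp -d)   # stands in for the UFS file system mount
echo "hello" > "$srcdir/data.txt"
mkdir "$srcdir/etc"
echo "config" > "$srcdir/etc/app.conf"

(cd "$srcdir" && tar cf - .) | (cd "$dstdir" && tar xpf -)
cat "$dstdir/data.txt"
```

    As Cindy notes, this works for data file systems but is not sufficient for a bootable root, which also needs device trees, boot blocks, and vfstab adjustments, as the procedure later in this thread shows.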

    I think you are better off re-installing one of your systems back to UFS and then using that image
    to replicate/install other systems.

    Thanks,

    Cindy
  • 8. Re: Solaris 10
    964794 Newbie
    **Migration procedure from ZFS to SVM, using the flar option.** It works fine.

    host007:root:/$ zpool list
    NAME        SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
    base_pool  21.9G  4.96G  16.9G  22%  ONLINE  -
    dump_pool    24G  22.0G  2.00G  91%  ONLINE  -
    rpool      22.5G  10.0G  12.5G  44%  ONLINE  -
    swap_pool  41.8G   132K  41.7G   0%  ONLINE  -

    zpool export base_pool


    We break the ZFS system mirrors (rpool, swap_pool, dump_pool):

    host007:root:/$ zpool detach rpool c4t500000E01398B4B0d0s0
    host007:root:/$ zpool detach dump_pool c4t500000E01398B4B0d0s3
    host007:root:/$ zpool detach swap_pool c4t500000E01398B4B0d0s1


    We generate the flash archive with flarcreate as follows:

    flarcreate -n solaris10_host007 -c -L cpio -S -R / /mnt/solaris10_host007.flar


    We start the Solaris 10 installation for SVM:
    {10} ok boot cdrom

    and choose the F4_Flash option.


    We mount the system disk where SVM resides:
    mkdir -p /tmp/root/a
    host007:root:/$ mount /dev/dsk/c4t500000E01398B4B0d0s0 /tmp/root/a
    rm /tmp/root/a/etc/path_to_inst
    rm -rf /tmp/root/a/devices/*
    rm -rf /tmp/root/a/dev/*
    cd /devices; find . | cpio -pmd /tmp/root/a/devices
    cd /dev; find . | cpio -pmd /tmp/root/a/dev
    devfsadm -C -r /tmp/root/a -p /tmp/root/a/etc/path_to_inst -v
    cp /etc/path_to_inst /tmp/root/a/etc/


    Updating configuration files:

    echo "domain host.es.cert" >  /tmp/root/a/etc/resolv.conf
    echo "nameserver 10.130.122.134" >> /tmp/root/a/etc/resolv.conf
    echo "10.130.190.1" >  /tmp/root/a/etc/defaultrouter
    echo "10.130.190.33 netmask 255.255.255.0" > /tmp/root/a/etc/hostname.ce1
    echo "10.130.190.4 netmask 255.255.254.0" > /tmp/root/a/etc/hostname.ce2
    echo "10.130.190.10 netmask 255.255.255.0" > /tmp/root/a/etc/hostname.ce3
    echo "host007" > /tmp/root/a/etc/nodename

    vi /tmp/root/a/etc/vfstab


    {10} ok boot disk1


    host007 console login:



    We encapsulate the OS root under SVM:

    cp /etc/system /etc/system.orig_16102012
    cp /etc/vfstab /etc/vfstab.orig_16102012


    host007:root:/etc$ /usr/sbin/metadb -a -c3 -f /dev/dsk/c4t500000E01398B4B0d0s3 /dev/dsk/c4t500000E01398B4B0d0s4
    host007:root:/etc$ metadb -i
    host007:root:/etc$ metainit -f d11 1 1 c4t500000E01398B4B0d0s0
    d11: Concat/Stripe is setup
    host007:root:/etc$ metainit d10 -m d11 1
    d10: Mirror is setup
    host007:root:/etc$ metaroot d10
    host007:root:/etc$ metainit -f d21 1 1 c4t500000E01398B4B0d0s1
    d21: Concat/Stripe is setup
    host007:root:/etc$ metainit d20 -m d21 1
    d20: Mirror is setup
    host007:root:/etc$ metainit -f d31 1 1 c4t500000E01398B4B0d0s5
    d31: Concat/Stripe is setup
    host007:root:/etc$ metainit d30 -m d31 1
    d30: Mirror is setup
    host007:root:/etc$ metainit -f d41 1 1 c4t500000E01398B4B0d0s6
    d41: Concat/Stripe is setup
    host007:root:/etc$ metainit d40 -m d41 1
    d40: Mirror is setup
    host007:root:/etc$ metainit -f d51 1 1 c4t500000E01398B4B0d0s7
    d51: Concat/Stripe is setup
    host007:root:/etc$ metainit d50 -m d51 1
    d50: Mirror is setup


    host007:root:/$ cd /etc
    host007:root:/etc$ vi vfstab
    #device                          device                             mount   FS     fsck  mount    mount
    #to mount                        to fsck                            point   type   pass  at boot  options
    #
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c4t500000E01398B4B0d0s1 - - swap - no -
    /dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
    /dev/dsk/c4t500000E01398B4B0d0s5 /dev/rdsk/c4t500000E01398B4B0d0s5 /usr ufs 1 no -
    /dev/dsk/c4t500000E01398B4B0d0s6 /dev/rdsk/c4t500000E01398B4B0d0s6 /var ufs 1 no -
    /dev/dsk/c4t500000E01398B4B0d0s7 /dev/rdsk/c4t500000E01398B4B0d0s7 /opt ufs 2 yes -
    /devices - /devices devfs - no -
    sharefs - /etc/dfs/sharetab sharefs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes -
    host007:root:/etc$
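    Before the init 6 it is worth sanity-checking the edited vfstab, since a single malformed entry can leave the system unbootable. This field-count check is my own addition, not part of the original procedure; the demo runs against a sample file built from the entries above, while on the real host you would point check_vfstab at /etc/vfstab.

```shell
# Each non-comment, non-blank vfstab line must have exactly 7 fields.
# check_vfstab prints any malformed entry and exits non-zero.
check_vfstab() {
  awk '!/^#/ && NF && NF != 7 { print "bad entry: " $0; bad = 1 }
       END { exit bad }' "$1"
}

# Demo against a sample file; on the host: check_vfstab /etc/vfstab
f=$(mktemp)
cat > "$f" <<'EOF'
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
/dev/dsk/c4t500000E01398B4B0d0s1 - - swap - no -
swap - /tmp tmpfs - yes -
EOF
check_vfstab "$f" && echo "vfstab fields OK"
```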
    host007:root:/etc/init.d$ init 6


    MIRROR THE DISKS

    OS disks:

    AVAILABLE DISK SELECTIONS:
    0. c4t500000E01398A4E0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> (c3t0d0) ZFS
    /scsi_vhci/ssd@g500000e01398a4e0

    1. c4t500000E01398B4B0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> (c3t1d0) SVM
    /scsi_vhci/ssd@g500000e01398b4b0


    1) We destroy the old ZFS pools:

    zpool destroy base_pool
    zpool destroy dump_pool
    zpool destroy rpool
    zpool destroy swap_pool


    2) Relabel the freed disk:

    format -e c4t500000E01398A4E0d0
    label
    (answer the prompts; option: 0)

    3) Make the partition tables of both disks identical by copying the VTOC with prtvtoc and fmthard:

    # prtvtoc /dev/rdsk/c4t500000E01398B4B0d0s2 | fmthard -s - /dev/rdsk/c4t500000E01398A4E0d0s2

    4) Check the result with format.


    5) Update the metadb with replicas on the new disk:

    /usr/sbin/metadb -a -c3 /dev/dsk/c4t500000E01398A4E0d0s3 /dev/dsk/c4t500000E01398A4E0d0s4

    metadb -i

    metainit d12 1 1 /dev/dsk/c4t500000E01398A4E0d0s0
    metainit d22 1 1 /dev/dsk/c4t500000E01398A4E0d0s1
    metainit d32 1 1 /dev/dsk/c4t500000E01398A4E0d0s5
    metainit d42 1 1 /dev/dsk/c4t500000E01398A4E0d0s6
    metainit d52 1 1 /dev/dsk/c4t500000E01398A4E0d0s7


    metattach d10 d12
    metattach d20 d22
    metattach d30 d32
    metattach d40 d42
    metattach d50 d52


    metastat | grep -i resync
    metastat | grep -i progress
