This discussion is archived
18 Replies · Latest reply: Aug 27, 2013 3:05 AM by user381877

lucreate zfs can't exclude or include FS, what to do?

user381877 Newbie
Somehow, I cannot find the topic anymore, so I repost my last message:

Even if this topic is older, it is still valid.

IMHO, the concept is wrong in this place!

Upgrading the global zone in a new BE should of course be possible without touching local zones.

There is a way missing to exclude local zones from lucreate - otherwise, you create snapshots of the local zones and patch global and local zones alike, which is not what you want!

In case this is not clear:

The local zones are running, so data are changing on them - maybe including databases, for example.
Having snapshots of them is totally pointless.

Instead, you want to create a BE for only the global zones, patch it, luactivate it and boot into it.

Only then, you can attach the local zones and update them to the current version.

So, it is totally pointless to create snapshots of local zones when creating a new BE.

This would only be of use if the local zones contain no dynamic data - or are down anyway.
  • 1. Re: lucreate zfs can't exclude or include FS, what to do?
    bobthesungeek76036 Pro
    992165 wrote:
    ....
    There is a way missing to exclude local zones from lucreate - otherwise, you create snapshots of the local zones and patch global and local zones alike, which is not what you want!
    Au contraire! I think you are not understanding how Solaris zones work. There is ONE kernel running on the system. The zones run under the same kernel. So how smart would it be to patch/upgrade the kernel and have the runtime libraries of the zone not patched? Including the zones in the BE is actually a feature and not a bug.

    If you want to patch w/o the zone then detach it and perform an upgrade on attach after patching/upgrading...
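
    A sketch of that detach / upgrade-on-attach flow, for illustration (the zone name myzone is a placeholder, and the actual patching step in the middle is elided):

    ```shell
    # Take the zone out of the picture before patching the global zone
    zoneadm -z myzone halt
    zoneadm -z myzone detach

    # ... patch/upgrade the global zone here ...

    # Re-attach with -u so the zone is updated to match the new global zone
    zoneadm -z myzone attach -u
    zoneadm -z myzone boot
    ```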
  • 2. Re: lucreate zfs can't exclude or include FS, what to do?
    user_donovan Newbie
    I have to agree with the original post in this thread. In creating a new BE, there should be an option to exclude non-global zones. Why? Out here in real life, shutting down ng-zones may not be an option, or may be difficult to schedule. The ability to create a new BE without any of the ng-zones would allow a sysadmin to plan a single downtime to get the global zone upgraded, and then deal with the non-global zones separately. The sysadmin could always use zone attach with upgrade, or move the zones to another host.

    If such an override option were allowed to exclude ng-zones on lucreate, the new BE could be created minus any ng-zone configuration, while the original BE would keep the ng-zone configuration, whether attached, detached, running, etc. A stiff warning would need to be issued with such an override option: some indication that the new BE would not have any ng-zones defined, and that proper planning for the list of ng-zones would be in order.

    Without such an override option to skip ng-zones, out here in the field we're faced with multiple downtimes for Live Upgrade... which sort of defeats the primary purpose of Live Upgrade.
  • 3. Re: lucreate zfs can't exclude or include FS, what to do?
    800381 Explorer
    Like bobthesungeek said - there's ONE kernel. It's not possible to upgrade just half of it.

    If you don't want to upgrade the zone, migrate it BEFORE you do the lucreate.

    If uptime is so critical, you do have more than one host and/or zone, right? And you're set up so you can have at least one down for patching/upgrade/hardware failure with no loss of availability?
  • 4. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

  • 5. Re: lucreate zfs can't exclude or include FS, what to do?
    cindys Pro

    You should be able to create a new BE by using lucreate with local zones attached, in a supported zones configuration. Live Upgrade was developed long before zones and then had to be reworked to support them, so we must offer a set of supported LU/zones configurations.

     

    The latest Solaris 10/ZFS/LU/zones configuration information is here:

     

    Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade) - Oracle Solaris ZFS Administratio…

     

    I hope the above link works. This editor was nice enough to convert it for me.

     

    Thanks, Cindy

  • 6. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

    Thanks for the reply, but there is no question that a BE can be created with zones and zpools.

    The question is, how to create one *without* them ... a way to *exclude* them when creating the BE.

     

    The zones and their zpools are live - what would you need a snapshot of them for?

     

    You just want a snapshot / BE for the global zone, so you can patch it already - meanwhile the zones are running and changing, so a snapshot would immediately be stale!

    You can only make a snapshot of them, when they are down! Otherwise, your snapshot is useless ...

     

    Currently, you need a downtime of all zones to create a BE for the global zone and be able to run "lucreate" without including the Zones.

    Then, you can start the Zones again and continue business ...

    Then, you can patch / upgrade the BE and need a second downtime for all zones, incl. the global zone, to boot into the new BE and then attach and boot the zones.

     

    This is one more downtime for the local zones than is really required!

    If you could exclude them, you would only need ONE downtime for all zones ...
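
    For concreteness, the two-downtime sequence described above might look like this as a command sketch (zone1 stands for each local zone; the patch path is a placeholder):

    ```shell
    # Downtime 1: stop the local zones so lucreate does not snapshot live zone data
    zoneadm -z zone1 halt

    lucreate -n newBE                           # create the new boot environment

    zoneadm -z zone1 boot                       # business continues while newBE is patched

    luupgrade -t -n newBE -s /path/to/patches   # patch the inactive BE

    # Downtime 2: switch to the patched BE (zones are attached after the reboot)
    luactivate newBE
    init 6
    ```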

     

    Was that more clear?

  • 7. Re: lucreate zfs can't exclude or include FS, what to do?
    cindys Pro

    Yes, your request is stated more clearly.

     

    And the problem with detaching and attaching the zones before and after the LU is that they are updated on attach, and you don't want them updated - or at least not updated automatically.

     

    Is this correct?

     

    Thanks, Cindy

  • 8. Re: lucreate zfs can't exclude or include FS, what to do?
    heider Newbie

    What is the purpose of creating a new BE that does not contain the zones? Am I correct in saying that your end goal is to patch your global and non-global zones?

     

    This should be a simple 4 step process with ZFS

     

    1. lucreate -n newbe

    2. luupgrade -t -n newbe -s patch_path (assuming you are patching)

    3. luactivate newbe

    4. init 6

     

    You mentioned constantly changing data such as database files. Your database should not be running in a critical filesystem such as /, /usr, /var, and /opt. Assuming it is located in a user-defined filesystem, it will not be copied to the new boot environment (it will be mounted to the new BE after activation). So there should not be any issue with data integrity. See the excerpt below from the lucreate man page:

     

    The lucreate command makes a distinction between the file systems that contain the OS - /, /usr, /var, and /opt - and those that do not, such as /export, /home, and other, user-defined file systems. The file systems in the first category cannot be shared between the source BE and the BE being created; they are always copied from the source BE to the target BE. By contrast, the user-defined file systems are shared by default. For Live Upgrade purposes, the file systems that contain the OS are referred to as non-shareable (or critical) file systems; other file systems are referred to as shareable. A non-shareable file system listed in the source BE's vfstab is copied to a new BE. For a shareable file system, if you specify a destination slice, the file system is copied. If you do not, the file system is shared.

     

    In the end, the only downtime involved is the time it takes to reboot.

  • 9. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie


     

    Mostly, I want to reduce the downtimes.

     

    Currently, to be able to create a new BE, I need to take the zones down.

     

    Then I can patch the BE.

     

    After that, I need system downtime to activate the new BE. In this step, the zones can be attached.

    If I could create the new BE without the need to take the zones down, I would reduce the downtimes from two to one.

    That would be great.

     

    I will answer heider's reply below in a broader way; maybe this will help - I need some time, as I am on a tight schedule already.

     

    Thanks!

  • 10. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

    Hello heider,

    thanks for your reply.

     

    Nobody I know of would patch zones the way you describe.

     

    Even if it is "only" /var, for example, it contains plenty of live data and logfiles - maybe even changed users and whatever else.

    In an enterprise environment, you do not want an older snapshot of the zones to be patched and put into use!

    You cannot control what part of the local zone *may* change between your snapshot and the final reboot / attach.

    You will *always* lose data - even if it is only the last login data from wtmp. In an enterprise environment, even this is critical data!

     

    What you wrote, always means that there is a gap between what was running last and what will be patched and used after the reboot / attach.

     

    So far, for this reason, I only know customers who use two downtimes - one to stop the local zones, JUST to be able to create a new BE for the global zone, and one to finally patch this BE, reboot, and after that attach the *still current* local zones.

    Only in this way do you not lose data!

     

    If you could exclude the local zones, this problem would not arise, and you could save one downtime for the local zones.

     

    What you wrote seems to me just a repeated recitation of the current situation - I still hope that someone will try to see the actual problems and the really simple solution ...

     

  • 11. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

    I tried a more graphical attempt:

     

    Your way:

     

    Status: GZ-A, LZ1-A, LZ2-A, LZ3-A, ...

    lucreate

    Status: GZ-B, LZ1-B, LZ2-B, LZ3-B, ...

    luactivate

    Status: GZ-A, LZ1-A, LZ2-A, LZ3-A, ...

     

    So, all zones pop back into state "A", from the moment the BE was created.

    All changes made to any zone after "lucreate" are lost!

     

     

    The logical, lossless way:

     

    Status: GZ-A, LZ1-A, LZ2-A, LZ3-A, ...

    lucreate without local zones

    Status: GZ-B, LZ1-B, LZ2-B, LZ3-B, ...

    luactivate

    Status: GZ-A, LZ1-B, LZ2-B, LZ3-B, ...

     

    So, only the global zone pops back into state "A", from the moment the BE was created.

    The more important local zones, which run the services, just continue from the last working online state - just what is needed.

    Only the global zone loses data - but this is the price we pay for Live Upgrade.

    But it is NOT necessary for the local zones! And it should not be ...
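
    A sketch of what the requested option could look like - note that the -Z flag is purely hypothetical and does not exist in lucreate, and zone1 / the patch path are placeholders:

    ```shell
    lucreate -Z -n newBE                        # HYPOTHETICAL: exclude local zones from the BE
    luupgrade -t -n newBE -s /path/to/patches
    luactivate newBE
    init 6                                      # the one and only downtime

    # After booting into newBE, attach the zones at their *last* running state
    zoneadm -z zone1 attach -u
    zoneadm -z zone1 boot
    ```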

  • 12. Re: lucreate zfs can't exclude or include FS, what to do?
    heider Newbie

    I will add a few more points which may help you understand the process. First off, the method that I listed is part of the documented Oracle Solaris 10 patching strategy:

    http://www.oracle.com/technetwork/articles/servers-storage-admin/solaris-patching-strategy-257476.pdf

     

    This document outlines the steps for upgrading and patching with Live Upgrade: http://www.oracle.com/technetwork/server-storage/solaris/solaris-live-upgrade-wp-167900.pdf

     

    It sounds like you are already familiar with the commands and steps involved, so I am not saying you need to review these documents. I am just listing them because none of them state that it is necessary to shut down zones prior to the Live Upgrade process. They do state that the only downtime involved is during the reboot.

     

    Regarding your question on losing data and the state between the boot environments becoming out of sync, I would recommend that you review the man page for "synclist". The /etc/lu/synclist file contains directives for which files should be synchronized between the boot environments. This synchronization list contains, by default, items such as /var/adm/messages, /etc/passwd, and /etc/shadow. This synclist file is used to maintain the important system files between activations of new boot environments. If you have critical application/database data that needs to be in the most current state between boot environments, then that data should be located (as I mentioned above) in a user-defined filesystem.
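
    For illustration, /etc/lu/synclist entries pair a pathname with an action keyword (OVERWRITE, APPEND, or PREPEND); the exact default contents vary by release, so treat these lines as a sketch:

    ```
    /etc/passwd         OVERWRITE
    /etc/shadow         OVERWRITE
    /var/adm/messages   APPEND
    ```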

  • 13. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

    I have to admit that I did not know about "synclist", and it sounds like a somewhat usable workaround.

     

    Still, by its nature it is only a workaround, as you need to be quite sure which files have changed or may have changed!

    It works ... somehow.

     

    The easiest solution for getting an attached / patched version of the *last* running state of the local zones is quite simple:

     

    Just halt the zone, attach it, boot it.

    Like you would with a regular physical server.

     

    And this could *easily* be done by adding an option to exclude running zones from the "lucreate" command.

     

    The current implementation is much more complicated and unstable, and this without any real technical reason.

    Just because "things are as they are".

  • 14. Re: lucreate zfs can't exclude or include FS, what to do?
    user381877 Newbie

    I tested the recommended approach and directly stumbled upon problems:

     

    After creating a new BE, while Zones are up and running, the new snapshots compromise the Veritas Cluster monitoring:

     

    2013/07/09 15:45:29 VCS WARNING V-16-10001-20004 (servername) ZpoolDMP:test1_msc_mod:monitor:Warning: The filesystem test1_msc/zone-newBE with mountpoint /opt/zones/zonename-newBE is not mounted. Administrative action may be required

     

    So, this simply does not work.

     

    Creation of a new BE with zones running is not possible out of the box when running within a Veritas cluster - which many, many customers do.

     

    And yes, I expect Live Upgrade to work in a clustered environment ...

     

    So, back again:

     

    Can we PLEASE add a "-Z" option to exclude any local zones from "lucreate"?

    Thank you.

