
"zfs import" takes a very long time...

bobthesungeek76036 Pro
My customer has some ZFS filesystems on CLARiiON storage. They take snapshots of the filesystems and mount them up on an alternate server for backups. The zpools are <b>not</b> exported when the snapshot is taken. The "zpool import" takes a tremendous amount of time, in my opinion. Here's a "ps" listing of the zpool processes on one of the servers after mounting:

<pre>$ ps -ef|grep zpool-<hidden> | grep -v grep | sort
root 25543 0 0 00:58:15 ? 4:55 zpool-<hidden>-v01
root 25849 0 0 01:04:23 ? 39:37 zpool-<hidden>-v02
root 26115 0 0 01:09:49 ? 1:38 zpool-<hidden>-v03
root 26320 0 0 01:15:29 ? 2:01 zpool-<hidden>-v04
root 26585 0 0 01:20:48 ? 2:17 zpool-<hidden>-v05
root 26876 0 0 01:26:01 ? 4:02 zpool-<hidden>-v06
root 27645 0 0 01:36:06 ? 72:33 zpool-<hidden>-v07
root 28405 0 0 01:48:04 ? 1:29 zpool-<hidden>-v08
root 28734 0 0 02:01:22 ? 2:10 zpool-<hidden>-v09
root 28917 0 0 02:13:52 ? 1:41 zpool-<hidden>-v10
$</pre>

As you can see from the process start times, it is taking more than 10 minutes in some instances to import a pool. It took almost an hour and a half to import ten (10) zpools, and there are quite a few zpools on the backup server:

<pre>$ zpool list | tail +2 | wc -l
42
$</pre>

Here's the command I am using to perform the import:

<pre>zpool import -R <subdir-for-backups> -f <poolname></pre>
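
For what it's worth, here is a minimal sketch of how the imports could be wrapped to log per-pool times directly, rather than inferring them from the zpool process start times (the altroot path and pool names below are placeholders):

<pre>#!/bin/bash
# Sketch only: import each pool under the alternate root and report how long
# each "zpool import" takes. ALTROOT and POOLS are placeholder values.
ALTROOT="/backups"
POOLS="pool-v01 pool-v02 pool-v03"

for p in $POOLS; do
    echo "$(date '+%H:%M:%S') importing $p"
    time zpool import -R "$ALTROOT" -f "$p"
done</pre>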

I did a little scan and it looks like most of the zpools are at version 22.

Backup server is at U6:

<pre>$ head -1 /etc/release
Solaris 10 10/08 s10s_u6wos_07b SPARC
$</pre>

Any ideas on how to speed up the imports?
  • 1. Re: "zfs import" takes a very long time...
    bobthesungeek76036 Pro
    Sorry to bump such an old thread, but this issue has gotten much worse for me. We moved the server's storage to Symmetrix and the import times have increased dramatically. Look at the start times for the zpool processes; this import script started at 00:30:

    <pre>$ ps -ef|grep zpool-<hidden> | grep -v grep | sort +4
    root 23710 0 0 01:25:07 ? 0:52 zpool-<hidden>-v01
    root 4460 0 0 01:51:46 ? 15:14 zpool-<hidden>-v02
    root 16636 0 0 02:28:37 ? 1:01 zpool-<hidden>-v03
    root 26917 0 0 03:52:42 ? 9:35 zpool-<hidden>-v04
    root 20036 0 0 04:49:59 ? 0:47 zpool-<hidden>-v05
    root 6927 0 0 05:43:07 ? 0:47 zpool-<hidden>-v06
    root 11920 0 0 06:20:54 ? 0:48 zpool-<hidden>-v07
    root 20876 0 0 07:18:55 ? 3:16 zpool-<hidden>-v08
    root 27639 0 0 08:09:40 ? 51:43 zpool-<hidden>-v09
    root 14623 0 0 08:29:15 ? 0:47 zpool-<hidden>-v10
    root 1524 0 0 08:47:05 ? 0:47 zpool-<hidden>-v11
    root 19357 0 0 09:07:30 ? 0:47 zpool-<hidden>-v12
    $</pre>

    Over an hour to import one zpool (v05)? That's insanely slow in my opinion. The server does have hundreds of LUNs configured. If we can't get this resolved, it will preclude us from using ZFS in the future.

    Please help!!!
  • 2. Re: "zfs import" takes a very long time...
    cindys Pro
    Hi Bob,

    I'm on vacation and I don't have access to my usual resources...

    This looks like a problem looking up the devices (remember, simpler device configurations are always
    better), but you can rule out any hanging file system issues by trying to import one of these pools
    without mounting the file systems. Check the man page for this option.
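
    For example (assuming your release's zpool man page documents the -N, no-mount, import option), something
    like this would show whether the time is going into the device scan or into mounting the file systems:

    <pre># sketch only: -N availability depends on the ZFS/Solaris release
    time zpool import -N -f <poolname>
    zpool export <poolname>
    time zpool import -R <subdir-for-backups> -f <poolname></pre>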

    We also did some work to speed up pool device lookups, and your OS info suggests you might not have
    the fix for this problem. I can look this up next week.

    Thanks,

    Cindy
  • 3. Re: "zfs import" takes a very long time...
    cindys Pro
    If you have ruled out hanging file systems, then my suggestion is to upgrade to at least s10u9,
    which includes zpool import fixes. You might consider using S11, which has more zpool import
    and device lookup improvements.

    Thanks, Cindy
  • 4. Re: "zfs import" takes a very long time...
    maxiking14 Newbie
    You wrote: "The server does have hundreds of LUNs configured..."
    Maybe that's the problem...

    I had the same problem: a lot of devices configured.

    <pre>ls -la /dev/dsk/ | wc -l
    41471</pre>

    A zpool import took 45 minutes.

    I did the following:

    1) Create a new directory that contains only the first slice of each disk (we are using EMC PowerPath):

    <pre>ls -la /power
    total 421
    drwxr-x--- 2 root root 403 Mar 18 2010 .
    drwxr-xr-x 54 root root 71 Aug 17 08:40 ..
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower101a -> /dev/dsk/emcpower101a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower102a -> /dev/dsk/emcpower102a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower103a -> /dev/dsk/emcpower103a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower104a -> /dev/dsk/emcpower104a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower105a -> /dev/dsk/emcpower105a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower106a -> /dev/dsk/emcpower106a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower107a -> /dev/dsk/emcpower107a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower108a -> /dev/dsk/emcpower108a
    lrwxrwxrwx 1 root root 21 Jan 17 2011 emcpower109a -> /dev/dsk/emcpower109a</pre>

    This directory contains one symlink for every disk...

    2) Do the zpool import with that directory:

    <pre>zpool import -d /power zpoolname</pre>

    Now the zpool import only has to check ~200 devices in /power instead of ~40,000 in /dev/dsk!

    --> The import now takes ~1 minute!
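
    The -d directory can be combined with the other import options as well; for example (a sketch only, reusing the altroot placeholder from the original post):

    <pre># restrict the device scan to /power while still importing under an altroot
    zpool import -d /power -R <subdir-for-backups> -f <poolname></pre>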

    regards,
    maxiking14
  • 5. Re: "zfs import" takes a very long time...
    bobthesungeek76036 Pro
    Well you got me beat on the number of devices:

    <pre>$ ls -la /dev/dsk/ | wc -l
    26651
    $</pre>

    I understand your solution; I just wish there were a more elegant way to handle it. I'm also dealing with EMC LUNs and PowerPath, and my device names change all the time (they are BCV/Clone devices, and every time they add/delete/change, my PowerPath devices get renumbered...). I'll play around with this concept and see if I can make something manageable.

    Right now we only have two servers that are utilizing zpools but we have about a dozen waiting in the wings to convert from VxFS to ZFS. If I can't find a suitable solution, I may have to change my plans...
  • 6. Re: "zfs import" takes a very long time...
    bobthesungeek76036 Pro
    Well, using the EMC "inquiry" tool, I was able to write a job that repopulates a device directory with valid PowerPath devices that zpool seems to be able to use (I will run this every night so it picks up device name changes):

    <pre>
    #!/bin/bash
    #
    # Rebuild /dev/zdsk with symlinks to the current PowerPath devices so that
    # "zpool import -d /dev/zdsk" only has to scan these devices.
    declare INQ="/local/bin/inq"
    declare INQ_OPS=" -no_dots -f_powerpath -identifier device_name "

    # List the emcpower devices reported by inq, skipping the VxVM DMP paths
    # and converting rdsk paths to their dsk equivalents.
    declare -a DSKLST=(`$INQ $INQ_OPS | grep emcpower | grep -v "/dev/vx/rdmp/" | sed s/rdsk/dsk/g | cut -f1 -d' '`)

    # Create the directory on the first run; otherwise clear out stale links.
    if [ ! -d /dev/zdsk ]; then
        echo "/dev/zdsk not found ... creating"
        mkdir /dev/zdsk
    else
        rm -rf /dev/zdsk/*
    fi

    # Re-create one symlink per device, named after the device node itself.
    for i in ${DSKLST[@]}
    do
        SOURCE=$i
        TARGET="/dev/zdsk/${i##*/}"
        ln -s ${SOURCE} ${TARGET}
    done
    </pre>
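
    A root crontab entry along these lines would cover the nightly run (the script path and time here are just examples):

    <pre># rebuild /dev/zdsk shortly before the backup imports kick off
    45 23 * * * /local/bin/rebuild_zdsk.sh</pre>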

    Using the "/dev/zdsk" directory, it does speed up the imports quite a bit. The only issue I noticed is that the alternate device location is retained when you do a zpool status:

    <pre>
    $ zpool status server001_rdo-c-v02
      pool: server001_rdo-c-v02
     state: ONLINE
     scrub: none requested
    config:

            NAME                      STATE     READ WRITE CKSUM
            server001_rdo-c-v02       ONLINE       0     0     0
              /dev/zdsk/emcpower705c  ONLINE       0     0     0

    errors: No known data errors
    $
    </pre>
