My customer has some ZFS filesystems on CLARiiON storage. They take snapshots of the filesystems and mount them on an alternate server for backups. The zpools are <b>not</b> exported when the snapshot is taken. In my opinion, the "zpool import" takes a tremendous amount of time. Here's a "ps" listing from one of the servers after mounting:
As you can see from the start times of the processes, it is taking more than 10 minutes in some instances to import a pool. It took almost an hour and a half to import ten (10) zpools. There are a few zpools on the backup server:
<pre>$ zpool list | tail +2 | wc -l</pre>
Here's the command I am using to perform the import:
Sorry to bump such an old thread, but this issue has gotten much worse for me. We moved the server's storage to Symmetrix and the import times have deteriorated further. Look at the start times for the zpool processes; this import script started at 00:30:
Over an hour to import one zpool (v05)? That's insanely slow in my opinion. The server does have hundreds of LUNs configured. If we can't get this resolved, it will preclude us from using ZFS in the future.
I'm on vacation and I don't have access to my usual resources...
This looks like a problem looking up the devices (remember, simpler device configurations are always better), but you can rule out any hanging file system issues by trying to import one of these pools without mounting the file systems. Check the man page for this option.
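To make the suggestion above concrete: this is a minimal sketch, assuming your release's zpool(1M) supports the <code>-N</code> flag (do not mount any datasets on import) -- check the man page, since availability varies by Solaris update. The pool name <code>v05</code> is taken from the thread; the command is echoed here rather than run:

```shell
# Hypothetical test: import one slow pool without mounting its datasets.
# -N is an assumption about your zpool release; verify with `man zpool`.
# If the import is still slow with -N, device lookup (not a hung mount)
# is the bottleneck.
POOL=v05
CMD="zpool import -N $POOL"
echo "$CMD"   # printed instead of executed here; drop the echo to run it
```

Timing this (e.g. with <code>ptime</code>) against a normal import separates device-scan time from mount time.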
We also did some work to speed up pool device lookups, and from your OS info it looks like you might not have the fix for this problem. I can look this up next week.
If you have ruled out hanging file systems, then my suggestion is to upgrade to at least s10u9, which includes zpool import fixes. You might also consider S11, which has further zpool import and device lookup improvements.
I understand your solution; I just wish there were a more elegant answer to the problem. I'm also dealing with EMC LUNs and PowerPath, and my device names change all the time (they are BCV/Clone devices, and every time they add, delete, or change one, my PowerPath devices get renumbered). I'll play around with this concept and see if I can make something manageable.
Right now we only have two servers that are utilizing zpools but we have about a dozen waiting in the wings to convert from VxFS to ZFS. If I can't find a suitable solution, I may have to change my plans...
Well, using the EMC "inquiry" tool I was able to write a job that repopulates a device directory with valid PowerPath devices, which zpool seems to be able to use (I will run this every night so it picks up device name changes):
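The poster's actual job isn't shown, but the idea can be sketched as follows. Everything here is an assumption for illustration: a sample device list stands in for the filtered <code>inq</code> output, and a temporary directory stands in for the real device directory. The point is that <code>zpool import -d &lt;dir&gt;</code> scans only that directory, so a nightly rebuild of symlinks to the current PowerPath pseudo-devices sidesteps both the renumbering problem and the full /dev scan:

```shell
# Minimal sketch of a nightly device-directory rebuild -- not the
# poster's actual script. The device names and the temp dir below are
# placeholders; the real job would derive the list from `inq` output.
DEVDIR=$(mktemp -d)                      # stand-in for a persistent dir
printf '%s\n' /dev/dsk/emcpower1c /dev/dsk/emcpower2c |
while read -r dev; do
    # one symlink per valid PowerPath device, named after the device node
    ln -s "$dev" "$DEVDIR/$(basename "$dev")"
done
ls "$DEVDIR"
# zpool then scans only this directory instead of all of /dev/dsk:
#   zpool import -d "$DEVDIR" <poolname>
```

Because the directory is rebuilt from scratch each night, stale links from renumbered BCV/Clone devices disappear automatically.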