I have a ZFS 7320 storage appliance connected via Brocade fiber switches to clustered Linux servers. There is one ZFS controller with four fiber connections, and each server has two fiber connections. The switches are redundant, so the ZFS appliance has two fiber connections to each switch and each server has one fiber connection to each switch.

The ZFS appliance manual does not have a multipath configuration for OEL, Red Hat, or any other similar 6.x distribution. Oracle support and sales have not yet been able to find a multipath config for 6.x and suggested I post here. Since I had issues that seem to be resolved with new ZFS appliance software, I want to verify my multipath config before I put the unit back into production; I would feel much better with a known good config.
This is what is in the manual for 5.x Linux:
product "Sun Storage 7310" or "Sun Storage 7410" (depending on storage system)
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
This is what seems to work with 6.x:
product "ZFS Storage 7320"
getuid_callout "/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
DM Multipath installs some documentation with a sample configuration that lists many different storage systems, including SUN. Perhaps you can find an appropriate or similar configuration there.
# cat /usr/share/doc/device-mapper-multipath-*/multipath.conf.defaults
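For example, something like this should pull out just the Sun entries for comparison (the context length after each match is only a guess):

# grep -i -A 15 '"SUN"' /usr/share/doc/device-mapper-multipath-*/multipath.conf.defaults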
Good idea. Unfortunately, the Sun hardware in those examples is quite old and very different from the ZFS appliance, which is really a server with SSDs in it connected to a JBOD, a "hybrid" between SSD and standard spinning disks.
Nice to see a proper redundant storage fabric layer for once on OTN. ;-)
We've been in a similar position using multipath with a custom storage layer (InfiniBand based). Like you, we hunted for a best-match config and then stress-tested the fabric layer using fio (Flexible I/O Tester). And don't forget to pull some fibre cables, or shut down / power off one of the Fibre Channel switches, while the fio tests are running.
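As a starting point, something along these lines works well for hammering all paths at once; the device name, block size, and runtime are just placeholders for whatever your multipath LUN shows up as:

# 70/30 random read/write mix for 10 minutes against the multipath device
# WARNING: this writes raw data to the LUN, so only point it at a scratch/test LUN
fio --name=fabric-stress --filename=/dev/mapper/mpatha \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
    --bs=8k --iodepth=32 --numjobs=4 --runtime=600 --time_based \
    --group_reporting

While that is running, watch multipathd's view of the paths as you pull cables or power off a switch, and make sure throughput recovers once the failed paths come back.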