I've posted this question to the OpenSolaris storage mailing list already, but it looks rather inactive, so I am reposting it here:
I have a Solaris Express 11 playbox with COMSTAR on an Emulex LP9802DC in target mode, which is used as a storage backend in a 2 Gb/s FC environment. It exposes ZFS volumes from the "tank" pool. The FC client hosts are Linux machines connecting through QLogic QLA HBAs based on the ISP2312 chip. When accessing the target, I can see considerable latency for each command issued. Running

```
dd if=/dev/sdc bs=1M of=/dev/null
```

along with

```
iostat -x 1 -m /dev/sdc
```

on the Linux machines gives me the following numbers most of the time:
What I've found out so far is that the rate is mainly limited by latency and request size: the maximum queue length is 2 and the request size is 128 KiB, so with a latency of 22 ms you get 2 commands / 22 ms × 1000 ms/s ≈ 90 commands per second. Multiply that by 128 KiB per command and you arrive at a maximum throughput of roughly 11 MB/s. I have no idea how to effectively change the queue length and request size parameters, so I am stuck trying to find out what is causing the latency.
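For what it's worth, the Linux block layer exposes both knobs through sysfs, so they can at least be inspected and, within hardware limits, raised; whether the HBA driver and the target actually honour larger values is another question. A minimal sketch, assuming the device is still /dev/sdc and running as root (the new values are only guesses):

```
# current per-device queue depth and maximum request size
cat /sys/block/sdc/device/queue_depth
cat /sys/block/sdc/queue/max_sectors_kb

# try raising them - the driver may silently clamp these
echo 16  > /sys/block/sdc/device/queue_depth
echo 512 > /sys/block/sdc/queue/max_sectors_kb
```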
Every 5 seconds, when Solaris flushes its caches and the ZIL, I see a considerable latency decrease and thus a throughput increase on the Linux host:
Sorry, I forgot to update this question's status here:
The power management (especially the CPU power management) on the Solaris machine was interfering so badly that all system latencies were ridiculously high. Disabling it (editing /etc/power.conf, setting "autopm" and "cpupm" to "disable", and running pmconfig afterwards) helped a lot.
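For reference, this is roughly what the relevant lines in /etc/power.conf looked like after the change (the rest of the file was left as shipped):

```
# /etc/power.conf (excerpt)
# disable automatic device power management and CPU power management
autopm          disable
cpupm           disable
```

The new settings were then applied by running pmconfig as root:

```
pmconfig
```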