What version of Solaris are you using? If it's not OpenSolaris, that would be worth a try so you can use the COMSTAR iSCSI framework.
I believe that's supposed to have much better performance.
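For reference, a rough sketch of what setting up a COMSTAR target on OpenSolaris looks like. The pool/volume names are just examples, and `<lu-guid>` is a placeholder for the GUID that `stmfadm create-lu` prints:

```shell
# Sketch only -- COMSTAR target setup on OpenSolaris; names are examples.
svcadm enable -r svc:/network/iscsi/target:default  # start the COMSTAR iSCSI target service
zfs create -V 50g tank/iscsivol                     # example zvol backing store
stmfadm create-lu /dev/zvol/rdsk/tank/iscsivol      # register the zvol as a SCSI logical unit
stmfadm add-view <lu-guid>                          # expose the LU (GUID printed by create-lu)
itadm create-target                                 # create a target with an auto-generated IQN
```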
Both machines are running Solaris 10 5/09 (Update 7). I found a couple of sites that recommend using OpenSolaris with the new COMSTAR framework.
It's just that I would prefer to get comments from other users who have tried iSCSI on Solaris 10 before installing OpenSolaris, so I can be sure that further testing and tuning isn't worthwhile.
I increased the number of sessions for the target on the initiator to 4, which almost doubled the performance.
But that's still far from 10G. Does anybody know of a limit on the number of sessions per target?
Four was the highest value iscsiadm accepted.
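For anyone trying the same thing, a sketch of the commands involved on a Solaris 10 initiator (the target IQN below is a placeholder):

```shell
# Sketch only -- raise the configured session count for one target (IQN is a placeholder).
iscsiadm modify target-param -c 4 iqn.1986-03.com.sun:02:example-target
# Verify what the initiator negotiated:
iscsiadm list target-param -v iqn.1986-03.com.sun:02:example-target
```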
Well, you're lucky you're at least at Solaris 10 5/09. Prior to that release, Solaris 10 iSCSI performance had real problems.
Still, it's probably worth patching up to date or running through the recommended patch set,
just in case there's something that helps.
I'm working with a USS7410C with 10Gb Sun Ethernet cards and I'm not
able to get more than 100MB/sec (write).
I think 10Gb cards unmask iSCSI throughput problems that are otherwise hidden by
1Gb onboard interfaces (keep in mind that 100MB/sec is about the limit of a 1Gb interface).
We have an escalation open with Sun support, but we are still working on it...
Were you able to get higher than 100MB/s?
The answer could be: don't use ZFS volumes.
I have an SSD and a plain Seagate disk, both SATA, both 74.53GB.
If I use those disks to create a pool, make volumes on the pool, and then
use the shareiscsi property to create a target (and there is plenty of documentation
describing that as the easy way to do it), the performance is terrible.
If, on the other hand, I use iscsitadm to make targets out of the two disk devices directly, with commands like
iscsitadm create target --type raw -b /dev/dsk/c4t0d0 tgt-sata0
iscsitadm create target --type raw -b /dev/dsk/c4t1d0 tgt-sata1
(iscsitadm requires a trailing target label; tgt-sata0/tgt-sata1 are arbitrary names)
and then create the pool on the initiator, the performance is entirely different!
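For comparison, the "easy way" being criticized above looks roughly like this on Solaris 10 (pool and volume names are just examples):

```shell
# Sketch only -- the shareiscsi route on the target host; names are examples.
zpool create tank c4t0d0 c4t1d0    # pool on the target host
zfs create -V 70g tank/vol0        # zvol to export
zfs set shareiscsi=on tank/vol0    # Solaris 10 auto-creates an iSCSI target for the zvol
iscsitadm list target -v           # confirm the target appeared
```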
A bit more detail:
For example, I built a target out of a ZFS volume, then on the initiator wrote about 3.2GB to the pool using zfs send/recv. It took 26 minutes 24 seconds. Pathetic.
I almost gave up waiting!
So I exported these useless ZFS pools, removed the static configs, cleared the shareiscsi property, and started again.
If the target is instead built "manually" using iscsitadm, e.g. "iscsitadm create target --type raw -b /dev/dsk/c4t0d0" (dsk or rdsk doesn't seem to make much difference),
the same write test takes just 3 minutes 12 seconds.
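Those two timings can be sanity-checked with a bit of shell arithmetic (assuming 3.2GB ≈ 3200MB):

```shell
# Back-of-the-envelope throughput check for the two timings above.
zvol_secs=$((26 * 60 + 24))   # 1584 s for the zvol-backed target
raw_secs=$((3 * 60 + 12))     # 192 s for the raw-device target
mb=3200                       # ~3.2 GB written
echo "zvol:   $((mb / zvol_secs)) MB/s"    # ~2 MB/s
echo "raw:    $((mb / raw_secs)) MB/s"     # ~16 MB/s
echo "factor: $((zvol_secs / raw_secs))x"  # ~8x slower with zvols
```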
So in this example, on Solaris 10 10/09 (x86), performance degrades by a factor of 8 if you use ZFS volumes. Nice idea, easy to use, but terrible overhead.
At least for SATA disks.
Incidentally, for a SATA SSD the factor is not as bad (5 min vs 3 min). But an ordinary disk like the ST3808110AS is 8 times slower with ZFS volumes.
No idea why, but it's easily reproducible.
So, what do you see if you don't use ZFS volumes but instead use the disk device itself (the whole disk, remember, not a slice)?
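The initiator side of that setup would look something like this (the portal address is a placeholder, and the device names depend on what the LUNs enumerate as):

```shell
# Sketch only -- initiator side; IP and device names are placeholders.
iscsiadm add discovery-address 192.168.1.10    # point at the target's portal
iscsiadm modify discovery --sendtargets enable # enable SendTargets discovery
devfsadm -i iscsi                              # make the new LUNs visible as devices
# Then build the pool from the iSCSI LUNs (device names will differ):
zpool create itank c5t600...d0 c5t601...d0
```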
For both target and initiator I was using Solaris 10 10/09 (x86) on a v20z with a 3rd-party eSATA card. The target's SunOS kernel was recently patched to 5.10 Generic_142901-10.