I tested the volume-based iSCSI method, and performance-wise it was abysmal.
If it had only been 50% slower, we could always have bought another node, but the performance was truly pathetic.
My makeshift benchmark consists of cp:ing 10 GB of random test data from the local disk to the HA mirror of iSCSI disks, with a sync at the end.
Volume-based iSCSI:       17m25.932s total write time,  9.8 MB/s
XXd0s0 whole-disk iSCSI:   2m13.468s total write time,   77 MB/s
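For anyone who wants to reproduce this, the benchmark boils down to roughly the sketch below. SRC and DST are placeholders (the real destination was the mounted HA iSCSI mirror), and SIZE_MB was 10240 in the actual run:

```shell
#!/bin/bash
# Makeshift throughput benchmark: cp random data to the target, sync at the end.
# SRC/DST are placeholders -- point DST at the mounted iSCSI HA mirror.
SIZE_MB=${SIZE_MB:-64}                      # the actual run used 10240 (10 GB)
SRC=$(mktemp /tmp/benchsrc.XXXXXX)
DST=${DST:-$(mktemp -d /tmp/benchdst.XXXXXX)}

# Generate the random test data up front so only the copy is timed.
dd if=/dev/urandom of="$SRC" bs=1048576 count="$SIZE_MB" 2>/dev/null

# Time the copy plus a sync -- this is where the numbers above come from.
time ( cp "$SRC" "$DST/testdata" && sync )

rm -f "$SRC" "$DST/testdata"
```

The sync matters: without it, cp returns as soon as the data is in the page cache and the numbers are meaningless.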
I set up the test configuration as zpool -> volume -> iSCSI -> EFI partition, following these instructions:
www.tokiwinter.com/solaris-cluster-4-1-part-two-iSCSI-quorum-server-and-cluster-software-installation/
I made the volume 5% smaller than the available space to account for metadata, spare blocks, etc.
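For reference, the volume setup was along these lines (COMSTAR commands from memory, so double-check the man pages; the pool and volume names are placeholders, and the size is the "5% smaller" rule applied to a 146 GB disk):

```shell
# Placeholder pool/volume names. Leave ~5% headroom for metadata and
# spare blocks: 146 GB * 0.95 =~ 138 GB.
zfs create -V 138g tank/iscsivol

# Export the zvol over iSCSI via COMSTAR.
stmfadm create-lu /dev/zvol/rdsk/tank/iscsivol
stmfadm add-view <LU-GUID-from-create-lu>     # make the LU visible to initiators

# Enable the target service and create a target.
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
```

The `<LU-GUID-from-create-lu>` is whatever GUID `stmfadm create-lu` prints; I haven't reproduced my exact view/host-group setup here.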
I did a little digging into my current partition tables, and there is corruption/weirdness in both the XXXd0 and XXd0s0 cases.
The XXXd0s0 configuration has not seen many reads or writes; that's why it still appears to be OK.
I'm convinced that in six months it will be in just as bad a condition as the XXXd0 configuration.
prtvtoc:ing one of the XXXd0 disks shows:
*                              First                Sector                 Last
* Partition  Tag  Flags       Sector                 Count               Sector  Mount Directory
       0      24    00           256               524288               524543
       1       4    00        524544            286198368            286722911
       8      11    00     286722912                16384            286739295
      24     255  1030             0  9007199254741057538  9007199254741057537
      28     255  6332  3256158825667702840  3906306445338542081  7162465271006244920
      29     255  3038  3559300795614704941  18143602103762549516  3256158825667702840
I was impressed by partition 24 claiming a size of 4194304000.00 TB, roughly 4.6 zettabytes. That's a great compression ratio for a 146 GB drive.
Has anyone in the community successfully used iSCSI backed by the raw physical disk, without zpools and volumes in between?
If you have, please share your results and configuration. I’ve hit the wall with this one.