
    What logical device for COMSTAR whole disk iSCSI?

    997590

      Hi,

       

      We have a poor man’s cluster with data redundancy through locally mirrored iSCSI disks.

      Solaris 11.1 x86 + Solaris Cluster 4.1

       

      Most of the instructions I found describe mirroring the physical disks on the SAN, creating a ZFS volume, and exporting that volume over iSCSI.

      We’d like to use the whole disk in raw mode without any filesystem between the physical SAS disks and the iSCSI target.

       

      What is the correct logical device to pass to stmfadm create-lu?
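
      For reference, the COMSTAR steps look roughly like this on our side, with the whole-disk raw device as the LU data file (the device path and the GUID are placeholders, not our exact values):

          itadm create-target
          stmfadm create-lu /dev/rdsk/c0tXXXd0
          stmfadm add-view <GUID printed by create-lu>
          stmfadm list-lu -v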

       

      One of our disk pairs was configured with the LU data file /dev/rdsk/c0tXXXd0.

      After a few months of use, the zpool has become degraded and a scrub fails to fix the problem. zpool status claims both disks are degraded, but a smartctl self-test gives both physical disks a clean bill of health.
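
      Roughly, the checks amount to this (the pool name is a placeholder, and the smartctl device type may differ on other setups):

          zpool status -v hapool                           # both mirror sides show up as DEGRADED
          zpool scrub hapool                               # completes, but the pool stays degraded
          smartctl -d scsi -t long /dev/rdsk/c0tXXXd0      # long self-test passes on both disks
          smartctl -d scsi -l selftest /dev/rdsk/c0tXXXd0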

      We used this zpool and its zones for testing automatic failover during node failure, so it is quite possible that the pool was corrupted in the course of those tests and the logical device selection has nothing to do with the problem.

       

      We have another pair of disks configured with /dev/rdsk/c0tXXXd0s0

      I benchmarked the two configurations by copying data to a dataset on the mirrored iSCSI disks.

      /dev/rdsk/c0tXXXd0s0 gave a result of 80 MB/sec while the /dev/rdsk/c0tXXXd0 configuration gave a result of 50 MB/sec.

      This may simply be because the /dev/rdsk/c0tXXXd0 configuration is degraded.

       

      I've seen some references to using c0tXXXd0p0, but I haven't had a chance to try it yet.

       

       

      Thanks for your help and advice.

      Mikko

        • 1. Re: What logical device for COMSTAR whole disk iSCSI?
          997590

          I tested the volume-based iSCSI method and, performance-wise, it is dismal.

          If it had been 50% slower, we could always have bought another node, but the performance was truly pathetic.

           

          My makeshift benchmark consists of cp'ing 10 GB of random test data from the local disk to the HA mirror of iSCSI disks, with a sync at the end.
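
          Something along these lines, with placeholder paths:

              time ( cp -r /export/testdata /hapool/bench && sync )    # ~10 GB random data, local disk -> HA iSCSI mirror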

          Volume-based iSCSI:           17m25.932s total write time, 9.8 MB/sec

          XXXd0s0 whole-disk iSCSI:     2m13.468s total write time, 77 MB/sec

           

          I made the test configuration as zpool -> volume -> iSCSI -> EFI partition, following these instructions:

          www.tokiwinter.com/solaris-cluster-4-1-part-two-iSCSI-quorum-server-and-cluster-software-installation/

          I made the volume 5% smaller than the available space to account for metadata, spare blocks, etc.
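
          Roughly, the storage-side setup looks like this (pool name, zvol name and size are placeholders; 138g is about 5% under the 146G raw capacity):

              zpool create tgtpool c0tXXXd0
              zfs create -V 138g tgtpool/lun0
              stmfadm create-lu /dev/zvol/rdsk/tgtpool/lun0
              stmfadm add-view <GUID printed by create-lu>
              itadm create-target

          The initiator side then sees the LUN as a disk and puts the EFI partition on top of it.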

           

           

          I did a little digging into my current partition tables and there is corruption/weirdness in both the XXXd0 and XXXd0s0 cases.

          The XXXd0s0 configuration has not seen a lot of reads or writes, which is why it still appears to be OK.

          I’m convinced that in six months it will be in just as bad a condition as the XXXd0 configuration.

           

          Running prtvtoc on one of the XXXd0 disks shows:

          *                          First     Sector    Last
          * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
                 0     24    00        256    524288    524543
                 1      4    00     524544 286198368 286722911
                 8     11    00  286722912     16384 286739295
                24    255    1030          0 9007199254741057538 9007199254741057537
                28    255    6332  3256158825667702840 3906306445338542081 7162465271006244920
                29    255    3038  3559300795614704941 18143602103762549516 3256158825667702840
          

           

          I was impressed by partition 24 being 4,194,304,000.00 TB, i.e. roughly a 4-zettabyte partition (9,007,199,254,741,057,538 sectors at 512 bytes each is about 4.6 x 10^21 bytes). That's a great compression ratio for a 146 GB drive.

           

          Has anyone in the community successfully used iSCSI without zpools and volumes on the physical disk?

          If you have, please share your results and configuration. I’ve hit the wall with this one.

           

           

          -Mikko