
    InfiniBand: bad outgoing throughput on 10Gbit HCA

    ANHU
      Hi all,

      I have a problem. I would like to use Solaris with ZFS to provide storage for a GlusterFS server over NFS.
      My test environment:
      Node 1: CentOS 6.2 with OFED 1.5.4.1
      Node 2: OpenIndiana 151a4 with native IB (I tried Solaris 11 before; both Solaris and OpenIndiana give the same result)

      If I run a test with iperf:
      From CentOS to OI the throughput is around 4.9 Gbit/s:

      [root@dev-cos62 ~]# iperf -c 1.1.1.2
      ------------------------------------------------------------
      Client connecting to 1.1.1.2, TCP port 5001
      TCP window size: 193 KByte (default)
      ------------------------------------------------------------
      [ 3] local 1.1.1.1 port 36173 connected with 1.1.1.2 port 5001
      [ ID] Interval Transfer Bandwidth
      [ 3] 0.0-10.0 sec 5.66 GBytes 4.86 Gbits/sec
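
      (For reference, the receiving node just needs an iperf server listening on the default port 5001, e.g.:)

      root@dev-oi:~# iperf -s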

      From OI to CentOS the throughput is only about 970 Mbit/s:

      root@dev-oi:~# iperf -c 1.1.1.1
      ------------------------------------------------------------
      Client connecting to 1.1.1.1, TCP port 5001
      TCP window size: 256 KByte (default)
      ------------------------------------------------------------
      [ 3] local 1.1.1.2 port 35841 connected with 1.1.1.1 port 5001
      [ ID] Interval Transfer Bandwidth
      [ 3] 0.0-10.0 sec 1.13 GBytes 968 Mbits/sec
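
      One way to check both directions in a single run is iperf's tradeoff mode; a minimal sketch (assuming the same iperf 2.x as above, with its standard flags):

      root@dev-oi:~# iperf -c 1.1.1.1 -r     # OI -> CentOS, then the reverse direction
      root@dev-oi:~# iperf -c 1.1.1.1 -P 4   # 4 parallel streams, to see whether a single stream is the limit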


      My IB hardware is a new Mellanox InfiniScale switch and some older 10Gbit Mellanox HCAs (MTLP23108).
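
      For reference, the negotiated link rate of these older HCAs can be checked on the CentOS/OFED side with the standard diagnostics; for a 4X SDR InfiniHost the active port should report "State: Active" and "Rate: 10":

      [root@dev-cos62 ~]# ibstat
      [root@dev-cos62 ~]# ibstatus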

      A second dd test with a ramdisk shared over NFS tells me the same thing: the maximum outgoing ("write") InfiniBand throughput from Solaris/OI is at most about 1 Gbit/s.
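
      Roughly how such a test can be set up (a sketch only; the export path, subnet and sizes are placeholders):

      # CentOS side: a tmpfs ramdisk, exported over NFS
      [root@dev-cos62 ~]# mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
      [root@dev-cos62 ~]# echo '/mnt/ramdisk 1.1.1.0/24(rw,no_root_squash)' >> /etc/exports
      [root@dev-cos62 ~]# exportfs -ra
      # OI/Solaris side: mount the share and write to it, so the data flows out of the Solaris/OI box
      root@dev-oi:~# mount -F nfs 1.1.1.1:/mnt/ramdisk /mnt/test
      root@dev-oi:~# dd if=/dev/zero of=/mnt/test/ddtest bs=1024k count=2048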


      Does anybody have any idea?
      Thanks and many greetings from Germany,
      Andreas
