This discussion is archived
2 Replies Latest reply: Sep 11, 2011 2:23 AM by 821215

Oracle VM 2.2.2 - TCP/IP data transfer is very slow

243859 Newbie
Hi, I've encountered a disturbing problem with OVM 2.2.2.

My dom0 network setup (4 identical servers):

eth0/eth1 (ixgbe, 10Gbit) -> bond0 (mode=1) -> xenbr0 -> domU vifs

Besides the bonding setup, it's a default OVM 2.2.2 installation.
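For anyone reproducing this, the bond/bridge wiring follows the usual ifcfg pattern; this is a sketch rather than my exact files (the miimon value in particular is just an example):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is identical apart from DEVICE)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
# mode=1 is active-backup, as in the setup above;
# the bond is enslaved to the Xen bridge xenbr0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr0
BONDING_OPTS="mode=1 miimon=100"
```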

Problem description:
TCP/IP data transfer speed:
- between two dom0 hosts: 40-50MB/s
- between two domU hosts within one dom0 host: 40-50MB/s
- between dom0 and a locally hosted domU: 40-50MB/s
- between any single domU and anything outside its dom0 host: 55KB/s;
something is definitely wrong here.

domU network config:
vif = ['bridge=xenbr0,mac=00:16:3E:46:9D:F1,type=netfront']
vif_other_config = []

I have a similar installation on Debian/Xen, and everything runs
fine there, i.e. I don't have any transfer-speed issues.

regards
Robert
  • 1. Re: Oracle VM 2.2.2 - TCP/IP data transfer is very slow
    Avi Miller Guru
    rdenis wrote:
    eth0/eth1 (ixgbe, 10Gbit) -> bond0 (mode=1) -> xenbr0 -> domU vifs
    TCP/IP data transfer speed:
    The ixgbe driver needs a larger TX queue length in Dom0 than the OVM2 default. Add the following line to /etc/modprobe.conf on your Oracle VM 2.2.2 servers:
    options netbk queue_length=1000
    Then reboot the servers. This sets all the VIF queue lengths to 1000 to match the physical NICs, up from the default of 32, which is far too small for 10Gb NICs.
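After the reboot it's worth confirming the setting took effect. A quick check (the interface name vif1.0 is an example; list yours with `ip link`):

```shell
# Backend VIFs should now report qlen 1000 instead of 32
ip link show vif1.0 | grep -o 'qlen [0-9]*'

# If the module exposes its parameters in sysfs, the value
# can also be read directly (path is an assumption):
cat /sys/module/netbk/parameters/queue_length 2>/dev/null
```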
  • 2. Re: Oracle VM 2.2.2 - TCP/IP data transfer is very slow
    821215 Newbie
    There is also an issue with the ixgbe driver in the stock OVM 2.2.2 kernel (bug:1297057 on MoS). We were getting abysmal results for receive traffic (at times measured in hundreds of kilobytes per second!) compared to transmit. It's not exactly the same as your problem, so don't blindly follow what I say below!

    --------------------------------------------------------------------------------
    ### "myserver01" is a PV domU on Oracle VM 2.2.2 server running stock kernel ###
    [root@myserver02 netperf]# ./netperf -l 60 -H myserver01 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver01.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec

     87380  16384  16384    60.23         1.46

    ---------------------------------------------------------------------------
    ### Repeat the test in the opposite direction, to show TX is fine from "myserver01" ###
    [root@myserver01 netperf]# ./netperf -l 60 -H myserver02 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver02.mycompany.co.nz (<IP>) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec

     87380  16384  16384    60.01      2141.59

    In my case, the workaround advised by Oracle Support was to run:
    ethtool -C eth0 rx-usecs 0
    ethtool -C eth1 rx-usecs 0

    against the slaves within your bond group. This gives noticeably better performance (in my case, up to ~1.2Gbit/s), and there are fixes coming in the next kernel that do even better (~2.2Gbit/s in my tests).
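To avoid hard-coding the slave names, the same workaround can be applied to whatever is currently in the bond. A sketch, assuming the bond is named bond0 as in the setup above:

```shell
# Read the slave list from sysfs and zero rx interrupt coalescing
# on each slave; rx-usecs 0 trades extra CPU/interrupt load for
# immediate processing of received packets.
for slave in $(cat /sys/class/net/bond0/bonding/slaves); do
    ethtool -C "$slave" rx-usecs 0
done
```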

