
Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond

984927 Newbie
Hi,
We are evaluating the early access OVM 3.2 release. The servers install OK, as does the manager. When we create VLAN groups and allocate IPs, the relevant bridges are created, and at that point networking appears to fall apart, with ~90% packet loss even when talking to the other server on the same VLANs. Removing the bridges and implementing the IP addressing directly on bond0.<vlan> appears to work just fine. Any ideas?

TIA

Julian

[root@ormovmprd01 ~]# ping 10.18.5.142
PING 10.18.5.142 (10.18.5.142) 56(84) bytes of data.
64 bytes from 10.18.5.142: icmp_seq=1 ttl=64 time=1.10 ms
^C
--- 10.18.5.142 ping statistics ---
22 packets transmitted, 1 received, 95% packet loss, time 21000ms
rtt min/avg/max/mdev = 1.101/1.101/1.101/0.000 ms


Network config:

[root@ormovmprd01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 250
Up Delay (ms): 500
Down Delay (ms): 500

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 33
Partner Key: 33393
Partner Mac Address: 00:23:04:ee:be:14

Slave Interface: eth4
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:82:de:b8
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth6
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:82:72:c8
Aggregator ID: 1
Slave queue ID: 0

[root@ormovmprd01 ~]# ifconfig -a
0004fb00106d6d9 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
inet addr:10.18.5.141 Bcast:10.18.7.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:172325121 errors:0 dropped:0 overruns:0 frame:0
TX packets:17219 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:141811819520 (132.0 GiB) TX bytes:792710 (774.1 KiB)

0a120000 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
inet addr:10.18.1.141 Bcast:10.18.3.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:28437188 errors:0 dropped:786432 overruns:0 frame:0
TX packets:1102456 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1762438784 (1.6 GiB) TX bytes:212902565 (203.0 MiB)

0004fb001081b09 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:54396539 errors:0 dropped:0 overruns:0 frame:0
TX packets:17196 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6474801572 (6.0 GiB) TX bytes:791016 (772.4 KiB)

bond0 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:484204171 errors:0 dropped:150901 overruns:14 frame:0
TX packets:1280693 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:420738079632 (391.8 GiB) TX bytes:236868371 (225.8 MiB)

bond0.21 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:54396539 errors:0 dropped:0 overruns:0 frame:0
TX packets:17196 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6474801572 (6.0 GiB) TX bytes:791016 (772.4 KiB)

bond0.300 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42367562 errors:0 dropped:0 overruns:0 frame:0
TX packets:1102472 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4666734277 (4.3 GiB) TX bytes:212905809 (203.0 MiB)

bond0.301 Link encap:Ethernet HWaddr 2C:76:8A:82:DE:B8
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:172325121 errors:0 dropped:0 overruns:0 frame:0
TX packets:17219 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:141811819520 (132.0 GiB) TX bytes:792710 (774.1 KiB)
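
For clarity, the workaround mentioned above (no bridge, IP straight on the VLAN sub-interface) looks roughly like the sketch below. This is a hand-edited test config rather than anything the manager generated, and it assumes the stock ifcfg layout:

[root@ormovmprd01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0.300
# test config: IP moved from the bridge onto the VLAN device itself
DEVICE=bond0.300
BOOTPROTO=static
ONBOOT=yes
VLAN=yes
IPADDR=10.18.1.141
NETMASK=255.255.252.0
NM_CONTROLLED=no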
  • 1. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    user12273962 Pro
    What type of 10Gb cards are you using? Also, what type of switch?

    I don't know if you're trying to add the default bond0 to a tagged VLAN after you've installed the server, but it's not supported.

    During the installation of the Oracle VM Servers you configure a management interface on each server. These interfaces are added to the default management network when the servers are discovered by Oracle VM Manager. During server installation you have two configuration options for the management network interface: standard or as part of a tagged VLAN.
    Caution

    The only supported method to obtain a management network on a tagged VLAN is to specify the VLAN tag during the installation. For more information, see Installing Oracle VM Server in the Oracle VM Installation and Upgrade Guide.

    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-network-managemnt.html
  • 2. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    984927 Newbie
    Hi,

    Thanks for your helpful reply.

    The switches are Cisco Nexus 5596 and the cards are Emulex OneConnect. The tagged VLAN was indeed added at installation time.

    lspci
    <>
    0f:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (rev 02)
    0f:00.1 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (rev 02)
    <>

    [root@ormovmprd01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0.300
    #This file was dynamically created by OVM manager. Please Do not edit
    DEVICE=bond0.300
    HWADDR=2C:76:8A:82:DE:B8
    BOOTPROTO=none
    ONBOOT=yes
    VLAN=yes
    ETHTOOL_OFFLOAD_OPTS="lro off"
    BRIDGE=0a120000
    NM_CONTROLLED=no

    [root@ormovmprd01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-0a120000
    #This file was dynamically created by OVM manager. Please Do not edit
    DEVICE=0a120000
    TYPE=Bridge
    IPADDR=10.18.1.141
    NETMASK=255.255.252.0
    BOOTPROTO=static
    ONBOOT=yes
    DELAY=0

    [root@ormovmprd01 network-scripts]# cat meta-bond0.300
    #This file was dynamically created by OVM manager. Please Do not edit
    METADATA=ethernet:0a120000{orm-fe-300}:MANAGEMENT,CLUSTER_HEARTBEAT,LIVE_MIGRATE,VIRTUAL_MACHINE
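
    To rule out something inside the bridge itself, it may also be worth confirming that bond0.300 is actually enslaved to 0a120000 and that spanning tree is disabled on the bridge. A quick diagnostic sketch (read-only commands, output omitted):

    [root@ormovmprd01 ~]# brctl show
    [root@ormovmprd01 ~]# brctl showstp 0a120000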
  • 3. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    user12273962 Pro
    I've never personally run those cards before. I do see Oracle UEK2 drivers for them; it might be worth trying those.

    I do have the 5500-series Nexus switches. Have you enabled LACP by issuing the "feature lacp" command? How about the port channels?

    It almost sounds like you have a bonding war going on.
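
    For what it's worth, a bare-bones sketch of the switch-side config I'd expect for a bond like this (the Ethernet and port-channel numbers here are hypothetical; the VLAN IDs are the ones from your output):

    feature lacp
    interface Ethernet1/17
      channel-group 10 mode active
    interface Ethernet1/18
      channel-group 10 mode active
    interface port-channel10
      switchport mode trunk
      switchport trunk allowed vlan 21,300,301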
  • 4. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    984927 Newbie
    Me neither, but it all works just fine if you move the IP config directly onto bond0.<vlan> rather than onto the bridges, so I'm tempted to suggest there isn't a driver issue. I'm happy with the Nexus setup:

    [root@ormovmprd01 ~]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 250
    Up Delay (ms): 500
    Down Delay (ms): 500

    802.3ad info
    LACP rate: slow
    Aggregator selection policy (ad_select): stable
    Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 33
    Partner Key: 33393
    Partner Mac Address: 00:23:04:ee:be:14

    Slave Interface: eth4
    MII Status: up
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 2c:76:8a:82:de:b8
    Aggregator ID: 1
    Slave queue ID: 0

    Slave Interface: eth6
    MII Status: up
    Speed: 10000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 2c:76:8a:82:72:c8
    Aggregator ID: 1
    Slave queue ID: 0
  • 5. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    user12273962 Pro
    Is the last output the config that's working?
  • 6. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    984927 Newbie
    Hi,

    The last output is the config that works without the bridging, but not with it.
    If I remove the virtual machine tick from the VLAN segment in the Manager (so the bridge disappears), things instantly get better:

    Without bridging (untick virtual machine in Manager):
    [root@ormovmprd01 ~]# ping 10.18.5.142
    PING 10.18.5.142 (10.18.5.142) 56(84) bytes of data.
    64 bytes from 10.18.5.142: icmp_seq=1 ttl=64 time=1.27 ms
    64 bytes from 10.18.5.142: icmp_seq=2 ttl=64 time=0.211 ms
    ^C
    --- 10.18.5.142 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1001ms
    rtt min/avg/max/mdev = 0.211/0.742/1.274/0.532 ms

    With bridging (tick virtual machine in Manager):
    [root@ormovmprd01 ~]# ping 10.18.5.142
    PING 10.18.5.142 (10.18.5.142) 56(84) bytes of data.
    ^C
    --- 10.18.5.142 ping statistics ---
    182 packets transmitted, 0 received, 100% packet loss, time 180999ms
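
    One way to narrow down where the replies are being lost would be to run the ping while watching both the VLAN device and the bridge that holds 10.18.5.141 (names from the ifconfig output above):

    [root@ormovmprd01 ~]# tcpdump -nni bond0.301 icmp
    [root@ormovmprd01 ~]# tcpdump -nni 0004fb00106d6d9 icmp

    If the echo replies show up on bond0.301 but never on the bridge, the frames are dying between the VLAN device and the bridge rather than out on the network.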
  • 7. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    942982 Newbie
    Try changing the bond from LACP to active-backup and see whether the problem persists.
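
    A rough sketch of that test on the server side, assuming the bond options live in the ifcfg file (OVM normally manages this file, so treat this as a temporary experiment; active-backup doesn't speak LACP, so the switch-side channel-group would need to be unbundled as well):

    # excerpt from /etc/sysconfig/network-scripts/ifcfg-bond0, hand-edited for the test
    BONDING_OPTS="mode=active-backup miimon=250 updelay=500 downdelay=500"

    Then restart networking (or ifdown/ifup bond0) so the new mode takes effect.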
  • 8. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    user273487 Newbie
    Hello... I'm having a very similar issue to yours. The only difference is that I'm using the active-backup bonding mode, and I'm seeing 100% packet loss on the bridge. Were you able to find a solution to your problem? I would appreciate some pointers.


    Thanks,
    Manu
  • 9. Re: Network / bridge issue on OVM 3.2 Server when running VLANs on LACP bond
    998369 Newbie

    I am having similar issues too... after adding VLANs, the transfer speed dropped to about 10MB/s. We have a 1Gbps infrastructure and the transfer speed was around 80MB/s before the VLANs were added. Any ideas why, after adding the VLANs, Oracle VM started communicating at 10MB/s (roughly 100Mbps)?
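
    10MB/s on the wire is suspiciously close to a 100Mbps link, so before blaming the VLANs themselves it may be worth confirming the negotiated speed and offload settings on each slave NIC, for example (eth0 is a placeholder; check every interface in the bond):

    # ethtool eth0 | grep -i speed
    # ethtool -k eth0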
