I am running Oracle VM 3.1.1 on a server with 4 Ethernet interfaces.
bond0 is set up with eth0 and eth1, and the bond mode is active/backup. This bond is used by the managed network.
bond1 is set up with eth2 and eth3, and the bond mode is dynamic link aggregation. This bond is used by the public network.
Managed network channels: Server Management, Cluster Heartbeat and Live Migration
Public network channels: Storage and Virtual Machine
All 7 of my virtual machines are installed on separate physical disks using iSCSI.
95% of my network traffic on dom0 is going through eth0, so I believe that iSCSI is not using the public network.
How can I make my VM use bond1 / public network for iSCSI traffic?
The managed network did have storage assigned when I created the storage connection, but I have rebooted dom0 after applying the settings above. Could this be the problem?
I don't think you can set it up to use the other network for iSCSI and the vdisk traffic. But what you could do is change bond0 to dynamic link aggregation and basically increase the bandwidth. We have blogged about it here: http://portrix-systems.de/blog/fbauhaus/oracle-vm-bond0/
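For reference, a minimal sketch of what the dom0 bonding config might look like with dynamic link aggregation (802.3ad/LACP). The file path and option names assume a RHEL-style Oracle VM Server dom0; your exact setup may differ:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- sketch only;
# path and options assume a RHEL-style Oracle VM Server dom0.
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# mode=802.3ad is dynamic link aggregation (LACP);
# miimon=100 checks member link state every 100 ms.
BONDING_OPTS="mode=802.3ad miimon=100"
```

The switch ports behind eth0 and eth1 must also be configured as an LACP port channel for this mode to work.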
Not true. You must create a new network definition under the "Networking" tab of the VM Manager. Then configure your network ports and check which channels you want to run on this network definition. For example, you can create a network definition and call it "VM-Storage", then select "Storage" and "Virtual Machine". Then select which servers should have access to this definition and which ports will be used. Finally, on the default bond definition, uncheck "Storage" and "Virtual Machine".
Thus you segment your traffic. Just remember that you need to have your network planned properly: your definitions should be subnetted properly, at least in my opinion. If your storage is dedicated to just your VM environment, then create a subnet that isn't routable from the management subnet. That way you are assured that you are using the proper pathing.
As I described in my post, the setup you describe is what I have (my storage network is called the public network). Making a new subnet now would probably just take my system offline, or do you think it would be "forced" to use bond1?
I think I can use Bjoern's solution, as it does not matter whether the iSCSI traffic uses bond1 or bond0, as long as it uses dynamic link aggregation for better throughput.
It matters whether the traffic is routed or not. "Routing" traffic and "switching" traffic are two different things, and the achievable bandwidth for routed traffic versus switched traffic is considerably lower. More work takes place when a packet is routed than when it stays at layer 2. This must be taken into consideration when planning traffic between your VM servers and their respective storage, VMs, etc. Personally, I would never have that traffic routed. Never. Do it if you like; I wouldn't recommend it. Any time you "hop" to a target, you introduce latency. Maybe your network fabric can handle it now, but what will happen when you start adding to your environment?
Remember the maximum throughput of a 1 Gb connection is 125 MB/s. Even creating a 2-member bond only gives you 250 MB/s. Throw a "hop" in the mix and I just don't like the numbers, especially if you're going to run several VM guests on one server.
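The arithmetic behind those numbers, as a quick sketch (line rate only, ignoring protocol overhead):

```python
# Back-of-the-envelope throughput for the figures in the post:
# a 1 Gbit/s link carries at most 1_000_000_000 / 8 bytes per second.
GBIT_PER_S = 1_000_000_000

link_mb_per_s = GBIT_PER_S / 8 / 1_000_000   # one 1 GbE link in MB/s
bond_mb_per_s = 2 * link_mb_per_s            # ideal 2-member bond

print(link_mb_per_s, bond_mb_per_s)  # 125.0 250.0
```

Real-world iSCSI throughput will land below these ceilings once Ethernet, IP, TCP and iSCSI headers are accounted for.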
I feel your pain. Oracle VM can be a complicated product to use if you don't understand its full functionality. If you don't have your system in production, then change it. Go through the headache now. Oracle VM works very well when it is set up properly. Very well. I just implemented a RAC environment running Oracle's ERP systems for several hundred users. It works great. Haven't had one problem since the migration. Performance is spectacular.
A single switch can carry multiple subnets. As long as everything is on the same subnet, it should be okay to create a bond that covers all your physical NICs. Personally, I would have at least one NIC dedicated to heartbeat traffic; I wouldn't share that traffic with anything else. Just a preference. I can see how links get saturated and the heartbeat might be affected. I've seen this happen with servers that have just one or two NICs under heavy load, but a 4-member bond might not have an issue.
We are only using one subnet. I used Bjoern's solution and I am now running bond0 and bond1 with dynamic link aggregation.
To my big disappointment, since the iSCSI traffic runs from one VM server to one storage server, eth1 is always chosen. From what I have found out, the Cisco switch uses a hash algorithm on the source or destination MAC address (or both), and those are always the same for all my iSCSI traffic.
Does this mean that I am stuck with only one Ethernet port for my iSCSI traffic?
Yes, I have configured LACP on the switch, and the EtherChannel is listed as Active.
From a Cisco manual: "Use the option that provides the balance criteria with the greatest variety in your configuration. For example, if the traffic on a port channel is going only to a single MAC address and you use the destination MAC address as the basis of port-channel load balancing, the port channel always chooses the same link in that port channel; using source addresses or IP addresses might result in better load balancing. "
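On many Cisco IOS switches the port-channel hashing criterion is set globally; as a sketch (the exact command and the options available vary by platform):

```
port-channel load-balance src-dst-ip
```

Note that for a single VM server talking to a single storage IP, even src-dst-ip hashing produces one constant result, so the traffic would still ride one member link.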
My iSCSI traffic is between the VM server and the storage. It seems the switch therefore chooses the same port every time.
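To illustrate why the switch keeps picking the same member link, here is a toy sketch of MAC-based port-channel hashing. This is not Cisco's actual algorithm, and the MAC addresses are made up for the example:

```python
# Toy model of src-dst-MAC hashing on a port channel (simplified;
# real switch hashing differs in detail).
def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a member link by XOR-ing the low bytes of both MACs."""
    src_low = int(src_mac.replace(":", ""), 16) & 0xFF
    dst_low = int(dst_mac.replace(":", ""), 16) & 0xFF
    return (src_low ^ dst_low) % num_links

# One VM server talking to one storage target: the MAC pair never
# changes, so the hash always lands on the same member link.
vm_server = "00:16:3e:11:22:33"   # hypothetical MAC addresses
storage   = "00:16:3e:aa:bb:cc"
links = {select_link(vm_server, storage, 2) for _ in range(1000)}
print(links)  # a single link index, every time
```

With more source/destination pairs in play (e.g. several initiators or several target portals), the hash spreads flows across the members; a single fixed pair can never use more than one link.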