1. How many gateway addresses do you have defined... and where are you pinging the guest from? The same subnet, or are the packets being routed in any form?
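To answer that, the routing table on the guest will show how many gateways are defined. A minimal sketch, assuming a Linux guest (on older distributions only `route -n` may be available):

```shell
# Show the guest's routing table, including any default gateway(s):
ip route show

# Equivalent output on older systems without iproute2:
route -n

# Count default routes -- more than one default gateway is a common
# cause of intermittent reachability:
ip route show | grep -c '^default'
```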
2. Not trying to burst your bubble... but if you only have a single 1 Gb NIC, then you're not going to be able to run very many VM guests at the same time. Bare minimum, you should have at least two: the cluster heartbeat should be separated onto its own interface. A single 1 Gb connection cannot pass more than about 125 MB per second of data at any given time.
I found that disabling "avahi-daemon" gets rid of those messages in /var/log/messages.
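For anyone else hitting this, a sketch of how the daemon can be disabled, assuming a SysV-init (RHEL/OEL-style) host; on systemd-based systems the commented `systemctl` form applies instead:

```shell
# Stop avahi-daemon now and prevent it from starting at boot
# (SysV-init style, e.g. RHEL/OEL 5):
service avahi-daemon stop
chkconfig avahi-daemon off

# Or, on systemd-based systems:
# systemctl disable --now avahi-daemon
```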
I'm still having a real problem with network connectivity, though. Every 10-15 minutes all network activity drops and I lose all JDBC, SQL*Net, SSH, etc. connections from all clients, even connections on the same subnet. For example, I can't even ping the VM server from the host server.
After 60-120 seconds, it all comes back to life and carries on for another 10-15 mins.
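To pin down exactly when each outage starts and ends, a timestamped ping loop run from the host can help correlate the drops with logs. A minimal sketch; the target address is a placeholder you'd replace with the guest's IP:

```shell
# Probe the guest every 5 seconds and log up/down transitions
# with timestamps. TARGET is an assumption -- substitute the
# guest's real IP address.
TARGET=192.0.2.10

while true; do
    if ping -c 1 -W 2 "$TARGET" > /dev/null 2>&1; then
        echo "$(date '+%F %T') up"
    else
        echo "$(date '+%F %T') DOWN"
    fi
    sleep 5
done >> /tmp/netwatch.log
```

Comparing the DOWN windows in /tmp/netwatch.log against /var/log/messages on both host and guest should show whether anything (a dhclient renewal, a bridge/STP event, etc.) fires at the same 10-15 minute interval.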
I am only expecting to run one VM guest on each physical machine, and I'm not planning to do anything like move VMs across physical servers. I'm not even using clustering, so I would think my network should be up to the task.
Just to clarify: the host network is OK at all times, and it's only the VM(s) that suffer from this issue? And this happens on each VM server?