Sorry, I changed my e-mail address, and support was not able to move my history from user "SPA2" to this account.
I did a setup with the OVS early access version 3.2.1. After this setup I imported a RAC template, and then I wanted to run the deploycluster.py script.
My server has two network cards: the first is used for maintenance, the cluster heartbeat, and the virtual machines; the second (eth1) is for the RAC interconnect.
So I configured eth0, which the server sets up as bond0.
In the OVM Manager configuration I assigned myEth0 to the 172.16.0.x network and myEth1 to the 10.1.1.x network. So far, so good.
After loading the RAC templates and changing the network parameters, and before running the clone template job, I configured racnode.0 and racnode.1 to use the networks
myEth0 and myEth1. This also works fine.
Then I started deploycluster.py with the parameters -u admin -p xxxxx -M racnode.0,racnode.1 -N netconfig.ini, first with -D (for a dry run).
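For reference, my netconfig.ini follows the layout documented with the RAC OVM templates and looks roughly like this (hostnames and the VIP/SCAN/private addresses here are illustrative, adapted to the 172.16.0.x / 10.1.1.x networks above):

```ini
# Node-specific network information (node 1 shown; node 2 analogous)
NODE1=racnode0
NODE1IP=172.16.0.151
NODE1PRIV=racnode0-priv
NODE1PRIVIP=10.1.1.151
NODE1VIP=racnode0-vip
NODE1VIPIP=172.16.0.153

# Common data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=172.16.0.1
PRIVADAP=eth1
PRIVMASK=255.255.255.0
DOMAINNAME=localdomain
SCANNAME=racnode-scan
SCANIP=172.16.0.155
```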
On the OVS server I then saw that an interface had been created with 172.16.0.151, the first public IP of the RAC node, alongside 172.16.0.20 (the server's own address).
It was created as an alias, bond0:0, right after the first dry run of the deploycluster.py script.
Next I ran deploycluster.py ... -N netconfig.ini -B yes (which means buildcluster = yes), but when I started it, no cluster was built. Nothing happened!
While both VMs, racnode.0 and racnode.1, were starting, I took a look at the first console.
There I saw a conflict: during the boot of racnode.0 a message appeared saying the IP address 172.16.0.151 is already in use!
I don't understand why, but I saw on the server that the bond0 interface brings up bond0:0 with this IP address.
It is also not possible to take it down with ifconfig bond0:0 down, because the server immediately brings it back up.
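One way to see what keeps re-creating the alias is to check whether an ifcfg file pins it (paths are an assumption; on the real server you would grep /etc/sysconfig/network-scripts directly). The sketch below uses a temporary copy so it runs anywhere:

```shell
# Sketch: find which ifcfg file defines the conflicting alias IP.
# The temp directory stands in for /etc/sysconfig/network-scripts.
dir=$(mktemp -d)
printf 'DEVICE=bond0:0\nIPADDR=172.16.0.151\n' > "$dir/ifcfg-bond0:0"

# Print the name of every config file that contains the alias IP
grep -l '172.16.0.151' "$dir"/ifcfg-*

rm -r "$dir"
```

If a persistent ifcfg-bond0:0 exists, removing only the runtime alias will never stick, because the network scripts restore it.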
I tried to run /u01/racovm/buildcluster.sh manually, but it also stops with a network conflict and an unknown hostname error.
It seems that it is not possible to use the first network interface for the "virtual machines" role as well.
So here are my questions:
- How can I set up the network for a RAC cluster without using the server's first network card (eth0)? It does not seem to work correctly.
Is this a 3.2.1 problem?
- To me it looks like I should set up VLANs, for example two virtual networks: vlan1 for 172.16.0.x (RAC public, VIP, and SCAN) and vlan2 for 10.1.1.x (RAC private interconnect).
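In plain Oracle Linux terms, a VLAN on top of the bond would be a tagged sub-interface like the sketch below (VLAN ID, IP, and netmask are assumptions; in OVM 3.2 the equivalent is normally created through the Manager's VLAN Groups rather than by hand-editing dom0 files):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0.10 (illustrative)
# VLAN 10 carrying the 172.16.0.x public network on top of bond0
DEVICE=bond0.10
VLAN=yes
BOOTPROTO=static
IPADDR=172.16.0.20
NETMASK=255.255.255.0
ONBOOT=yes
```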
- How can I configure this, and does anyone have a best practice for it?
- Why does the deploycluster.py script not build the cluster?
Note: I started the deploycluster.py script on the Oracle VM Manager host, which is a separate machine.