
Connectivity error when installing Grid Infrastructure on nodes on Oracle VirtualBox

Laury
Laury Member Posts: 1,636 Silver Badge

Hi,

I have two OEL 7.6 Linux machines on Oracle VirtualBox: oel764-oraracn1 and oel764-oraracn2.

From each of these nodes, I can ping the node itself and the other node.

Both nodes have two network adapters, configured as bridged adapters.

I am trying to install Grid Infrastructure 19c on them.

When I run the runcluvfy utility as:

./runcluvfy.sh comp nodecon -n oel764-oraracn1,oel764-oraracn2 -verbose

I get these kinds of errors:

---

Verifying Node Connectivity ...FAILED (PRVG-1172, PRVG-11067, PRVG-11095)

Verifying Multicast or broadcast check ...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"

Checking subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251"

Checking subnet "192.168.122.0" for multicast communication with multicast group "224.0.0.251"

Verifying Multicast or broadcast check ...FAILED (PRVG-11138)

---

---

Verifying Node Connectivity ...FAILED

PRVG-1172 : The IP address "192.168.122.1" is on multiple interfaces "virbr0"

on nodes "oel764-oraracn2,oel764-oraracn1"

---

---

oel764-oraracn2: PRVG-11067 : TCP connectivity from node "oel764-oraracn2":

                 "192.168.1.152" to node "oel764-oraracn1": "192.168.1.151"

                 failed.

                 PRVG-11095 : The TCP system call "connect" failed with error

                 "113" while executing exectask on node "oel764-oraracn2"

                 No route to host

oel764-oraracn2: PRVG-11067 : TCP connectivity from node "oel764-oraracn2":

                 "10.10.10.152" to node "oel764-oraracn1": "10.10.10.151"

                 failed.

                 PRVG-11095 : The TCP system call "connect" failed with error

                 "113" while executing exectask on node "oel764-oraracn2"

                 No route to host

---

---

Verifying Multicast or broadcast check ...FAILED

oel764-oraracn2: PRVG-11138 : Interface "enp0s3" on node "oel764-oraracn2" is

                 not able to communicate with interface "enp0s3" on node

                 "oel764-oraracn2" over multicast group "224.0.0.251"

oel764-oraracn2: PRVG-11138 : Interface "enp0s3" on node "oel764-oraracn1" is

                 not able to communicate with interface "enp0s3" on node

                 "oel764-oraracn2" over multicast group "224.0.0.251"

oel764-oraracn2: PRVG-11138 : Interface "enp0s8" on node "oel764-oraracn2" is

                 not able to communicate with interface "enp0s8" on node

                 "oel764-oraracn2" over multicast group "224.0.0.251"

oel764-oraracn2: PRVG-11138 : Interface "enp0s8" on node "oel764-oraracn1" is

                 not able to communicate with interface "enp0s8" on node

                 "oel764-oraracn2" over multicast group "224.0.0.251"

--- 

Does someone have any indication of what the problem could be?

Thanks in advance for any tips.

Laury

Answers

  • Mike Navickas
    Mike Navickas Member Posts: 137 Blue Ribbon
    edited April 2020

    Laury,

    Based on the GI setup documentation, multicast should be enabled on the network interface that is used for the interconnect.
    https://docs.oracle.com/database/121/CWLIN/networks.htm#CWLIN476

    Quick summary on how to enable multicast https://www.thegeekdiary.com/how-to-configure-multicast-on-an-ip-address-interface/

    But you can use other sources as well.
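
    A minimal sketch of how to check and enable multicast on Linux, assuming your interconnect interface is enp0s8 as in your cluvfy output:

    ---

    # Check that the MULTICAST flag is set on the interconnect interface
    ip link show enp0s8

    # Enable multicast on the interface if the flag is missing
    ip link set dev enp0s8 multicast on

    # List the multicast groups the interface has joined
    ip maddr show dev enp0s8

    ---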

    Regards

    Mike

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited April 2020

    Hi Mike,

    Thanks for the reaction.

    There is probably something I do not understand.

    1) Is this only related to the multicast configuration?

    2) Why do I need a configuration with multicast?

    3) How can I set this configuration on VirtualBox (while the two nodes are running)?

    Kind Regards

  • Markus Flechtner
    Markus Flechtner Member Posts: 501 Bronze Trophy
    edited April 2020

    As the interconnect does not need access to the outside, please set the interconnect adapters to "Host-Only-Network" in the VirtualBox configuration.

    And please post your /etc/hosts file and the output of nslookup (if you are using DNS).

    Besides that, there are lots of websites which provide instructions on how to set up an Oracle cluster on VirtualBox VMs, e.g. https://oracle-base.com/articles/12c/oracle-db-12cr2-rac-installation-on-oracle-linux-7-using-virtualbox

    HTH

    Markus

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited April 2020

    Hi,

    Thanks for the reaction.

    I have changed the interconnect to "Host-Only-Network" for both virtual nodes.

    Well, at this stage I have not set up a DNS server yet.

    The link you posted mentions a DNS installation.

    Should this be done on both nodes?

    This step is not very clear to me.

    Here is the content of /etc/hosts for both nodes:

    ---

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

    # Public network - (enp0s3)

    192.168.1.151                   oel764-oraracn1

    192.168.1.152                   oel764-oraracn2

    # Private interconnect - (enp0s8)

    10.10.10.151                    oel764-oraracn1-priv

    10.10.10.152                    oel764-oraracn2-priv

    # Public virtual IP (VIP) - addresses

    192.168.1.153                   oel764-oraracn1-vip

    192.168.1.154                   oel764-oraracn2-vip

    # Single client access name (SCAN)

    192.168.1.160                   oel764-orarac-scan

    ---
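
    As a quick sanity check (my own sketch, not something the installer asks for), the OS resolver should return these entries on each node:

    ---

    # Confirm hosts-file resolution on both nodes
    getent hosts oel764-oraracn1 oel764-oraracn2
    getent hosts oel764-oraracn1-priv oel764-oraracn2-priv
    getent hosts oel764-orarac-scan

    ---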

    Any other suggestions?

    Kind Regards

  • Markus Flechtner
    Markus Flechtner Member Posts: 501 Bronze Trophy
    edited April 2020

    Hi,

    it should work without DNS, so you can continue.

    The message for the subnet 192.168.122.0 can be ignored, as you are not using this interface.

    ("Checking subnet "192.168.122.0" for multicast communication with multicast group "224.0.0.251"

    Verifying Multicast or broadcast check ...FAILED (PRVG-11138)")

    Did you run "cluvfy" again?

    Regards

    Markus

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited April 2020

    Hi Markus,

    Thanks for your feedback.

    Interesting to hear that I can work without a DNS server.

    Ok, I can ignore:

    Checking subnet "192.168.122.0" for multicast communication with multicast group "224.0.0.251"

    Verifying Multicast or broadcast check ...FAILED (PRVG-11138)

    as I don't use this interface (virbr0).

    But when I run:

    ./runcluvfy.sh comp nodecon -n oel764-oraracn1,oel764-oraracn2 -verbose

    I still get the other errors:

    ---

    Verifying User Equivalence ...FAILED

    oel764-oraracn2: PRVG-2019 : Check for equivalence of user "oracle" from node

                     "oel764-oraracn1" to node "oel764-oraracn2" failed

                     PRKC-1191 : Remote command execution setup check for node

                     oel764-oraracn2 using shell /usr/bin/ssh failed.

                     No ECDSA host key is known for oel764-oraracn2 and you have

                     requested strict checking.Host key verification failed.

    Verifying Multicast or broadcast check ...FAILED

    oel764-oraracn1: PRVG-11138 : Interface "enp0s3" on node "oel764-oraracn1" is

                     not able to communicate with interface "enp0s3" on node

                     "oel764-oraracn1" over multicast group "224.0.0.251"

    oel764-oraracn1: PRVG-11138 : Interface "enp0s8" on node "oel764-oraracn1" is

                     not able to communicate with interface "enp0s8" on node

                     "oel764-oraracn1" over multicast group "224.0.0.251"

    oel764-oraracn1: PRVG-11138 : Interface "virbr0" on node "oel764-oraracn1" is

                     not able to communicate with interface "virbr0" on node

                     "oel764-oraracn1" over multicast group "224.0.0.251"

    ---

    (I just pasted the errors from the log shown at the prompt.)

    Any other idea about what the problem could be?

    Kind Regards

  • BPeaslandDBA
    BPeaslandDBA Member Posts: 4,615 Blue Diamond
    edited April 2020

    The two hardest things when setting up Oracle RAC are the shared storage and the private network.

    Here is how I did it with Virtual Box: https://www.peasland.net/2015/04/23/oracle-rac-on-my-laptop-with-virtual-box/

    That blog post is long in the tooth but it still is relevant today.

    Cheers,
    Brian

  • Markus Flechtner
    Markus Flechtner Member Posts: 501 Bronze Trophy
    edited April 2020

    Hi,

    one thing:

    as cluvfy checks both nodes, you should set up user equivalence between the nodes before running cluvfy.

    There should be a script called ssh_setup.sh (or similar name) to do the job.
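
    If it is the standard Oracle-provided script, the invocation looks something like this (a sketch; the exact path under the Grid home may differ in your release):

    ---

    # Run as the Grid software owner from the unzipped Grid home
    # (the path is an assumption; search the Grid home for sshUserSetup.sh)
    $GRID_HOME/oui/prov/resources/scripts/sshUserSetup.sh \
      -user oracle -hosts "oel764-oraracn1 oel764-oraracn2" \
      -advanced -noPromptPassphrase

    ---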

    Regards

    Markus

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited April 2020

    Hi,

    Thanks for the answer.

    For the time being, my purpose through this post is to "test" the Oracle Grid Infrastructure for Oracle 19c RAC.

    @BPeaslandDBA:

    Thanks for your post, I will read it.

    I will come back to it here or through your page.

    In the meantime I have errors from the check step, posted below, that I would like to solve.

    @Markus:

    Yes, there is an ssh_setup.sh script; I ran it before running the installer, and in any case the installer does this check at a given moment.

    I got no error with the ssh configuration.

    In the meantime I went a little bit further.

    I disabled the firewall and stopped iptables, and I could go on with the installation (maybe not the best option).
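
    For reference, what I ran was along these lines (a sketch from memory; acceptable for a sandbox, not for production):

    ---

    # Stop and disable the firewall on both nodes (OEL 7 uses firewalld)
    systemctl stop firewalld
    systemctl disable firewalld

    ---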

    But I got errors with the checks.

    I post here only the ones I could not solve.

    1) resolv.conf Integrity

    2) DNS/NIS name service

    Here is the content:

    ---

    1) resolv.conf Integrity

    resolv.conf Integrity - This task checks consistency of the /etc/resolv.conf across nodes.

    Check failed on nodes oel764-oraracn1, oel764-oraracn2

    Verification result of failed node: oel764-oraracn2

    Details:

    - PRVF-5636: The DNS response time for an unreachable node exceeded "15000" ms on following nodes:

    oel764-oraracn1, oel764-oraracn2

    - Cause: The DNS response time for an unreachable node exceeded the value specified on the nodes specified.

    - Action: Make sure that the "options timeout", "options attempts" and "nameserver" entries in /etc/resolv.conf are proper.

    On HPUX these entries will be "retrans", "retry" and "nameserver".

    On Solaris these will be "options retrans", "options retry" and "nameserver".

    Make sure that the DNS server responds back to a name lookup request within the specified time when looking up an unknown host name.

    Verification result of failed node: oel764-oraracn1

    Details:

    - PRVF-5636: The DNS response time for an unreachable node exceeded "15000" ms on following nodes:

    oel764-oraracn2, oel764-oraracn1

    - Cause: The DNS response time for an unreachable node exceeded the value specified on the nodes specified.

    - Action: Make sure that the "options timeout", "options attempts" and "nameserver" entries in /etc/resolv.conf are proper.

    On HPUX these entries will be "retrans", "retry" and "nameserver".

    On Solaris these will be "options retrans", "options retry" and "nameserver".

    Make sure that the DNS server responds back to a name lookup request within the specified time when looking up an unknown host name.

    2) DNS/NIS name service

    DNS/NIS name service - This test verifies that the Name Service lookups for the Distributed Name Server (DNS)

    and the Network Information Service (NIS) match the SCAN name entries.

    Error:

    - PRVG-11826: DNS resolved IP addresses for SCAN name "oel764-orarac-scan" not found in the name service returned IP addresses "192.168.1.160".

    - Cause: The name resolution setup check for the indicated SCAN name failed because one or more of the indicated IP addresses obtained from

    DNS could not be found in the IP addresses obtained from the name service on the system as configured in the "/etc/nsswitch.conf" configuration file.

    - Action: Make sure all DNS resolved IP addresses are present in the IP addresses obtained from the name service on the system

    as configured in the "/etc/nsswitch.conf" configuration file by reconfiguring "/etc/nsswitch.conf" for the indicated SCAN name.

    Check the Name Service Cache Daemon (/usr/sbin/nscd) by clearing its cache and restarting it.

    - PRVG-11827: Name service returned IP addresses "192.168.1.160" for SCAN name "oel764-orarac-scan" not found in the DNS returned IP addresses.

    - Cause: The name resolution setup check for the indicated SCAN name failed because one or more of the indicated IP addresses obtained from

    DNS could not be found in the IP addresses obtained from the name service on the system as configured in the "/etc/nsswitch.conf" configuration file.

    - Action: Make sure all name service resolved IP addresses obtained from the "/etc/nsswitch.conf" configuration file are present in the DNS

    resolved IP addresses by reconfiguring the "/etc/nsswitch.conf" configuration file for the indicated SCAN name.

    Check the Name Service Cache Daemon (/usr/sbin/nscd) by clearing its cache and restarting it.

    ---
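
    If I read the Action texts above correctly (my interpretation, not verified), the checks expect entries along these lines on a DNS-less testbed:

    ---

    # /etc/nsswitch.conf: resolve from the hosts file first
    hosts: files dns

    # /etc/resolv.conf: fail fast instead of exceeding the 15000 ms limit
    options timeout:1 attempts:2

    ---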

    It seems to be a DNS server issue.

    I understood from you that a DNS server is not a strict necessity.

    Did I misunderstand, maybe?

    How can I solve these errors?

    Any ideas about what the problem could be?

    Kind regards

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited May 2020

    Hi Brian,

    I have read your document up to the Grid Infrastructure installation.

    You wrote "Also, my laptop does not have a DNS server so I will need to use the local hosts file name resolution"

    I do not understand. Can you explain to me how you can install Grid Infrastructure for RAC without a DNS server?

    Also, how can the addresses:

    racscan 192.168.56.105

    racscan 192.168.56.106

    racscan 192.168.56.107

    be accessed if they do not correspond to a server?

    IPs 192.168.56.105, 192.168.56.106 and 192.168.56.107 are assigned to no device.

    How do you define these addresses?

    I have the same remarks for host01-vip and host02-vip.

    Also, you do not mention the NTP configuration, which is, if I understood the theory well, an important element for node synchronization within the RAC architecture.

    Kind Regards


  • BPeaslandDBA
    BPeaslandDBA Member Posts: 4,615 Blue Diamond
    edited May 2020
    Can you explain to me how you can install Grid Infrastructure for RAC without a DNS server?

    If you do not have DNS, you just put the hostnames and IP addresses in the local hosts file, typically /etc/hosts. You need to put them in *all* nodes in the cluster. You do not want to do this for a real-life system, but it works fine for a testbed.

    Also, how can the addresses racscan 192.168.56.105, racscan 192.168.56.106, racscan 192.168.56.107 be accessed if they do not correspond to a server? IPs 192.168.56.105, 192.168.56.106 and 192.168.56.107 are assigned to no device. How do you define these addresses?

    I put the IP addresses in my host file. Ideally they should be in DNS, but for a testbed, as I've stated, a host file is just fine.

    When you install Grid Infrastructure, the OUI asks you for the name of the cluster. It also asks you for the names of the VIPs. It uses the cluster name for the SCAN listener name. The OUI uses nslookup to find out the IP addresses for those entities.

    When clusterware, i.e. Grid Infrastructure, is started, it uses ARP to bind those IP addresses to the physical network cards. I talk about that here: http://www.peasland.net/2016/06/07/oracle-rac-vip-and-arp-primer/

    ARP, or Address Resolution Protocol, is the "magic" behind how VIP failover works.

    Also, you do not mention the NTP configuration, which is, if I understood the theory well, an important element for node synchronization within the RAC architecture.

    NTP doesn't do anything with node synchronization. It just makes sure the nodes have the same time. If you want more on how to set up NTP, see this: https://oracle-base.com/articles/linux/linux-ntp-configuration

    Make sure you read all the way through to the end because RAC-specific info on NTP is found towards the end.
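
    If you want to double-check what the installer will see on a hosts-file testbed, something like this works (my sketch, using the racscan name from the blog post):

    ---

    # What the OS resolver returns from the hosts file
    getent hosts racscan

    # nslookup queries DNS only, so on a pure hosts-file testbed
    # it can fail even though the hosts-file setup still works
    nslookup racscan

    ---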

    Cheers,
    Brian

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited May 2020

    Hi Brian,

    Here, I am just testing and experimenting with the RAC architecture.

    Ok, I understand that DNS and NTP are not strict necessities for this installation.

    Thanks for the explanations. But I am not sure if I understand everything about the VIPs.

    A client initiates a request through a VIP (the physical address is of course necessary, but that's not the way to access a node within the RAC architecture).

    The VIP address is directly handled by the GI (Grid Infrastructure), or let's say (when installed), the Clusterware.

    If the node to which that VIP corresponds is not available or down, the Clusterware redirects the client request to the next VIP of another node, and if that node is available the client gets the connection.

    The VIPs are only needed by the Clusterware to manage the client connection between the nodes of the RAC cluster.

    Is it the correct mechanism?

    Each VIP is related to a physical IP.

    SCAN IPs, VIPs and physical IPs all belong to the same network.

    The interconnect IPs belong to their own network (a different network) and are used only for inter-node communication within the RAC architecture.

    But in fact, the client initiates a request through one of the SCAN IPs (through hostname resolution). How does the mechanism work between the SCAN addresses and the VIP addresses?

    Kind Regards

  • BPeaslandDBA
    BPeaslandDBA Member Posts: 4,615 Blue Diamond
    edited May 2020
    The VIP address is directly handled by the GI (Grid Infrastructure), or let's say (when installed), the Clusterware. If the node to which that VIP corresponds is not available or down, the Clusterware redirects the client request to the next VIP of another node, and if that node is available the client gets the connection. The VIPs are only needed by the Clusterware to manage the client connection between the nodes of the RAC cluster. Is this the correct mechanism?

    Let's see if I can shed some light on this.

    You have a node. It has an IP address that is defined in the NIC config. That machine is started up. The OS tells the network switch: if you see any traffic to my IP address, route it to the MAC address on my NIC. This is a high-level view of how it works with just a regular old NIC in a machine. Your regular old Oracle Listener uses this same IP address.

    When Clusterware starts up, it tells that same network switch that if you have any traffic for the VIP IP address, route it to the same MAC address on the same host. (I talk about this in that blog post I linked previously. Make sure you read it). Clusterware tells the network switch using something called Address Resolution Protocol (ARP).

    When a node goes down, clusterware on that node is not available. Another node in the cluster that has survived will notice the one node is down. Clusterware on a surviving node will talk to the network switch and say: for the VIP on the host that is down, do not route traffic to that MAC address any more. Instead, route it to the MAC address of another node. This redirection is done at the switch level, not at the clusterware level. Clusterware gets the VIP re-ARP'd to a surviving node and that's it.

    VIPs are needed because of the 10 minute TCP/IP timeout that I talked about in that blog post. Hope you read it...hint/hint/hint.

    Without VIPs, connection requests to a downed node will take 10 minutes before they are redirected to another node.
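
    A quick way to watch this in action (my own sketch; the VIP address and interface name are taken from your earlier hosts file as an assumption):

    ---

    # On a client: which MAC address does the VIP currently map to?
    ip neigh show 192.168.1.153

    # On a surviving node after failover: the relocated VIP shows up
    # as a secondary address on the public interface
    ip addr show enp0s3

    # Clusterware's own view of the VIPs
    srvctl status nodeapps

    ---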


    HTH,

    Brian

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited May 2020

    Hi Brian,

    Yes, I had read your article before, but it was not completely clear to me.

    I do not intend to understand everything about networking, but enough to configure RAC on my own and, above all, to understand what is happening.

    From what I had read before, I understood about that TCP/IP timeout and that Oracle bypassed it by introducing VIPs (not only Oracle uses these principles).

    But I didn't understand clearly how it was working with RAC.

    Now, on a very high level, I understand this is needed by the Clusterware and that it is orchestrated by the Clusterware.

    Yet, what do you mean by "It has an IP address that is defined in the NIC config"?

    One more question about it:

    Each server has at least one network adapter, network interface, or NIC. A server, being a piece of hardware, has a hardware address called a MAC address.

    The IP address we use when configuring is a network address or software network address.

    As you explained, the OS tells the network switch: if you see any traffic to my IP address, route it to the MAC address on my NIC.

    Ok, but the IP address (network address or software network address) might not always be static. How does the network switch handle this?

    In the meantime, I restarted completely and from scratch the installation.

    Node 1: oraracn1

    Node 2: oraracn2

    I now have these addresses in /etc/hosts on both nodes of the cluster, as well as on the physical host (Windows) on which VirtualBox is installed:

    # Public network (enp0s3)

    192.168.56.114                  oraracn1.localdomain    oraracn1

    192.168.56.113                  oraracn2.localdomain    oraracn2

    # Private network (enp0s8)

    192.168.57.109                  oraracn1-priv.localdomain       oraracn1-priv

    192.168.57.108                  oraracn2-priv.localdomain       oraracn2-priv

    # VIP addresses

    192.168.56.120                  oraracn1-vip.localdomain        oraracn1-vip

    192.168.56.121                  oraracn2-vip.localdomain        oraracn2-vip

    # SCAN addresses

    192.168.56.122                  oraracscan

    192.168.56.123                  oraracscan

    192.168.56.124                  oraracscan

    At this stage, I can ping from any node to the other one using the public or private hostname.

    I can also ping from the physical host (Windows) to each public and private hostname.

    I do not have a DNS server.

    Yet, when running gridSetup.sh, I end up with these errors during the prerequisite checks:

    ---

    Prerequisite Checks:

        - resolv.conf integrity

        Verification result of failed node: oraracn1

        Error:

        Details:

        - PRVF-5636: The DNS response time for an unreachable node exceeded "15000" ms on following nodes:

        oel764-oraracn2, oel764-oraracn1

        - Cause: The DNS response time for an unreachable node exceeded the value specified on the nodes specified.

        - Action: Make sure that the "options timeout", "options attempts" and "nameserver" entries in /etc/resolv.conf are proper.

        On HPUX these entries will be "retrans", "retry" and "nameserver".

        On Solaris these will be "options retrans", "options retry" and "nameserver".

        Make sure that the DNS server responds back to a name lookup request within the specified time when looking up an unknown host name.

       

        - PRVG-10048: Name "oraracn1"/"oraracn2" was not resolved to an address of the specified type by name server "192.168.1.1"

        - Cause: An attempt to look up an address of a specified type for the indicated name using the name servers shown did not yield any addresses of

        the requested type.

        - Action: Retry the request providing a different name or querying for a different IP address type.

       

        - DNS/NIS name service

        DNS/NIS name service - This test verifies that the Name Service lookups for the Distributed Name Server (DNS)

        and the Network Information Service (NIS) match the SCAN name entries.

        Error:

        - PRVG-1101: SCAN name "oraracscan" failed to resolve

        - Cause: An attempt to resolve the specified SCAN name to a list of IP addresses failed because the SCAN could not be resolved in DNS or GNS using "nslookup".

        - Action: Check whether the specified SCAN name is correct. If the SCAN name should be resolved in DNS, check the configuration of the SCAN name in DNS.

        If it should be resolved in GNS, make sure that the GNS resource is online.

    ---

    I do not have a DNS server, and I do not use the address 192.168.1.1 anywhere.

    Can I ignore these errors?

    What about the errors related to the SCAN name?
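
    One thing I notice (my assumption, to be confirmed): with hosts-file resolution only the first matching line is returned, so repeating oraracscan three times does not give three SCAN IPs; a single SCAN entry seems to be the usual hosts-file testbed setup:

    ---

    # Only the first oraracscan line in /etc/hosts is returned
    getent hosts oraracscan

    # The usual hosts-file testbed setup is a single SCAN address, e.g.:
    # 192.168.56.122    oraracscan

    ---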

    Kind Regards

  • Laury
    Laury Member Posts: 1,636 Silver Badge
    edited May 2020

    Hi,

    Thanks for all the previous feedback.

    I managed to get the RAC running on VirtualBox.

    I used: OEL 7.6, Oracle Grid Infrastructure 19cR3, Oracle RDBMS 19cR3, Oracle VirtualBox 6.1.

    Here are my results:

    - The installation can be done indeed without a DNS server.

    - The installation can be done indeed without an NTP service.

    - I got a more stable system with dynamic addresses for both the public and private networks

    (my system was not stable with static IPs)

    - 4.5 GB of RAM is really the minimum to get the system up and running; it takes about 54 minutes to have all resources up and running.

    Of course, this was only a way to explore the installation procedure.

    Kind Regards

  • d2f0bb8c-4219-4664-b801-8d73839b9ddb
    edited August 2020

    Hello.

    I am setting up Oracle RAC on VirtualBox 6.1.6. The host machine is Windows 10 and both of the nodes are also Windows VMs.
    During the installation of Grid software, I am receiving the following error:

    [INS-20802] Oracle Cluster Verification Utility failed.

    In the log file, the following failures appear:

    INFO:  [Aug 26, 2020 11:22:41 AM] Failures were encountered during execution of CVU verification request "stage -post crsinst".

    INFO:  [Aug 26, 2020 11:22:41 AM] Verifying Node Connectivity ...FAILED

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-1172 : The IP address "192.168.50.105" is on multiple interfaces "Private"

    INFO:  [Aug 26, 2020 11:22:41 AM] on nodes "rac1,rac2"

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-1172 : The IP address "192.168.56.118" is on multiple interfaces "Public"

    INFO:  [Aug 26, 2020 11:22:41 AM] on nodes "rac1,rac2"

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-1172 : The IP address "192.168.56.119" is on multiple interfaces "Public"

    INFO:  [Aug 26, 2020 11:22:41 AM] on nodes "rac1,rac2"

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-1172 : The IP address "192.168.56.110" is on multiple interfaces "Public"

    INFO:  [Aug 26, 2020 11:22:41 AM] on nodes "rac1,rac2"

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-1172 : The IP address "192.168.56.114" is on multiple interfaces "Public"

    INFO:  [Aug 26, 2020 11:22:41 AM] on nodes "rac1,rac2"

    INFO:  [Aug 26, 2020 11:22:41 AM] Verifying Single Client Access Name (SCAN) ...FAILED

    INFO:  [Aug 26, 2020 11:22:41 AM] PRVG-11374 : SCAN "oracle-scan" was not resolved

    INFO:  [Aug 26, 2020 11:22:41 AM] Verifying OLR Integrity ...FAILED

    INFO:  [Aug 26, 2020 11:22:41 AM] rac2: PRVF-5311 : File "C:\Windows\temp\rac2.getFileInfo8532.out" either does

    INFO:  [Aug 26, 2020 11:22:41 AM]       not exist or is not accessible on node "rac2".

    Any hints as to where I have made a mistake in the network setup?

    These are the contents of the hosts file in both of the nodes:

    # public

    192.168.56.110 rac1 rac1.mydomain

    192.168.56.103 rac2 rac2.mydomain

    # private

    192.168.50.105 rac1-priv rac1-priv.mydomain

    192.168.50.106 rac2-priv rac2-priv.mydomain

    # virtual

    192.168.56.114 rac1-vip rac1-vip.mydomain

    192.168.56.113 rac2-vip rac2-vip.mydomain

    # SCAN

    192.168.56.117 oracle-scan oracle-scan.mydomain

    192.168.56.118 oracle-scan oracle-scan.mydomain

    192.168.56.119 oracle-scan oracle-scan.mydomain

    Thanks in advance.

  • Laury
    Laury Member Posts: 1,636 Silver Badge

    Not reproducible. Solved by re-installing from scratch.
