
Oracle VM Manager can't mount File Server

3718906
3718906 Member Posts: 1
edited Jun 2, 2018 5:27PM in Oracle VM Server for x86

Hello guys. I am pretty new to Oracle, and I am trying to learn it as best I can.

What I'm trying to do is create a cluster in Oracle VM Manager. I have installed Oracle Linux 7.5 and OVS 3.4 (both in VirtualBox), and now I am trying to set up an NFS share on the Oracle Linux 7.5 machine. Since I had no idea what to do, I followed some tutorials: they say to create a directory, for example /public, with two directories inside it, for example nfs1 and nfs2, then add the export to /etc/exports as /public *(rw,sync) and restart the rpcbind and nfs services. But when I try to add the File Server in Oracle VM Manager, the Refresh file server step fails with this error:

OVMAPI_B000E Storage plugin command [storage_plugin_listFileSystems] failed for storage server [0004fb0000090000c4785e7010446003] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_listFileSystems to server: oracle.example.com failed. OVMAPI_4004E Sync command failed on server: 172.16.0.1. Command: storage_plugin_listFileSystems, Server error: <class 'OSCPlugin.OperationFailedEx'>:'Unable to access file system at 172.16.0.5: clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)\n' [Sat Jun 02 18:22:07 CEST 2018]] OVMAPI_4010E Attempt to send command: storage_plugin_listFileSystems to server: oracle.example.com failed. OVMAPI_4004E Sync command failed on server: 172.16.0.1. Command: storage_plugin_listFileSystems, Server error: org.apache.xmlrpc.XmlRpcException: <class 'OSCPlugin.OperationFailedEx'>:'Unable to access file system at 172.16.0.5: clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)\n' [Sat Jun 02 18:22:07 CEST 2018]

Job details (Refresh File Server: nfsserver):

    Job ID:    1527956526731
    Status:    Failure
    Owner:     admin
    Created:   Jun 02, 2018 6:22:06 pm
    Started:   Jun 02, 2018 6:22:06 pm
    Completed: Jun 02, 2018 6:22:07 pm
    Duration:  453ms
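
For reference, the export setup described in the question boils down to roughly the following on the Oracle Linux 7.5 box (a sketch only; the paths and export options are the ones from the tutorials mentioned above, and it assumes the stock nfs-utils services on OL 7.5):

    # Create the export directory and the two sub-directories
    mkdir -p /public/nfs1 /public/nfs2

    # Export /public read-write with synchronous writes to any client
    echo '/public *(rw,sync)' >> /etc/exports

    # Restart the RPC port mapper and the NFS server (this also re-reads /etc/exports)
    systemctl restart rpcbind nfs-server

    # Check that the export is visible locally
    showmount -e localhost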

Answers

  • budachst
    budachst Member Posts: 1,832
    edited Jun 2, 2018 5:00PM

    Hi,

can you please clarify how you set up your hosts? Which host has OVMM installed, where is the NFS export running, and which host is your OVS host? From the log it seems that you're trying to mount the NFS export from 172.16.0.5, where there doesn't seem to be an NFS server running.

    Also, NFS uses some random high ports, so to make it really easy (though not safe by any stretch), disable the firewall on the host the NFS exports are running on.
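
    On Oracle Linux 7 that boils down to something like this (a quick sketch, assuming firewalld is the active firewall; only do this in a throwaway lab):

        # On the host that serves the NFS exports - lab use only
        systemctl stop firewalld
        systemctl disable firewalld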

    Cheers,

    budy

  • PDXPretorious
    PDXPretorious Member Posts: 21
    edited Jun 2, 2018 5:27PM

    It'd really help us all if you could provide hostnames, IP addresses, and the function of each host (e.g., manager, storage, or compute [i.e., hypervisor/host]).

    For example: My home lab...

    Storage1 [eth0]   172.16.0.1

             [eth1]   172.16.1.1

    Storage2 [eth0]   172.16.0.2

             [eth1]   172.16.1.2

    Manager1 [eth0]   172.16.0.3

             [eth1]   172.16.1.3

    Compute1 [eth0]   172.16.0.4

             [eth1]   172.16.1.4

    Compute2 [eth0]   172.16.0.5

             [eth1]   172.16.1.5

    (I've omitted the function of each host because the hostname makes it self-evident, IMO. What isn't self-evident: the Red_network, 172.16.0.[1-5], carries management traffic, and the Green_network, 172.16.1.[1-5], carries storage traffic.)

    And then: verify that each of the hosts can reach each of the other hosts' interfaces on the same segment (e.g., 172.16.0.1 can reach 172.16.0.[2-5], and 172.16.1.1 can reach 172.16.1.[2-5]). (FWIW: if one host can reach all hosts, then most likely all hosts can reach all hosts, so there's no reason to perform this test twenty times [in an environment with five hosts].)
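
    A quick way to run that check from one host (a sketch; adjust the addresses to your own lab):

        # From, e.g., Manager1: ping every management and storage address once
        for i in 1 2 3 4 5; do
            ping -c 1 -W 2 172.16.0.$i
            ping -c 1 -W 2 172.16.1.$i
        done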

    If that all works, verify that your manager and your compute nodes can see the storage hosts' NFS exports (e.g., showmount -e 172.16.0.1), using your storage network if you have created one; otherwise just use the single network that exists. If you're not able to see the storage hosts' NFS exports, try the following (sketched after the list):

    1. Disable SELinux on all hosts.
    2. Disable iptables on all hosts.
    3. Reboot the storage nodes.
    4. Try again.
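
    On EL7-based hosts those steps look roughly like this (a sketch; it assumes firewalld/iptables and SELinux are what's in the way, and that the test export lives on 172.16.0.1):

        # On all hosts: put SELinux into permissive mode now and across reboots
        setenforce 0
        sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

        # On all hosts: stop the firewall (firewalld on OL7; iptables on older setups)
        systemctl stop firewalld 2>/dev/null || systemctl stop iptables

        # Reboot the storage nodes, then re-test from the manager and compute nodes
        showmount -e 172.16.0.1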

    HTH,

    Eric Pretorious

    Portland, Oregon

This discussion has been closed.