Oracle VM Manager can't mount File Server

Hello guys. I am pretty new to Oracle and I am trying to learn it as best I can.
What I'm trying to do is create a cluster in Oracle VM Manager. I have installed Oracle Linux 7.5 and OVS 3.4 (both on VirtualBox), and now I am trying to set up an NFS export on the Oracle Linux 7.5 machine. Since I have no idea what to do, I followed some tutorials: they say to create a directory, for example /public, with two directories inside it, for example nfs1 and nfs2, then add the export to /etc/exports as /public *(rw,sync) and restart the rpcbind and nfs services. But when I try to add the File Server in Oracle VM Manager, I get this error on Refresh File Server:
Refresh File Server: nfsserver [screenshot of the error log, not included]
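Concretely, the steps I followed on the Oracle Linux 7.5 box were roughly these (the directory names are just the ones from the tutorial, and the NFS unit name may differ on your install):

```shell
# create the directories to export
mkdir -p /public/nfs1 /public/nfs2

# export /public to everyone, read-write, synchronous writes
echo '/public *(rw,sync)' >> /etc/exports

# restart the services as the tutorial said
systemctl restart rpcbind
systemctl restart nfs-server   # on some OL7 installs the unit is just "nfs"
```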
Answers
-
Hi,
can you please clarify how you set up your hosts? Which host has OVMM installed, where is the NFS export running, and which host is your OVS host? From the log it seems that you're trying to mount the NFS export from 172.16.0.5, but there doesn't seem to be an NFS server running there.
Also, NFS uses some random high ports, so to make it really easy (though not safe by any stretch), disable the firewall on the host the NFS exports are running on.
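Something like this on the host that is supposed to serve the export is usually enough for a quick check (service names assumed for OL7; adjust if your box differs):

```shell
# is an NFS server actually running and exporting anything on this host?
systemctl status nfs-server
showmount -e localhost

# quick-and-dirty test only: take the firewall out of the picture entirely
systemctl stop firewalld
systemctl disable firewalld
```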
Cheers,
budy
-
It'd really help us all if you could provide hostnames, IP addresses, and function (e.g., manager, storage, and compute [i.e., hypervisor/host]).
For example: My home lab...
Storage1  [eth0] 172.16.0.1  [eth1] 172.16.1.1
Storage2  [eth0] 172.16.0.2  [eth1] 172.16.1.2
Manager1  [eth0] 172.16.0.3  [eth1] 172.16.1.3
Compute1  [eth0] 172.16.0.4  [eth1] 172.16.1.4
Compute2  [eth0] 172.16.0.5  [eth1] 172.16.1.5
(I've omitted the function of each host because the hostname makes it self-evident, IMO. What isn't self-evident: The Red_network 172.16.0.[1-5] is management traffic. The Green_network 172.16.1.[1-5] is for storage traffic.)
And then: Verify that each of the hosts can reach each of the other hosts' interfaces on the same segment (i.e., 172.16.0.1 can reach 172.16.0.[2-5] and 172.16.1.1 can reach 172.16.1.[2-5]). (FWIW: If one host can reach all hosts then most likely all hosts can reach all hosts so there's no reason to perform this test twenty times [in an environment with five hosts].)
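A quick loop run from any one host covers that, e.g. (a sketch only; adjust the subnets and host numbers to your own addressing):

```shell
# one ping to every management (172.16.0.x) and storage (172.16.1.x) address
for net in 172.16.0 172.16.1; do
  for host in 1 2 3 4 5; do
    ping -c1 -W2 "${net}.${host}" >/dev/null \
      && echo "${net}.${host} reachable" \
      || echo "${net}.${host} UNREACHABLE"
  done
done
```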
If that all works: Then verify that your manager and your compute nodes can access the storage hosts' NFS exports (e.g., showmount -e 172.16.0.1), using your storage network if you have created one; otherwise just use the single network that exists. If you're not able to access the storage hosts' NFS exports (see the sketch after this list):
- Disable SELinux on all hosts.
- Disable iptables on all hosts.
- Reboot the storage nodes.
- Try again.
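A rough sketch of those steps, assuming OL7-style tooling (whether it's firewalld or the classic iptables service depends on how the host was built, and permissive SELinux is usually enough for a test):

```shell
# from the manager and compute nodes: can they see the exports at all?
showmount -e 172.16.0.1

# on every host: SELinux to permissive now, and keep it that way across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# on every host: stop and disable whichever firewall service is in use
systemctl stop firewalld && systemctl disable firewalld
# or, if the classic iptables service is in use instead:
# systemctl stop iptables && systemctl disable iptables

# then reboot the storage nodes and retry the refresh from OVM Manager
```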
HTH,
Eric Pretorious
Portland, Oregon