That's exactly what I've been fighting with tonight. Except my client was another solaris box. Server: Solaris 11, Client: Solaris 10. In the end I made it work by doing it the more traditional way:
share -F nfs -o rw=10.1.1.10 /rpool/data
I have a fresh install and wanted to set up a share to move data back to it. Right out of the box, following several how-tos (which all essentially said the same thing), I couldn't get past the permission denied problem until I used 'share'. Seems like the zfs set method isn't quite behaving. It's such a neat feature, I was disappointed it wasn't working for me.
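For comparison, here's roughly what the how-tos describe versus what worked for me. The pool/path and client address are from my setup; adjust to yours:

```shell
# The ZFS-native way the how-tos describe (Solaris 10 syntax);
# this is the method that wasn't behaving for me:
zfs set sharenfs=rw=10.1.1.10 rpool/data

# Check what ZFS thinks it is sharing:
zfs get sharenfs rpool/data

# The traditional way that finally worked:
share -F nfs -o rw=10.1.1.10 /rpool/data
```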
hmm, I tried that but it didn't work either. In fact it wouldn't even mount.
After re-reading your post, it sounds like a permissions thing, actually: the ID on the client needs to have execute permission on the directory in order to cd into it. Is it world +x? If not, does your uid on the client side match something that can cd into that directory on the server side? To test, you could chmod 777 the directory on the server just long enough to see if you're able to cd into it from the client.
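Something like this on the server, assuming the share lives at /rpool/data (swap in your own path):

```shell
# Check the current mode and ownership of the shared directory
ls -ld /rpool/data

# Temporarily open it wide to rule out a permissions problem
chmod 777 /rpool/data

# ...try cd-ing in from the client, then tighten it back up, e.g.:
chmod 755 /rpool/data
```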
yeah, doing a chmod 777 on the pool/share did the trick. The only issue then is that it created files with the UID and GID of my OS X system rather than my Solaris user "nas". I guess that's the normal way NFS works. I've never used NFS before so it's all new to me. Is there a way to force it to a particular UID and GID?
Well, the completely lazy way would be make your client UID and your nfs UID the same (change one or the other).
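As a sketch of the lazy way: say your OS X uid turns out to be 501 (just an example here; check with id on the client) and you change the server's "nas" account to match:

```shell
# On the client (OS X), find your uid/gid:
id

# On the server, change the "nas" account's uid to match the
# client's, then fix up ownership of any existing files:
usermod -u 501 nas
chown -R nas /rpool/data
```

After that, files created over NFS land with the same numeric uid on both sides, so no mapping is needed.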
But if you don't want to cheat, I think you can get there with map_static:
This option enables static mapping. It specifies the name of the file that describes the uid/gid mapping. The file's format looks like this:
# Mapping for client foobar:
# remote local
uid 0-99 - # squash these
uid 100-500 1000 # map 100-500 to 1000-1500
gid 0-49 - # squash these
gid 50-100 700 # map 50-100 to 700-750
Is this done on the server or client? If on the server where do I configure this?
Yup, on the server. Create a file called /etc/nfs.map, populate it something like this:
# remote local
gid 123 1234
uid 234 2345
Then in the export options, include map_static=/etc/nfs.map.
For example: share -F nfs -o rw=192.168.1.123,map_static=/etc/nfs.map
Note that you'll need a unique map for each export you do. And if you encounter issues, try exporting to just the client's IP address instead of the whole subnet (if you aren't already).
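Once it's set up, you can check whether the mapping is actually taking effect from the client side. The server address and mount point here are hypothetical; substitute your own:

```shell
# On a Solaris client, mount the export (on OS X it's mount -t nfs):
mount -F nfs 10.1.1.1:/rpool/data /mnt

# Create a test file, then list it numerically; the uid/gid shown
# is what the server actually recorded for your client's user:
touch /mnt/testfile
ls -ln /mnt/testfile
```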