"mount(2): Operation not permitted" in plain docker installation #42
After adding

Here's my adapted export.txt:
and my adapted docker-compose.yml (I've only added portmappings and got rid of the version restrictions):
and this is the output for different client-versions, depending on which mountpoint I try:
when I use
In conclusion, the permission checks of NFSv4 are stricter, and I hadn't mapped port 111, which is needed to use NFSv3. For v4 I assume I'll have to add user-id mapping to make it work without the insecure flag.
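For reference, the insecure flag referred to here is set per share in the exports file; a minimal sketch with a placeholder path and client network (not the poster's actual export.txt):

# hypothetical exports entry; path and client spec are placeholders
/export 192.168.1.0/24(rw,no_subtree_check,insecure)

The insecure option lifts the default requirement that client requests originate from a privileged source port (below 1024).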
Unfortunately I have been getting nowhere today, trying to get this working with AUTH_SYS and idmapd. Some guidance would be greatly appreciated :-/ I've added idmapd.conf, I've created users on the server with the same UID/GID and name as the client, but I still get
Here's my idmapd.conf:
Here's the server's DEBUG log:
Taking a look now. It'll take me a moment to digest your issue. Thanks for posting your debug logs - that's super helpful! Stand by.
Couple of clarifying questions:
NFSv4 doesn't actually require

Here's an updated nfs-server:
image: erichough/nfs-server
ports:
  - 2049:2049
  - 2049:2049/udp
  - 111:111
  - 111:111/udp
  - 32765:32765
  - 32765:32765/udp
  - 32767:32767
  - 32767:32767/udp
volumes:
  - ./nfs/exports.txt:/etc/exports:ro
  - ./data/nfs-export:/export
  - /lib/modules:/lib/modules:ro
cap_add:
  - SYS_ADMIN
  - SYS_MODULE
environment:
  NFS_VERSION: 3
  NFS_LOG_LEVEL: DEBUG
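For what it's worth, a client-side NFSv3 mount against a config like this would look something like the following sketch (the IP, export path, and mountpoint are examples, and mountd is assumed to be on 32767 as in the port mappings above):

# illustrative NFSv3 mount with fixed ports; adjust IP, export path, and mountpoint
sudo mount -v -t nfs -o vers=3,proto=tcp,port=2049,mountport=32767 127.0.0.1:/export /mnt/nfs-test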
That's fine for now.
Yes, it's my Ubuntu workstation. I've stopped and disabled AppArmor now (and rebooted to make sure nothing is loaded).
No, it's on the host machine.
I've adapted my docker-compose similarly to the one you posted:
and the server is not starting now:
When I had this issue yesterday, I went through the apparmor-config to get it running. But apparmor is definitely disabled now:
Hmm, that error message is certainly the one that usually accompanies AppArmor interference. But what's weird is that it shows up even though AppArmor is clearly disabled on your host. I'm starting to wonder if this is an as-yet undiscovered bug in this image. Can you think of any other security controls on your system that might be interfering?

I would still like to see the output of your client-side mount attempt. Do you have a secondary, independent host on which you could easily test? That might be useful in helping to give us an idea of where the problem lies.

Thanks for your patience in figuring this out. As long as you are willing, I'm happy to keep digging to get at the root of the problem. I think we'll get it solved!
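For example, a few illustrative host-side checks for interfering security frameworks (assuming the standard AppArmor/SELinux userspace tools, where installed):

sudo aa-status                                # lists loaded AppArmor profiles
cat /sys/module/apparmor/parameters/enabled   # Y or N depending on whether AppArmor is active
getenforce                                    # SELinux mode, if SELinux tools are present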
Well, the server is not starting with this config, so no mounting :-/ The host machine is a vanilla Ubuntu 18.04.4; no SELinux installed, as far as I know. I have another pure Debian machine; I will configure it for Docker and test the setup there and will post when I've done that.
FYI: this is the mount output on my workstation (the Ubuntu box) when I just add the apparmor-stuff to the simplified config; not sure if this helps:
This is the accompanying
And I've loaded the apparmor-profile:
What's the output of:
and (if you have netcat installed)
I'm just wondering if we're dealing with a networking issue.
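The exact commands are elided above, but typical connectivity probes for this setup would be along these lines (127.0.0.1 is a placeholder for wherever the container's ports are published):

rpcinfo -p 127.0.0.1        # query the portmapper on port 111
nc -zv 127.0.0.1 2049       # TCP probe of nfsd
nc -zvu 127.0.0.1 111       # UDP probe of the portmapper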
These commands look good, having started the above setup with AppArmor on and the 'erichough-nfs' profile loaded.
Sorry, I've not been able to try it on a different box yet. The 32-bit box I mentioned doesn't have virtualization capabilities (so no Docker), and the servers I have access to are all virtualized instances. My laptop has the same setup, Ubuntu 18.04.4 and the same kernel as my workstation, so I doubt there will be much difference, but I'll try today to see if it's an arbitrary issue with my workstation.
I didn't redact the log but I lost the debug-flag on the way, sorry about that! :-/
and here with apparmor & profile loaded:
And here's the client-log for v3-only:
So I have now installed Debian Buster on a second computer. The results are the same as far as I can tell. I can't run it without AppArmor either, btw ("mount rpc_pipefs permission denied"). Weirdly, I also had to map the docker image to port 112, as Debian insists on using rpc-statd to start the client, so port 111 is occupied by rpcbind. With the

Here's the nfs-server output on the Debian machine (with the docker-compose.yml you posted above - NFSv3 only, but AppArmor and the profile loaded. Without AppArmor it fails with the "mount rpc_pipefs permission denied" error, like on my other machine):
Here's the mount-log:
Let me know if there's anything else I can try.
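For reference, a sketch of the port remapping described above (only the ports section shown; host port 112 forwards to the container's portmapper because rpcbind on the Debian host already holds 111):

ports:
  - 2049:2049
  - 2049:2049/udp
  - 112:111
  - 112:111/udp
  - 32765:32765
  - 32765:32765/udp
  - 32767:32767
  - 32767:32767/udp

The client side then has to be pointed at the alternate portmapper port, or mount with explicit port=/mountport= options, since it looks for the portmapper on 111 by default.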
Networking looks good to me, and I don't think AppArmor is to blame.
I've bumped into that once or twice in the past. Probably not related to our issue.
That's our best clue so far, but it would be the first time I've ever seen NFSv4 work but not NFSv3! This might be worth trying to unravel a little more. In the last debug output you posted, I see that the server is still using
Does your copy of

One other thing to check is the filesystem(s) of both your NFS share directory on the host and the mount point on the client. Possibly stupid question: are you able to perform unrelated, non-NFS mounts on this machine? e.g. manually mounting a hard drive, or a FUSE mount, or a bind mount? Just still trying to figure out if this is the OS messing with us, or a problem with NFS.
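Concretely, the checks meant here might look like this (paths are placeholders):

df -T ./data/nfs-export /mnt/nfs-test                               # filesystem type of the export dir and the client mountpoint
sudo mount --bind /tmp /mnt/nfs-test && sudo umount /mnt/nfs-test   # throwaway bind mount to confirm plain mounts work at all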
Thank you for bearing with me! Mount output of
Filesystem is BTRFS on the Ubuntu machine and ext4 on the Debian one; both directories are on the same filesystem. I've mounted sshfs (=FUSE) and a bind-mount on the mountpoint without issues 🤷♂️

The port-weirdness and the fact that NFSv3 insecure does work on Ubuntu but not on the Debian machine made me think it might be a client-issue after all, so I set up the server to listen on their LAN-IPs and mounted stuff cross-machine. The behaviour was exactly the same. With the

After switching from localhost (127.0.11.20) to the LAN-IP, I'm able to mount the Ubuntu-server both locally and across the machines with both versions and

Except for the IP and port 111 vs. 112, they now have an identical configuration:
Here's their respective output: Ubuntu
Debian
I have compared the logs line by line and they are identical. Unfortunately I need a setup where I can run the server on 127.0.11.20. Do you have any further ideas with this new development?
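For what it's worth, a quick illustrative way to confirm what Docker has actually bound on the loopback address (requires iproute2's ss):

sudo ss -tlnp | grep -E ':2049|:111'    # TCP listeners for nfsd and the portmapper
sudo ss -ulnp | grep -E ':2049|:111'    # and the UDP side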
This certainly feels like it's related to networking and the

version: '3'
services:
  nfs-server:
    image: erichough/nfs-server
    ports:
      - 10.0.0.92:2049:2049
      - 10.0.0.92:2049:2049/udp
      - 10.0.0.92:111:111
      - 10.0.0.92:32767:32767
      - 10.0.0.92:32767:32767/udp
      - 10.0.0.92:32765:32765
      - 10.0.0.92:32765:32765/udp
    ...

Out of curiosity, is there any reason why you are being explicit with the IP in these port listings? It shouldn't make a difference, but it might be worth ditching the IP just to see if anything changes. Can you double-check that your AppArmor profile is the one specified in the docs? Anything interesting show up in
I wonder... What's below:
My client: (Emperor)
My Server (Magellan)
Mount attempts from the client
Docker:
Start the nfs-server container:
Log file: All seems fine
Docker ps:
rpcinfo: (Issue #41?)
The output from netstat:
ifconfig: none of the veth0... adapters show an IPv4 address
Container ifconfig: doesn't show an IPv6 address
I had a similar problem. Turned out that on my OpenStack the new Ubuntu 20.04 was with
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-server-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: "/Users/brandonros/Desktop/nfs-server-volume"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-server
spec:
  replicas: 1
  serviceName: nfs-server
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: registry.hub.docker.com/erichough/nfs-server:2.2.1
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_MODULE
          env:
            - name: NFS_EXPORT_0
              value: "/mnt *"
          ports:
            - containerPort: 2049
          volumeMounts:
            - mountPath: /mnt
              name: nfs-server-volume
            - mountPath: /lib/modules
              name: lib-modules
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
  volumeClaimTemplates:
    - metadata:
        name: nfs-server-volume
      spec:
        storageClassName: local-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
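One note on the manifest above: it defines no Service, so in-cluster clients have nothing stable to point a mount at. A minimal sketch of one, with the name chosen to match the StatefulSet's serviceName (this is an assumption, not part of the original comment):

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
      protocol: TCP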
I had the same problem. The only way to solve it while still using the secure option in the exports was to change the network mode from bridge to host in Docker. I am using docker-compose; I just added network_mode: host to my compose file.
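A minimal compose fragment for that change might look like the following (service and image names assumed from earlier in the thread; note that with host networking any ports: mappings are ignored):

services:
  nfs-server:
    image: erichough/nfs-server
    network_mode: host
    cap_add:
      - SYS_ADMIN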
I'm getting

mount(2): Operation not permitted

when I try to mount the nfs-share. I've adapted apparmor and added cap_sys_admin for my current user (which you mentioned in the linked issue). Since I only have a very limited idea what this whole capability-thing is, I've followed some stackoverflow questions and added

cap_sys_admin benke

in /etc/security/capability.conf, as well as putting

auth optional pam_cap.so

in /etc/pam.d/su (although, while it seems to have worked, I guess this is probably not the right place, as I don't understand how su comes into this). In any case, after adding these changes, capsh --print for the user running the docker-container contains cap_sys_admin+i in Current:

However, this didn't fix the issue, nothing has changed. I hope you can help me out here, as I'm in the dark how this is supposed to work.
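For comparison, the capability can also be granted to the container itself rather than to the user running it, which is what the compose files elsewhere in this thread do; a minimal fragment of the service definition (service name assumed):

nfs-server:
  image: erichough/nfs-server
  cap_add:
    - SYS_ADMIN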
This is the full debug-output when trying to mount
Here's the server output:
And this is my docker-compose.yml