Forced remount because options changed when no options changed (2014.7 regression) #18630
@nvx Thank you for this very helpful bug report! We'll check this out. |
This might be related to #18474 |
@garethgreenaway I don't think so: applying the patch manually didn't fix the issue for me. Besides, I believe the option checking should not fail in the first place: as @nvx said, no option changed. |
As a guess, I would agree with @nvx: it's remounting because the addr option appears in the list of mounted file systems but not in the mount options specified in the state. There are a few options that we flag as "hidden"; we may have to do the same for the addr option. |
Not sure if "me too"s are useful, but I'm experiencing this as well, and also with an NFS mount. My SLS entry looks like this:
and
but when I run
and then the output includes this:
I'm running version 2014.7.0+ds-2trusty1 from the PPA:
|
Planning to take a look at this today, hoping that some of the fixes I put in that got merged this morning addressed this. Thanks for the reports. |
The fix provided by @garethgreenaway in #18978 changed, but didn't fix the situation for me:

```
----------
          ID: library-storage-mount
    Function: mount.mounted
        Name: /media/remotefs/library
      Result: None
     Comment: Remount would be forced because options (bg) changed
     Started: 23:35:40.916259
    Duration: 73.019 ms
     Changes:
```
|
@eliasp Can you include your state? |
@garethgreenaway Sure, sorry…

```yaml
{{ share }}-storage-mount:
  mount.mounted:
    - name: {{ pillar['storage']['mountroot'] }}/{{ share }}
    - device: {{ host }}:/{{ share }}
    - fstype: nfs
    - mkmnt: True
    - opts:
      - defaults
      - bg
      - soft
      - intr
      - timeo=5
      - retrans=5
      - actimeo=10
      - retry=5
    - require:
      - pkg: nfs-common
```

The current mount already includes these options:

```
$ mount | ack-grep -i library
134.2.xx.xx:/library on /media/remotefs/library type nfs (rw,bg,soft,intr,timeo=5,retrans=5,actimeo=10,retry=5)
```

The corresponding entry in
Besides that, I stumbled upon a traceback while working on this: in case the device/resource is currently busy, the mount state will raise a traceback instead of handling it more gracefully. |
Another related case I found here, with a CIFS mount:

```yaml
elite-mount:
  mount.mounted:
    - name: {{ pillar['linux']['remotefspath'] }}/{{ name }}_share
    - device: //{{ data.address }}/{{ data.share }}
    - fstype: cifs
    - mkmnt: True
    - opts:
      - username={{ data.user }}
      - password={{ data.password }}
      - _netdev
      - soft
```

state.highstate test=True:
Entry in
So there are two issues here:
|
… and another option which causes This is caused by the CIFS state described in my previous comment. |
Is soft a valid flag for cifs? |
Yes, soft is a valid flag which is used by default, so my usage of it in the state is actually redundant but not wrong. |
Thanks for following up everyone! @nvx with the two pull requests applied, is this issue fixed for you as well? |
I can confirm that #19369 will help with this problem... I have the same problem with nfs mounts that have a "bg" option set. The problem is that these options are not reflected in the proc filesystem. In which release could I see the fix? |
This will be in … If you want to test it now, place salt/states/mount.py from … Then run … Don't forget to remove … |
Hmm I updated states/mount.py from the 2014.7 branch and I now get this error:
I tried updating modules/mount.py as well but it didn't help.
This could be an issue in 2014.7 unrelated to this fix, though. Is there a known-good revision that has this issue fixed that I can test to confirm the fix? |
@nvx what OS? distribution? Kernel version? |
Same as described in the initial post. RHEL5 x64
|
Thanks. 2.6.18 obviously doesn't have the superopts bits in the proc file. Will take a look and submit a fix. |
Was able to duplicate this in a Docker instance. A couple of questions: are you running it inside a Docker instance? What happens when you run the blkid command on the machine in question? |
@pkruithof Perfect! Exactly what I was seeing as well. the vers option is showing up in the mounted file system as vers=4.0 but when specified in the salt state it's vers=4, salt sees the difference and forces a remount. Looks like a scenario we need to account for, I'll look at a fix later today. |
@garethgreenaway any progress on this by any chance? |
I see this in 2015.8.11
|
Same in 2016.3.3:
|
I had this issue as well, but specifying the opts as a list helped:
|
I'm having the same issue on 2016.11.2 with the
|
This is causing problems for us as well. My state:
Output:
I tried removing the
This issue seems to have been forgotten. |
@corywright Compare the options in your state with the options that are listed in /proc/mounts. Mount options are a royal pain since they change with what is used when the mount command is issued and what actually ends up being used. I've seen nfsvers=4 translate into nfsvers=4.0 or similar. And I wonder if nolock is one that ends up being hidden in the /proc/mounts file. |
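One way this mismatch could be tolerated is a value-aware comparison: treat numerically equal option values (vers=4 vs. vers=4.0) as the same. A hypothetical sketch of such a helper, not Salt's actual code:

```python
def same_opt(a, b):
    """Compare two mount options such as 'vers=4' and 'vers=4.0', treating
    numerically equal values as identical (hypothetical helper)."""
    ka, _, va = a.partition("=")
    kb, _, vb = b.partition("=")
    if ka != kb:
        return False
    try:
        return float(va) == float(vb)  # "4" == "4.0" numerically
    except ValueError:
        return va == vb                # non-numeric values need an exact match

print(same_opt("vers=4", "vers=4.0"))  # prints True
print(same_opt("vers=4", "vers=4.1"))  # prints False
```

Note that this only papers over the 4 vs. 4.0 rendering difference; a server negotiating "4" up to "4.1" would still show as a changed option, which is a separate policy question.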
@garethgreenaway Thanks Gareth. I can see the differences there. Is there a solution? Or should the … It seems like the documentation currently directs users to implement states that can be problematic in production (unexpectedly unmounting busy NFS volumes during highstates). |
It stands to reason that the forced unmount and mount because options changed in case of "vers=4" vs. "vers=4.0" is counter-intuitive. If |
@shallot the salt module and state module do accept "4" as the value for nfsvers, the issue is on the system side outside of Salt. That "4" can be translated to 4.0, 4.1, etc. so the next time the state runs the values are different and the volume is remounted. |
@garethgreenaway sure, but when the system being modified doesn't actually see the difference between two invocations of such a salt state, then it didn't make sense to remount. I can fathom a situation where someone applies such a salt state with the actual intention of having the mount upgraded to whatever is the latest 4.x version, but it seems too far-fetched to be the default. When the nifty upgrade feature implicitly risks downtime or user data loss, its default should be conservative, to require the user to make that choice explicit. |
Long read! But I can confirm that I have this issue with nfsvers=4.1: the remount complains and usually fails because my workloads are using the shares! |
Long read too, and I have the issue with the "ac" option of nfs on CentOS 6.9. |
Same issue with EFS aka nfsvers=4.1, salt 2016.11.7. |
I got the same problem. I resolved it by replacing nfsvers with vers in the options. Now the remount is not forced every time, and it works. So, using "vers" instead of "nfsvers" is a good workaround. |
@davidegiunchidiennea That sounds like the solution! We should document this workaround somewhere. |
I've got the same problem here with option noauto. (Version 2016.11.4)
State is:
|
same bug here with user_xattr for ext4 fs
Output:
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue. |
Just an update, I can still see this issue |
Same here (Salt 3004):
yet wsize was 1048576 all along, and is also reported by /proc/mounts and /etc/mtab |
debian 11 / salt 3004.2 |
Created a new bug report with a proposal for a solution (at least for |
Running a highstate on minions with some NFS mounts results in the mount being remounted every time. This did not occur under 2014.1
Running mount -l shows the following:
I can only assume it's breaking due to the addr option (which, by the looks of it, is filled in automatically by the OS; it was never manually specified as a mount option) or the ordering.
The mount.mounted state looks as follows: