Job cache exhausting inodes #54924
Comments
I have mostly run 2018.3.x, having reverted after trying the 2019.2.0 release.
There might not be much that can be done. You might also be able to create another filesystem with more inodes on a ramdisk or similar and mount it at that path.
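For reference, inode pressure on the filesystem holding the cache can be checked with df -i, and the workaround above amounts to mounting a tmpfs with its own inode budget at the job cache path. A sketch, assuming a default-style cache path and illustrative sizes (neither is stated in the thread; tmpfs contents are also lost on reboot):

```shell
# Check inode usage on the filesystem holding the job cache.
df -i /var/cache

# Hypothetical workaround: mount a tmpfs with its own inode pool at the
# job cache path (path, size, and nr_inodes are assumptions for illustration).
# mount -t tmpfs -o size=2G,nr_inodes=2000000 tmpfs /var/cache/salt/master/jobs
```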
Except there should be fewer than 1000 jobs being retained with my setup.
The oldest file is from 2019-10-10 10:14:57 (24h ago).
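Both figures quoted above (retained-entry count, oldest file) can be measured directly from the cache tree. A sketch, assuming the default Salt job cache path:

```shell
CACHE=/var/cache/salt/master/jobs   # assumed default job cache path

# Total inodes (files plus directories) the cache tree is holding.
find "$CACHE" | wc -l

# Oldest file in the cache, by modification time (GNU find).
find "$CACHE" -type f -printf '%T+ %p\n' | sort | head -n 1
```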
After some random sampling I think this may be caused (or at least exacerbated) by #54941.
It still might make sense to nest the jobs in a deeper directory structure.

Wouldn't that use even more inodes?
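For a rough sense of that trade-off: one extra sharding level adds only a bounded number of directory inodes while capping the entries per directory, so the per-job files remain the dominant cost. A back-of-the-envelope sketch (the shard scheme and counts are illustrative, not from the thread):

```shell
jobs=1000     # illustrative retained-job count, per the comment above
shards=256    # e.g. one extra level keyed on two characters of the jid
extra_dirs=$shards
max_per_dir=$(( (jobs + shards - 1) / shards ))   # ceiling division
echo "extra directory inodes: $extra_dirs"
echo "max entries per shard dir: $max_per_dir"
```

So nesting does consume extra inodes, but only one per shard directory; what it buys is a bound on directory fan-out.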
After upgrading to 2019.2.2 (and waiting 24h), the usage is much reduced.
Description of Issue

On a system with eight minions running state.apply every 15 minutes, the /var/cache/master/jobs tree used up the majority of the inodes on the file system within a year. I'm afraid I didn't save any info about the files before deleting them.
Setup
All job cache configuration is at the defaults.
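With the defaults in place, retention is governed by the master's keep_jobs setting, which is expressed in hours and defaults to 24. Lowering it shrinks the cache footprint at the cost of job-result history; a sketch of the relevant master-config fragment (the value shown is illustrative):

```yaml
# /etc/salt/master
# keep_jobs is in hours; the default is 24.
keep_jobs: 12
```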
Versions Report