Set priority for static pods #6897
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @vainu-arto. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
CLA should be fixed...
/ok-to-test
For the master pods (apiserver, controller manager, scheduler) this is unlikely to ever matter (the masters aren't expected to run out of resources and need to evict things), but evictions of kube-proxy from worker nodes are easy to trigger in clusters with PodPriority enabled. Since these are static pods, the configuration is also somewhat difficult to change.
Again, this is unlikely to matter since master nodes aren't expected to run out of capacity; it is done mostly for completeness (all pods should usually have a priority defined if the cluster is running with PodPriority enabled).
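To illustrate the kind of change being discussed, a static pod manifest can pin one of Kubernetes' built-in priorities via `priorityClassName`. This is a minimal sketch, not the actual manifest kops generates; the image tag and command are illustrative only:

```yaml
# Illustrative static pod manifest (not the exact kops-generated file).
# system-node-critical is a built-in PriorityClass intended for pods
# that should never be evicted from the node they run on.
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  priorityClassName: system-node-critical
  containers:
  - name: kube-proxy
    image: k8s.gcr.io/kube-proxy:v1.14.0  # version is illustrative
    command: ["/usr/local/bin/kube-proxy"]
```

Because `system-node-critical` (and `system-cluster-critical`) are built into Kubernetes, no PriorityClass object needs to be created before referencing them.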
/lgtm
/assign kashifsaadat
/assign @mikesplain
/assign @justinsb
This looks good! We actually ran into this recently as well so I'm going to open cherry picks into the 1.13 and 1.14 branches too. Thanks so much for the patience and for the contribution @vainu-arto! /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mikesplain, vainu-arto. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Approvers can indicate their approval by writing /approve in a comment.
/retest
1 similar comment
/retest
…97-origin-release-1.14 Automated cherry pick of #6897: Add helpers to set the built-in pod priorities
…97-origin-release-1.13 Automated cherry pick of #6897: Add helpers to set the built-in pod priorities
/retest
2 similar comments
/retest
/retest
@mikesplain thoughts on cherry-picking this to earlier branches?
…tatic-pods Set priority for static pods
Setting a high priority for kube-proxy is important in clusters that run with PodPriority enabled; otherwise it will get evicted from worker nodes whenever there are unschedulable pods. Also set priorities for the other static pods defined by kops, mainly for completeness.
This should be safe these days since the API always accepts PriorityClassName even if PodPriority isn't enabled.
Fixes #6615.