Allow usage of KubeletConfiguration next to JoinConfiguration on kubeadm join #1682
the tricky part here is how to preserve node-specific settings during upgrades...
Allowing for customization on a per-node basis - yes; allowing for a replacement - probably no.
I'm +1 on merging this with the "Advanced configurations with kubeadm (Kustomize)" effort (customization on a per-node basis). As an alternative, if we prefer to keep the scope of Kustomize limited for the first iterations, we should provide a way to specify the ConfigMap to read from, and keep track of this with a new node annotation.
My bet is to push in the Kustomize direction. I don't think that there are many users pushing for this kind of feature, and therefore the delay (necessary for Kustomize) is acceptable.
Without this you can customize the kubelet per node, but it requires using flags, which is problematic given that many are deprecated.
^ the NodeGroup ConfigMap thing makes sense for persistent configurations. I don't think that the lack of persistent config should prevent people from overriding the kubelet config at the local level on join/upgrade. If a user can distribute a JoinConfiguration to every node, they can manage their KubeletConfigurations. Supporting this is as easy as respecting the KubeletConfiguration GVK when it's passed with
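For illustration, the kind of multi-document `--config` file being discussed might look like the sketch below. This is hypothetical: per this thread, kubeadm at the time did not honor a KubeletConfiguration document passed to `kubeadm join`, and all values (token, endpoint, overrides) are placeholders.

```yaml
# join-config.yaml -- illustrative sketch only; kubeadm join historically
# ignored the KubeletConfiguration document in this file.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "abcdef.0123456789abcdef"    # placeholder token
    apiServerEndpoint: "10.0.0.1:6443"  # placeholder endpoint
    unsafeSkipCAVerification: true      # demo only; verify the CA in real clusters
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 64              # example per-node override
systemReserved:
  memory: "512Mi"
```

The hypothetical invocation would be `kubeadm join --config join-config.yaml`, with the second document overriding the cluster-wide kubelet config on that node.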
I'm -1 on the --config option because it complicates all the automation around join/upgrades.
It continues to confuse kind users that this does not work. Being able to patch kubelet flags per node only is a terrible place to be in, with things moving to component config. I don't think "t-shirt sizes" is relevant at all; I could pick a different "t-shirt size" for every single node. kubeadm should enable users to do what they need.
might be a good idea to document this until the solution is available.
how are other kubeadm users doing e.g. node labels? what about Cluster API MachineSets? I would guess almost everything is relying on patching the flags, which is not a viable path forward as the rest of the project pushes component config. If we're against supporting this in kubeadm, I'm inclined to simply provide higher-level kubelet config patching and steer users this way; there are many reasonable things to want to configure more granularly than cluster-wide.
for the time being the kubeadm developers are busy with higher priority tasks.
AFAIK, for workers, nowadays users (including Cluster API) mostly pass custom flags in JoinConfiguration -> ... kubeletExtraArgs.
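As a sketch of that currently-supported path, per-node flags go under `nodeRegistration.kubeletExtraArgs` in the JoinConfiguration (v1beta3 shape, where it is a string map; the label and IP values below are placeholders):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # per-node flags that have no cluster-wide KubeletConfiguration equivalent
    node-labels: "topology.example.com/rack=r42"  # placeholder label
    node-ip: "192.168.1.10"                       # placeholder address
```

These end up as CLI flags on the kubelet, which is exactly the flag-based workaround the rest of this thread is trying to move away from.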
Er, to be clear, I'm not asking anyone to sign up to build this right now, but I'm reading the above discussion as we won't enable this *ever*, in which case we'll need to better enable alternatives. Agree on the kubelet redesign. GKE is an old example of this not being cluster-wide; I'm guessing there it is done via CCM/GCM.
On Thu, Mar 19, 2020, 12:27, Lubomir I. Ivanov wrote:

> > If we're against supporting this in kubeadm I'm inclined to simply provide higher level kubelet config patching and steer users this way, there are many reasonable things to want to configure more granularly than cluster-wide.
>
> for the time being the kubeadm developers are busy with higher priority tasks. in fact, the last few weeks "kubeadm developers" == me, mostly.
>
> > how are other kubeadm users doing e.g. node labels? what about cluster API machinesets?
>
> AFAIK, --node-labels and --node-ip are CLI flags only and are not present in the KubeletConfiguration (because they are not "cluster-wide"). like i've mentioned today in the kind ticket, next to kubeadm supporting this per node, the kubelet needs a bit of a redesign.
>
> for workers nowadays users (including Cluster API) mostly pass custom flags in JoinConfiguration -> ... kubeletExtraArgs.
people like mtaufen and rosti in WG Component Standard are working on instance-specific component config. maybe we can enable the KubeletConfiguration next to JoinConfiguration for
I use a configmap named kubelet-config- as kubelet
k/k PR is up: |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Q for 1.28, with a lazy consensus until 1.28 releases.
this feels like sufficient support for this functionality for the time being. what do folks think? proposing to close.

in the future we could allow the user to pass a KubeletConfiguration on

it also has one weird UX side effect. if the user passes KubeletConfiguration on

if we add the KubeletConfiguration on

i recall @fabriziopandini had one idea to implement a "node selector" for a set of KubeletConfiguration objects passed on

my vote goes for us to keep the patches as the main functionality for node-specific kubelet config.
[This should not be a blocker] Noting that this may have some weird side effects for kind. Currently we pass in an identical yaml document bundle with all objects on all nodes, however users may also supply per-node patches. I intended to switch us to patching cluster components vs kubeadm config differently and updating the API to reflect this, but it hasn't happened yet. If this is happening in 1.28 I may want to prioritize reworking kubeadm config patches in kind. Am I reading correctly that this would be in 1.29 at the earliest?
my comment about the lazy consensus was about potentially closing this ticket once 1.28 releases, if we haven't decided whether we really want a KubeletConfiguration on
i don't recall how kubeadm patches work in kind currently. i think it's similar to CAPI.
then /foo/bar can be mounted on the nodes and it has the patch files.
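A minimal sketch of that layout, assuming the patches directory is `/foo/bar` as above: the file name encodes the patch target (`kubeletconfiguration`) and the patch type (`strategic`), and the override values are placeholders.

```yaml
# /foo/bar/kubeletconfiguration+strategic.yaml
# strategic merge patch applied to the kubelet config generated on this node
maxPods: 64              # placeholder per-node override
evictionHard:
  memory.available: "200Mi"
```

The node would then join with something like `kubeadm join --patches /foo/bar ...`, or with the patches directory set via the `patches.directory` field of JoinConfiguration.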
It's not the same as CAPI because it's older than kubeadm supporting built-in patching. KIND allows inline snippets in the kind config and then patches a manifest containing Init/Join/Kubelet/KubeProxy. I read this backwards, as deciding by the 1.28 release whether you want to go forward and add this. Not adding this would not create any additional confusion then; what KIND probably should be doing is letting the user target the kubeadm built-in patches, or applying the existing kind patching routines to the generated kubelet config on each node instead of to the input to join. Currently, attempting to patch kubelet config on a secondary node doesn't work how you might expect with the kubeadmConfigPatches support in kind.
I am inclined to exclude this from the v1beta4 feature list since it is not decided yet; patches are the way to support node-specific kubelet config for the time being. Folks, any more comments on this feature request?
as per @neolit123's comment
related to this KEP:
kubernetes/enhancements#1439
1.25 tracking of kubeadm patch support for kubeletconfiguration:
sig-cl/kubeadm/1739: update KEP for support of patching kubelet config enhancements#3312
kubeadm: add support for patching a "kubeletconfiguration" target kubernetes#110405
kubeadm: ensure kubelet config patch results are in YAML kubernetes#110598
kinder: make sure worker nodes also get a patches dir created #2706
kinder: add test for patching kubeletconfiguration; cleanup test jobs #2707
kubeadm: remove jobs for the "patches" functionality for N-x versions test-infra#26557
kubeadm: document the option to use kubeletconfiguration patches website#34259