Cluster Autoscaler Addon: Support scaling up from 0 #1481
Comments
@scottyhq thanks, I did see that before opening this issue. That issue is closed and suggests manually adding the tags to node groups. This issue is specifically for having eksctl add the node-template tags when the Cluster Autoscaler addon is enabled.
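For context, a minimal sketch of what "the Cluster Autoscaler addon is enabled" means in an eksctl config today (the cluster and nodegroup names here are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster      # placeholder
  region: us-west-2          # placeholder

nodeGroups:
  - name: ng-workers
    minSize: 0
    maxSize: 10
    iam:
      withAddonPolicies:
        autoScaler: true     # grants the ASG permissions Cluster Autoscaler needs
```

As noted in the issue description below, eksctl adds the auto-discovery tags for such a nodegroup, but not the node-template keys.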
@scottyhq @cPu1 I would suggest not linking this to the IAM addon policy. I'd suggest this is a node-group-level option, since the autoscaler(s) may be in use on some node groups and not others, and the permissions may instead be granted via an IAM Service Account.
IAM Roles for Service Accounts (IRSA) is a relatively new feature, so we can't expect all users to have started using it. Moreover, it's not supported on EKS versions below 1.13, and not all tools accessing AWS services have been updated to use the newer AWS SDK that added support for it (by including it in the credential provider chain).
That's right, this feature is not for users of IRSA.
This is a node-group-level option and isn't a new addition to the schema.
Eh, there are two separate things:
1. Granting the IAM permissions that Cluster Autoscaler needs (the addon policy).
2. Adding the node-template tags to the node group's ASG.
Making 1 and 2 separate options does not require people to use IRSA!
Linking 1 and 2 to the same option makes 2 unavailable to IRSA users. Separate options support both clusters that don't use IRSA and those that do. Why are you opposed to two options so both use cases can be supported, @cPu1?
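To illustrate the "two separate options" idea, a hypothetical sketch; the `propagateASGTags` field below is made up for illustration and is not an option being proposed verbatim in this thread:

```yaml
nodeGroups:
  # IRSA cluster: the CA pod gets its AWS permissions from a service account,
  # so the nodegroup only opts in to the node-template tagging (thing 2).
  - name: ng-irsa
    minSize: 0
    maxSize: 20
    labels:
      workload: batch
    propagateASGTags: true        # hypothetical: copy labels/taints to ASG node-template tags

  # Non-IRSA cluster: the same nodegroup can additionally request the IAM policy (thing 1).
  - name: ng-non-irsa
    minSize: 0
    maxSize: 20
    iam:
      withAddonPolicies:
        autoScaler: true          # existing option: attach the ASG permissions
    propagateASGTags: true        # hypothetical, independent of the IAM setting
```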
Ah, I misunderstood your point. I agree that eksctl should be able to support the CA addon for users of IRSA, but, IMO, not by requiring non-IRSA users to supply an extra option. To support IRSA users, a similar level of support could be added.
I don't think it makes sense to tie this tagging to an IAM setting, regardless of your use of IRSA or not. Only the cluster autoscaler itself needs the elevated permissions. That might be granted with IRSA, or it might be granted to a specific nodegroup, or some other way (kiam). The scaling targets do not need any additional IAM permissions, and the best practice would be to not grant this permission where it is not needed.

Edit: Also, I run my CA pod on a nodegroup that doesn't scale. I have min == max so effectively it doesn't scale, but I could see where someone might want to grant the elevated permissions to a particular node group, but not want to have CA enabled on that group.
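A sketch of that layout with placeholder names, assuming the existing `withAddonPolicies.autoScaler` option: a fixed-size nodegroup hosts the CA pod and carries the permissions, while the scaling target gets neither.

```yaml
nodeGroups:
  # Fixed-size nodegroup that hosts the cluster-autoscaler pod.
  # min == max, so it never scales, but it carries the elevated IAM permissions.
  - name: ng-system
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true

  # Scaling target: no extra IAM permissions needed here.
  - name: ng-workers
    minSize: 0
    maxSize: 30
```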
I just raised #2263 to separately request something along these lines. Although that was the same request as #1066, and it was closed with "just do it manually in the config"...
Just to cross the streams, Managed Nodegroups may get the same behaviour of copying all the taints and labels from a node onto the ASG. In the Managed Nodegroups case, all nodegroups are automatically tagged for the Cluster Autoscaler already.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
Aw, dang. I forgot to comment earlier. >_<
When performing a scale out, Cluster Autoscaler uses a live node in the ASG to build a template to launch new nodes. These nodes can have Kubernetes labels and taints defined.
When there are no live nodes, however, Cluster Autoscaler has to rely on the tags defined on the ASG as node-template keys to build a template for launching the nodes (see https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#scaling-a-node-group-to-0). These node-template keys must contain the Kubernetes labels and taints added to the corresponding node group in order for Cluster Autoscaler to add them to new nodes.
Problem:
eksctl only adds the tags required by Cluster Autoscaler for auto-discovery, but not the node-template keys.
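For illustration, a sketch of the manual workaround today, following the tag formats from the Cluster Autoscaler README linked above; the nodegroup, label, and taint values are hypothetical:

```yaml
nodeGroups:
  - name: ng-gpu
    minSize: 0                    # scale-from-zero: CA can only learn labels/taints from ASG tags
    maxSize: 4
    labels:
      workload-type: gpu
    tags:
      # Auto-discovery tags (the part eksctl already adds).
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/example-cluster: "owned"
      # Node-template tags mirroring the nodegroup's labels and taints
      # (the part this issue asks eksctl to add automatically).
      k8s.io/cluster-autoscaler/node-template/label/workload-type: "gpu"
      k8s.io/cluster-autoscaler/node-template/taint/dedicated: "true:NoSchedule"   # <value>:<effect>
```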