compatibility with aws autoscaler with nodegroup min=0 #1066
@scottyhq, this can be achieved using tags in your nodegroup config.
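A sketch of what such a config could look like (hedged: this is not the config from the original comment; eksctl nodegroups do accept a `tags` field that is applied to the ASG, but the cluster name, nodegroup name, and label here are illustrative placeholders):

```yaml
# Hypothetical eksctl ClusterConfig sketch. The `tags` entries mirror the
# nodegroup's labels so the cluster-autoscaler can build a node template
# for a group that currently has 0 nodes.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster   # placeholder
  region: us-west-2  # placeholder

nodeGroups:
  - name: workers    # placeholder
    minSize: 0
    maxSize: 10
    desiredCapacity: 0
    labels:
      node-purpose: worker
    tags:
      k8s.io/cluster-autoscaler/node-template/label/node-purpose: worker
```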
Thanks @adamjohnson01 - just wanted to confirm that the above config works for on-demand nodes. If people are using mixed spot instances, scaling from zero requires Kubernetes and cluster-autoscaler 1.14, which is now out: kubernetes/autoscaler#2246 (comment). So I think this issue can be closed!
When the cluster-autoscaler adds a new node to a group, it grabs an existing node in the group and builds a "template" to launch a new node identical to the one it grabbed. However, when scaling up from 0 there are no live nodes to reference for this template; instead, the cluster-autoscaler relies on tags on the ASG to build the new node template. This can cause unexpected behavior if the pods triggering the scale-out use node selectors or tolerations: CA doesn't have enough information to decide whether a new node launched in the group will satisfy the request. In short, for CA to do its job properly we must tag our ASGs to match our node labels and taints. Add a note in the docs about this, since scaling up from 0 is a fairly common use case.

References:
- kubernetes/autoscaler#2418
- eksctl-io#1066
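To make the matching concrete, here is a hedged sketch (the pod name, image, and label are illustrative, not taken from the thread): a pending pod with a nodeSelector can only trigger a scale-up from 0 if the ASG carries a matching node-template tag, because there is no live node for CA to inspect.

```yaml
# Illustrative pending pod. With the nodegroup at 0 nodes, cluster-autoscaler
# can only satisfy this nodeSelector by matching it against an ASG tag of the
# form k8s.io/cluster-autoscaler/node-template/label/<label-name> = <value>.
apiVersion: v1
kind: Pod
metadata:
  name: scale-from-zero-demo   # placeholder
spec:
  nodeSelector:
    node-purpose: worker       # must correspond to a node-template label tag
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
```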
I'm having trouble scaling up from 0 with spot instances. Is that feature not available?
Why do you want this feature?
Currently, the eksctl examples using the AWS Kubernetes cluster-autoscaler work when at least one node is always running, but we'd like to save on costs by scaling from 0 nodes. There are a few extra settings required for this:
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#scaling-a-node-group-to-0
What feature/behavior/change do you want?
The current workaround is to manually add node labels as tags on the corresponding ASGs. For example, in a node configuration like this:
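(The snippet below is a hypothetical reconstruction, since the original config isn't reproduced in this thread; the nodegroup name, instance type, label, and taint are illustrative.)

```yaml
# Hypothetical eksctl nodegroup sketch: a group that scales from 0 and
# carries a label plus a taint that pending pods select and tolerate.
nodeGroups:
  - name: workers            # placeholder
    instanceType: m5.large   # placeholder
    minSize: 0
    maxSize: 10
    desiredCapacity: 0
    labels:
      node-purpose: worker
    taints:
      dedicated: "true:NoSchedule"
```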
We currently have to manually add the following tags to the corresponding ASG:
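(Again a hedged sketch: the exact keys and values depend on the labels and taints in your config; these mirror the illustrative snippet above.)

```yaml
# ASG tags the cluster-autoscaler reads when the group has 0 nodes.
k8s.io/cluster-autoscaler/node-template/label/node-purpose: worker
k8s.io/cluster-autoscaler/node-template/taint/dedicated: "true:NoSchedule"
```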
Perhaps a flag could be added to propagate labels in the config file to ASG tags when running `eksctl create nodegroups`?

Related:
#1012 (comment)
#170