
Kops cluster-autoscaler addon not getting deployed #11759

Closed
ifosch opened this issue Jun 14, 2021 · 13 comments · Fixed by #11780
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@ifosch

ifosch commented Jun 14, 2021

/kind bug

1. What kops version are you running? The command kops version will display
this information.

I've tried this with both kops 1.19 and kops 1.20.

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

I'm using Kubernetes 1.19.9.

3. What cloud provider are you using?

AWS, on EC2.

4. What commands did you run? What is the simplest way to reproduce this issue?

On a cluster provisioned on EC2 using kops, with a manually created
deployment for cluster-autoscaler, I removed that deployment. I then edited the
cluster definition to include the cluster autoscaler example
configuration from the documentation, and ran kops update cluster example.k8s.local --yes followed by kops rolling-update cluster example.k8s.local --yes.
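
A rough sketch of those steps as commands (the cluster name is the example one used throughout this report; the edit step is assumed to be done via kops edit):

kops edit cluster example.k8s.local           # add the clusterAutoscaler section from the docs
kops update cluster example.k8s.local --yes
kops rolling-update cluster example.k8s.local --yes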

5. What happened after the commands executed?

The cluster is rolling-updated, but there are no cluster-autoscaler pods
or deployment.

6. What did you expect to happen?

I expected the cluster-autoscaler deployment to be created and the
corresponding pod up and running in the kube-system namespace.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-02-13T14:47:28Z"
  generation: 26
  name: example.k8s.local
spec:
  additionalPolicies:
    node: |
      [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
      ]
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    environment: example
  cloudProvider: aws
  clusterAutoscaler:
    enabled: true
    expander: least-waste
    balanceSimilarNodeGroups: false
    scaleDownUtilizationThreshold: "0.5"
    skipNodesWithLocalStorage: true
    skipNodesWithSystemPods: true
    newPodScaleUpDelay: 0s
    scaleDownDelayAfterAdd: 10m0s
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.19.1
    cpuRequest: "100m"
    memoryRequest: "300Mi"
  configBase: s3://kops-bucket/example.k8s.local
  docker:
    bridgeIP: 192.168.3.1/24
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-region-1
      name: 1
    - instanceGroup: master-region-2
      name: 2
    - instanceGroup: master-region-3
      name: 3
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-region-1
      name: 1
    - instanceGroup: master-region-2
      name: 2
    - instanceGroup: master-region-3
      name: 3
    memoryRequest: 100Mi
    name: events
  hooks:
  - before:
    - kubelet.service
    manifest: |
      [Service]
      Type=oneshot
      RemainAfterExit=no
      ExecStart=/bin/sh -c "sed -i -e 's/^pool/#pool/g' -e 's/^# pool: .*$/server 169.254.169.123 prefer iburst/' /etc/ntp.conf"
      ExecStartPost=/bin/systemctl restart ntp.service
    name: change_ntp_server.service
    roles:
    - Node
    - Master
  - before:
    - docker.service
    manifest: |
      [Service]
      Type=oneshot
      RemainAfterExit=no
      ExecStartPre=/bin/mkdir -p /root/.docker
      ExecStart=/usr/bin/wget https://amazon-ecr-credential-helper-releases.s3.region-4.amazonaws.com/0.3.1/linux-amd64/docker-credential-ecr-login -O /bin/docker-credential-ecr-login
      ExecStartPost=/bin/chmod +x /bin/docker-credential-ecr-login
      ExecStartPost=/bin/sh -c "echo '{\n  \"credHelpers\": {\n    \"111111111111.dkr.ecr.us-east-1.amazonaws.com\": \"ecr-login\"\n  }\n}' > /root/.docker/config.json"
    name: setup_ecr_docker.service
    roles:
    - Node
    - Master
  - manifest: |
      [Unit]
      Description=Telegraf Container
      After=docker.service
      Requires=docker.service

      [Service]
      TimeoutStartSec=0
      Restart=always
      ExecStartPre=/bin/sh -lc "docker pull 111111111111.dkr.ecr.us-east-1.amazonaws.com/telegraf"
      ExecStartPre=/bin/sh -lc "/usr/bin/docker rm -f telegraf || echo OK"
      ExecStart=/usr/bin/docker run -p 9126:9126 --rm --name telegraf 111111111111.dkr.ecr.us-east-1.amazonaws.com/telegraf

      [Install]
      WantedBy=multi-user.target
    name: telegraf.service
    roles:
    - Node
    - Master
    useRawManifest: true
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 192.168.6.0/25
  kubernetesVersion: 1.19.9
  masterInternalName: api.internal.example.k8s.local
  masterPublicName: api.example.k8s.local
  networkCIDR: 192.168.0.0/20
  networkID: vpc-11111111
  networking:
    calico:
      crossSubnet: true
      majorVersion: v3
      prometheusMetricsEnabled: true
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 192.168.6.0/25
  subnets:
  - cidr: 192.168.9.0/24
    name: region-3
    type: Private
    zone: region-3
  - cidr: 192.168.10.0/24
    name: region-1
    type: Private
    zone: region-1
  - cidr: 192.168.11.0/24
    name: region-2
    type: Private
    zone: region-2
  - cidr: 192.168.8.0/26
    name: utility-region-3
    type: Utility
    zone: region-3
  - cidr: 192.168.8.64/26
    name: utility-region-1
    type: Utility
    zone: region-1
  - cidr: 192.168.8.128/26
    name: utility-region-2
    type: Utility
    zone: region-2
  topology:
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-3
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-3
  role: Master
  subnets:
  - region-3

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-1
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-1
  role: Master
  subnets:
  - region-1

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 8
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: master-region-2
spec:
  additionalSecurityGroups:
  - sg-11111111
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-region-2
  role: Master
  subnets:
  - region-2

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-02-13T14:47:29Z"
  generation: 11
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: nodes
spec:
  additionalSecurityGroups:
  - sg-11111111
  cloudLabels:
    example.k8s.local/autoscaler/enabled: "true"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: m5a.xlarge
  maxSize: 3
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - region-3
  - region-1
  - region-2

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Update: https://controlc.com/ca90e288
Rolling-update: https://controlc.com/58c59773

9. Anything else we need to know?

I've tried forcing a rolling update, but the result is the same: no
deployment for cluster-autoscaler. The verbose output of the forced
rolling update is too big to paste, but I can send it. I've checked
kubelet's logs but found nothing about the cluster-autoscaler
manifest.

@k8s-ci-robot added the kind/bug label Jun 14, 2021
@olemarkus
Member

Are you able to ssh into a control plane node and run journalctl for a minute? It should contain log lines about applying manifests.
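
For example, something along these lines on a control plane node (the grep filter is only a guess at narrowing the output; the relevant lines come from the addon/channels apply loop):

journalctl --since "2 minutes ago" | grep -i channel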

@ifosch
Author

ifosch commented Jun 16, 2021

I've checked the journalctl output. I found nothing explicitly about the cluster-autoscaler manifest/deployment.
I've also found that our NTP setup hook is failing, but I don't think that is causing the main problem with the cluster-autoscaler addon.

The only other thing I found that could be related is these lines about the channels, which appear about once a minute:

Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.433668    7366 channels.go:34] apply channel output was: I0616 07:54:30.252363 2918406 addons.go:38] Loading addons channel from "s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml"
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.363256 2918406 s3context.go:213] found bucket in region "region"
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.363282 2918406 s3fs.go:290] Reading file "s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml"
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.395472 2918406 addons.go:127] Skipping version range "<1.15.0" that does not match current version 1.19.9
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.395569 2918406 addons.go:127] Skipping version range "<1.16.0" that does not match current version 1.19.9
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.401571 2918406 channel_version.go:108] Checking existing channel: Version=1.17.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=v1.15.0 ManifestHash=16d85f6fe12023eea4853cbc718e60a2fd010dd8 compared to new channel: Version=1.17.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=v1.15.0 ManifestHash=16d85f6fe12023eea4853cbc718e60a2fd010dd8
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.401728 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.407144 2918406 channel_version.go:108] Checking existing channel: Version=3.18.3-kops.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.16 ManifestHash=0f64971ab545608ff70a1bb9ee06041dbfcf9c67 compared to new channel: Version=3.18.3-kops.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.16 ManifestHash=0f64971ab545608ff70a1bb9ee06041dbfcf9c67
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.407190 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.412536 2918406 channel_version.go:108] Checking existing channel: Version=1.20.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.16 ManifestHash=64f474b1345889a1a182895b3957a5083ea00244 compared to new channel: Version=1.20.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.16 ManifestHash=64f474b1345889a1a182895b3957a5083ea00244
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.412614 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.416782 2918406 channel_version.go:108] Checking existing channel: Version=1.4.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml ManifestHash=3ffe9ac576f9eec72e2bdfbd2ea17d56d9b17b90 compared to new channel: Version=1.4.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml ManifestHash=3ffe9ac576f9eec72e2bdfbd2ea17d56d9b17b90
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.416828 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.420255 2918406 channel_version.go:108] Checking existing channel: Version=1.7.0-kops.3 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.12 ManifestHash=c39ee62b44bc391d426293c951bfcdbbd28950c9 compared to new channel: Version=1.7.0-kops.3 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.12 ManifestHash=c39ee62b44bc391d426293c951bfcdbbd28950c9
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.420354 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.423909 2918406 channel_version.go:108] Checking existing channel: Version=v0.0.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.9 ManifestHash=e1508d77cb4e527d7a2939babe36dc350dd83745 compared to new channel: Version=v0.0.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.9 ManifestHash=e1508d77cb4e527d7a2939babe36dc350dd83745
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.423952 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.427859 2918406 channel_version.go:108] Checking existing channel: Version=1.5.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml ManifestHash=2ea50e23f1a5aa41df3724630ac25173738cc90c compared to new channel: Version=1.5.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml ManifestHash=2ea50e23f1a5aa41df3724630ac25173738cc90c
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.427950 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.431759 2918406 channel_version.go:108] Checking existing channel: Version=1.20.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.12 ManifestHash=a29de974973c8ab121bc21d178cad3a2932a52c0 compared to new channel: Version=1.20.1 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.12 ManifestHash=a29de974973c8ab121bc21d178cad3a2932a52c0
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: I0616 07:54:30.431801 2918406 channel_version.go:135] Manifest Match
Jun 16 07:54:30 ip-192-168-9-105 docker[7312]: No update required

I've also tried the same operation on a brand new cluster created from scratch for the test, and it worked well. I guess the problem is with something in this specific cluster, which might not have been correctly updated from previous kops/k8s versions. When I run kops update on the new cluster it applies 122 changes, while the same command on the old cluster applies 105. I'll go through the -v10 output to try to identify differences between the two runs.
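
A sketch of that comparison (the log file names and the new cluster's name are hypothetical):

kops update cluster example.k8s.local -v10 2>&1 | tee old-cluster.log
kops update cluster new-test.k8s.local -v10 2>&1 | tee new-cluster.log
diff old-cluster.log new-cluster.log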

@olemarkus
Member

None of the lines above match the CAS addon (v1.19.{0,1}, id=k8s-1.15), so it looks like it wasn't added. Maybe a typo in your manifest?

@ifosch
Author

ifosch commented Jun 16, 2021

Do you mean the cluster manifest? The cluster manifest was accepted by kops without any complaint, and I was expecting it to complain if the manifest were wrong. I've also checked it, and it looks fine; the clusterAutoscaler section is almost a copy-paste from the addon docs.
Otherwise, what other manifest should I check, and how could I check it? I'll try to download the whole cluster tree from S3 and look for the cluster autoscaler there. I will also try to compare these files with a new cluster's.

@ifosch
Author

ifosch commented Jun 16, 2021

Oh! Wait, the last change I made to the cluster was to remove the section. Adding it back now.

@ifosch
Author

ifosch commented Jun 16, 2021

Done; now I get these two new lines:

Jun 16 09:49:55 ip-192-168-9-105 docker[7312]: I0616 09:49:55.968875 3037104 channel_version.go:108] Checking existing channel: Version=1.19.2 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.15 ManifestHash=f96bb59fb9ab0c191d3cd780fde8d81d85ecc877 compared to new channel: Version=1.19.0 Channel=s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml Id=k8s-1.15 ManifestHash=3e0ac5cfe8abdc009a30f38215145cf7752997cd
Jun 16 09:49:55 ip-192-168-9-105 docker[7312]: I0616 09:49:55.968946 3037104 channel_version.go:125] New Version is less then old

@ifosch
Author

ifosch commented Jun 16, 2021

Following #10246, I've checked my kube-system namespace annotations, but I only have a vague idea of what they should look like:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    addons.k8s.io/cluster-autoscaler.addons.k8s.io: '{"version":"1.19.2","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.15","manifestHash":"f96bb59fb9ab0c191d3cd780fde8d81d85ecc877"}'
    addons.k8s.io/core.addons.k8s.io: '{"version":"1.4.0","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","manifestHash":"3ffe9ac576f9eec72e2bdfbd2ea17d56d9b17b90"}'
    addons.k8s.io/coredns.addons.k8s.io: '{"version":"1.7.0-kops.3","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.12","manifestHash":"c39ee62b44bc391d426293c951bfcdbbd28950c9"}'
    addons.k8s.io/dns-controller.addons.k8s.io: '{"version":"1.20.1","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.12","manifestHash":"a29de974973c8ab121bc21d178cad3a2932a52c0"}'
    addons.k8s.io/kops-controller.addons.k8s.io: '{"version":"1.20.1","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.16","manifestHash":"64f474b1345889a1a182895b3957a5083ea00244"}'
    addons.k8s.io/kubelet-api.rbac.addons.k8s.io: '{"version":"v0.0.1","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.9","manifestHash":"e1508d77cb4e527d7a2939babe36dc350dd83745"}'
    addons.k8s.io/limit-range.addons.k8s.io: '{"version":"1.5.0","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","manifestHash":"2ea50e23f1a5aa41df3724630ac25173738cc90c"}'
    addons.k8s.io/networking.projectcalico.org: '{"version":"3.18.3-kops.1","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.16","manifestHash":"0f64971ab545608ff70a1bb9ee06041dbfcf9c67"}'
    addons.k8s.io/rbac.addons.k8s.io: '{"version":"1.8.0","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"k8s-1.8","manifestHash":"5d53ce7b920cd1e8d65d2306d80a041420711914"}'
    addons.k8s.io/storage-aws.addons.k8s.io: '{"version":"1.17.0","channel":"s3://kops-bucket/example.k8s.local/addons/bootstrap-channel.yaml","id":"v1.15.0","manifestHash":"16d85f6fe12023eea4853cbc718e60a2fd010dd8"}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"name":"kube-system"}}
  creationTimestamp: "2020-02-13T14:57:58Z"
  name: kube-system
  resourceVersion: "57623539"
  selfLink: /api/v1/namespaces/kube-system
  uid: 48b2ad91-233b-4ade-b0eb-6fa4fb15b089
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
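
For reference, a dump like the one above can be obtained with something like:

kubectl get namespace kube-system -o yaml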

@ifosch
Author

ifosch commented Jun 16, 2021

Should I try to remove the cluster-autoscaler addon folder from my bucket for this cluster?

@olemarkus
Member

Yeah, I noticed a bug here.
In 1.19, the autoscaler addon version is 1.19.1. In 1.20+ it changed to 1.19.0.
A quick workaround is to run kubectl edit ns kube-system and remove the annotation that mentions cluster-autoscaler. Then you should see the addon being deployed in about a minute.
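
A non-interactive way to do the same (a sketch, using the annotation key shown in the namespace dump above; the trailing dash removes the annotation):

kubectl annotate namespace kube-system addons.k8s.io/cluster-autoscaler.addons.k8s.io-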

@ifosch
Author

ifosch commented Jun 16, 2021

The workaround worked well, thank you!

@ifosch
Author

ifosch commented Jun 18, 2021

Thanks @olemarkus

@AnubhavSabharwa

Is there a way to direct the cluster autoscaler to use aws-use-static-instance-list=true via the cluster spec?

If I edit the deployment manually and add the command-line parameter it works, but I want the cluster spec to do it.
Reference: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
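
The manual edit mentioned above would look roughly like this (the deployment name and flag placement are assumptions based on how the kops addon deploys cluster-autoscaler into kube-system):

kubectl -n kube-system edit deployment cluster-autoscaler
# then add --aws-use-static-instance-list=true to the container's command/args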

@AnubhavSabharwa

Got the solution: we can set a specific image, since in 1.16.4 the static instance list is supported by default.
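
For example, a sketch of pinning the image in the kops cluster spec (mirroring the clusterAutoscaler block from the manifest earlier in this issue; the exact tag is whatever release you need):

spec:
  clusterAutoscaler:
    enabled: true
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.19.1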
