
kops update cluster in version 1.19.0 always removes the Classic Load Balancer from the Auto Scaling group #10708

Closed
hamc opened this issue Feb 2, 2021 · 6 comments

Comments

@hamc

hamc commented Feb 2, 2021

1. What kops version are you running? The command kops version will display
this information.

1.19.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.19.7

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops update cluster
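
For completeness, a minimal dry-run sketch of the command above; the --state value is an assumption inferred from the configBase shown in the manifest under item 7, so adjust it to your state store:

# Dry run: prints the planned changes without applying them (no --yes).
# --state is assumed from configBase (s3://k8s-clusters-config).
kops update cluster \
  --name clustertest-sa-east-1.k8s.local \
  --state s3://k8s-clusters-config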

5. What happened after the commands executed?

It removes the Classic Load Balancer from the Auto Scaling group even when the .spec.api.loadBalancer.class option is set to Classic.

6. What did you expect to happen?

Since we were unable to use an NLB in conjunction with Spotinst, we would like to keep using a CLB. kops should preserve the load balancer attachment on the Auto Scaling group when .spec.api.loadBalancer.class is set to Classic.
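
For reference, this is the relevant fragment of the cluster spec (the full manifest is under item 7 below):

  api:
    loadBalancer:
      class: Classic
      type: Internal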

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

I0201 23:57:41.073057    8512 featureflag.go:154] FeatureFlag "Spotinst"=true
I0201 23:57:41.073102    8512 featureflag.go:154] FeatureFlag "SpotinstOcean"=true
I0201 23:57:41.073107    8512 featureflag.go:154] FeatureFlag "SpotinstHybrid"=true
Using cluster from kubectl context: clustertest-sa-east-1.k8s.local

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-09-17T13:22:43Z"
  generation: 7
  name: clustertest-sa-east-1.k8s.local
spec:
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authentication:
    aws: {}
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    creation-tool: kops
    tenant: clustertest
  cloudProvider: aws
  configBase: s3://k8s-clusters-config/clustertest-sa-east-1.k8s.local
  containerRuntime: docker
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-sa-east-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-sa-east-1a
      name: a
    memoryRequest: 100Mi
    name: events
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
          - level: None
            nonResourceURLs:
                - '/healthz*'
                - '/logs'
                - '/metrics'
                - '/swagger*'
                - '/version'
          - level: Metadata
            omitStages:
                - RequestReceived
            resources:
                - group: authentication.k8s.io
                  resources:
                      - tokenreviews
          - level: RequestResponse
            omitStages:
                - RequestReceived
            resources:
                - group: authorization.k8s.io
                  resources:
                      - subjectaccessreviews
          - level: RequestResponse
            omitStages:
                - RequestReceived
            resources:
                - group: ''
                  resources: ['pods']
                  verbs: ['create', 'patch', 'update', 'delete']
          - level: Metadata
            omitStages:
                - RequestReceived
    name: audit-policy-config
    path: /srv/kubernetes/audit/policy-config.yaml
    roles:
    - Master
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kubernetes/apiserver/audit.log
    auditPolicyFile: /srv/kubernetes/audit/policy-config.yaml
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 10.19.0.0/16
  kubernetesVersion: 1.19.7
  masterInternalName: api.internal.clustertest-sa-east-1.k8s.local
  masterPublicName: api.clustertest-sa-east-1.k8s.local
  networkCIDR: 10.19.0.0/16
  networking:
    weave:
      mtu: 8912
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 10.19.0.0/16
  subnets:
  - cidr: 10.19.32.0/19
    name: sa-east-1a
    type: Private
    zone: sa-east-1a
  - cidr: 10.19.64.0/19
    name: sa-east-1c
    type: Private
    zone: sa-east-1c
  - cidr: 10.19.0.0/22
    name: utility-sa-east-1a
    type: Utility
    zone: sa-east-1a
  - cidr: 10.19.4.0/22
    name: utility-sa-east-1c
    type: Utility
    zone: sa-east-1c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-09-17T14:11:32Z"
  generation: 4
  labels:
    kops.k8s.io/cluster: clustertest-sa-east-1.k8s.local
  name: kafka-nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: t3.medium
  maxSize: 10
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: kafka-nodes
  role: Node
  subnets:
  - sa-east-1a
  - sa-east-1c
  suspendProcesses:
  - AZRebalance
  taints:
  - type=kafka:NoSchedule

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-09-17T13:22:44Z"
  generation: 4
  labels:
    kops.k8s.io/cluster: clustertest-sa-east-1.k8s.local
  name: master-sa-east-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: c5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-sa-east-1a
  role: Master
  subnets:
  - sa-east-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-09-17T13:22:44Z"
  generation: 9
  labels:
    kops.k8s.io/cluster: clustertest-sa-east-1.k8s.local
  name: nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: c5.xlarge
  maxSize: 0
  minSize: 0
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - sa-east-1a
  - sa-east-1c
  suspendProcesses:
  - AZRebalance

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-01-13T13:39:28Z"
  generation: 10
  labels:
    kops.k8s.io/cluster: clustertest-sa-east-1.k8s.local
    spotinst.io/hybrid: "true"
    spotinst.io/ocean-instance-types-whitelist: c4.4xlarge, c4.2xlarge, c5.4xlarge,
      c5.xlarge, c5.2xlarge, c5d.2xlarge, c5d.4xlarge, c5d.xlarge, m4.xlarge, m4.2xlarge,
      m4.4xlarge, m5.2xlarge, m5.xlarge, m5.4xlarge, m5a.2xlarge, m5a.4xlarge, m5a.xlarge,
      m5ad.2xlarge, m5ad.4xlarge, m5ad.xlarge, m5d.2xlarge, m5d.4xlarge, m5d.xlarge,
      r4.xlarge, r4.4xlarge, r4.2xlarge, r5.2xlarge, r5.4xlarge, r5.xlarge, r5a.2xlarge,
      r5a.4xlarge, r5a.xlarge, r5ad.2xlarge, r5ad.4xlarge, r5ad.xlarge, r5d.2xlarge,
      r5d.4xlarge, r5d.xlarge
    spotinst.io/utilize-reserved-instances: "false"
  name: nodes-spot
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
  machineType: c5.xlarge
  maxSize: 20
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-spot
  role: Node
  subnets:
  - sa-east-1a
  - sa-east-1c

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

kops-1.19.0 update cluster 
Will modify resources:
  AutoscalingGroup/master-sa-east-1a.masters.clustertest-sa-east-1.k8s.local
        LoadBalancers            [name:api-clustertest-sa-east-1-k-afoj5t id:api-clustertest-sa-east-1-k-afoj5t] -> []
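
Not a kops-level fix, but as a manual stop-gap the CLB can be re-attached to the masters ASG with the AWS CLI, using the ASG and load balancer names from the dry-run output above (kops may detach it again on the next update cluster --yes):

# Inspect the current classic load balancer attachments on the masters ASG
aws autoscaling describe-load-balancers \
  --auto-scaling-group-name master-sa-east-1a.masters.clustertest-sa-east-1.k8s.local

# Re-attach the API CLB if it has been detached
aws autoscaling attach-load-balancers \
  --auto-scaling-group-name master-sa-east-1a.masters.clustertest-sa-east-1.k8s.local \
  --load-balancer-names api-clustertest-sa-east-1-k-afoj5t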

9. Anything else we need to know?

@rifelpet rifelpet added this to the v1.19 milestone Feb 4, 2021
@olemarkus olemarkus modified the milestones: v1.19, v1.21 Apr 8, 2021
@johngmyers johngmyers removed this from the v1.21 milestone Jun 10, 2021
@liranp
Contributor

liranp commented Aug 17, 2021

See #10961.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2021
@olemarkus
Member

Has this one been resolved now?

@liranp
Contributor

liranp commented Nov 17, 2021

That's right -- #10708 (comment).

@olemarkus
Member

Thanks

/close
/remove-lifecycle stale

@k8s-ci-robot
Contributor

@olemarkus: Closing this issue.

In response to this:

Thanks

/close
/remove-lifecycle stale

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2021