kubernetes_config_map causing terraform destroy to fail? #812

Closed
1 of 4 tasks
b2cbre opened this issue Mar 20, 2020 · 9 comments · Fixed by #815
Comments

b2cbre commented Mar 20, 2020

I have issues

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

terraform destroy reliably fails with Error: Get https://SNIP.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps/aws-auth: dial tcp IP:443: i/o timeout (where IP is the address of the cluster endpoint). This occurs across AWS accounts, so it could be something I have done.

If this is a bug, how to reproduce? Please include a code sample if relevant.

terraform {
  required_version = "~> 0.12.19"
}

resource "aws_ebs_encryption_by_default" "default" {
  enabled = true
}

module "eks" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=9951c87a86b02e0f61a4d1560ad2e6e9596000ed"

  cluster_name = "dev-k8s"

  cluster_version = "1.15"

  node_groups_defaults = {
    disk_size        = 100
    min_capacity     = 1
    desired_capacity = 1
    max_capacity     = 3
  }

  node_groups = {
    a = {
      instance_type = "m5.large"
      k8s_labels = {
        Environment = "dev"
      }
    }
  }

  attach_worker_cni_policy      = true
  manage_cluster_iam_resources  = true
  manage_worker_iam_resources   = true
  cluster_create_security_group = true
  worker_create_security_group  = true

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false

  vpc_id = "YOUR VPC ID"
  subnets = [
    "YOUR SUBNET(S)",
  ]
}

What's the expected behavior?

Destroy happens without error on this resource.

Are you able to fix this problem and submit a PR? Link here if you have already.

I work around the issue by running terraform state rm 'module.eks.module.eks.kubernetes_config_map.aws_auth[0]'.
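
For anyone else hitting this, the workaround only drops the resource from Terraform state so the destroy can continue; the ConfigMap object itself disappears when the cluster is deleted:

terraform state rm 'module.eks.module.eks.kubernetes_config_map.aws_auth[0]'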

Environment details

  • Affected module version: commit 9951c87a86b02e0f61a4d1560ad2e6e9596000ed; the problem does not occur in v10.0.0.
  • OS:
  • Terraform version: 0.12.19

Any other relevant info

I may be causing this with some configuration or it may be a new behavior. I am willing to help. :)

b2cbre commented Mar 20, 2020

This takes place across AWS accounts, and with or without node groups or worker ASGs (just sharing, though I do not think it is relevant).

@dpiddockcmp (Contributor) commented:

Could this be related to #745, which is in the commit you are pinned to? It creates a security group rule when cluster_endpoint_public_access = false and manage_aws_auth = true. Maybe that rule is being deleted before Terraform attempts to remove the ConfigMap?

Output from the destroy command would help with debugging.
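
The rule I mean would look roughly like the sketch below; this is only an illustration, and the variable names, condition, and attribute references are assumptions (the real resource lives inside the module and may differ):

resource "aws_security_group_rule" "cluster_private_access" {
  # Illustration only: open HTTPS to the cluster API from private CIDRs
  # when the public endpoint is disabled and the module manages aws-auth.
  count = !var.cluster_endpoint_public_access && var.manage_aws_auth ? 1 : 0

  type              = "ingress"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_blocks       = var.cluster_endpoint_private_access_cidrs # assumed variable
  security_group_id = aws_eks_cluster.this[0].vpc_config[0].cluster_security_group_id
}

If a rule like that is destroyed before kubernetes_config_map.aws_auth, the provider has no path to the private endpoint when it tries to delete the ConfigMap.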

b2cbre commented Mar 20, 2020

Good points, @dpiddockcmp, and thank you. :)

Updating to a newer commit fixed the issue:

module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Destruction complete after 0s

I should have tested that before opening a ticket while half asleep.

b2cbre closed this as completed Mar 20, 2020

b2cbre commented Mar 20, 2020

Just encountered it again, but this time on commit e768c6c1038b8545fa7f4746dc6f04422783fee5. Very interesting.

module.eks.data.aws_vpc.vpc: Refreshing state...
module.eks.module.eks.data.aws_region.current: Refreshing state...
module.eks.module.eks.data.aws_caller_identity.current: Refreshing state...
module.eks.module.eks.data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state...
module.eks.module.eks.data.aws_ami.eks_worker_windows: Refreshing state...
module.eks.data.aws_subnet.subnets[1]: Refreshing state...
module.eks.aws_ebs_encryption_by_default.default: Refreshing state... [id=terraform-20200320173018878900000003]
module.eks.module.eks.data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state...
module.eks.data.aws_subnet.subnets[0]: Refreshing state...
module.eks.data.aws_region.current: Refreshing state...
module.eks.data.aws_subnet.subnets[2]: Refreshing state...
module.eks.data.aws_caller_identity.current: Refreshing state...
module.eks.module.eks.data.aws_ami.eks_worker: Refreshing state...
module.eks.module.eks.aws_iam_role.cluster[0]: Refreshing state... [id=gcso-k8-dev20200320173018644600000002]
module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]: Refreshing state... [id=gcso-k8-dev20200320173018644600000002-20200320173019440100000004]
module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy[0]: Refreshing state... [id=gcso-k8-dev20200320173018644600000002-20200320173019466900000005]
module.eks.module.eks.aws_security_group.cluster[0]: Refreshing state... [id=sg-0c4ebf6aed9505756]
module.eks.module.eks.aws_security_group_rule.cluster_egress_internet[0]: Refreshing state... [id=sgrule-3056391628]
module.eks.module.eks.aws_eks_cluster.this[0]: Refreshing state... [id=gcso-k8-dev]
module.eks.module.eks.aws_iam_role.workers[0]: Refreshing state... [id=gcso-k8-dev20200320174421622400000006]
module.eks.data.aws_eks_cluster.cluster: Refreshing state...
module.eks.data.aws_eks_cluster_auth.cluster: Refreshing state...
module.eks.module.eks.null_resource.wait_for_cluster[0]: Refreshing state... [id=612729030967023617]
module.eks.module.eks.data.template_file.kubeconfig[0]: Refreshing state...
module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Refreshing state... [id=sgrule-3178428600]
module.eks.module.eks.aws_security_group.workers[0]: Refreshing state... [id=sg-0479fbb462c0f673b]
module.eks.data.aws_iam_policy_document.cluster_autoscaler: Refreshing state...
module.eks.module.eks.local_file.kubeconfig[0]: Refreshing state... [id=2d11f3f88c68dd65b9703145d4938de46a04ae49]
module.eks.aws_iam_policy.cluster_autoscaler: Refreshing state... [id=arn:aws:iam::876240200996:policy/cluster-autoscaler20200320202252144000000001]
module.eks.data.helm_repository.stable: Refreshing state...
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Refreshing state... [id=gcso-k8-dev20200320174421622400000006-20200320174422416000000009]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Refreshing state... [id=gcso-k8-dev20200320174421622400000006-2020032017442242790000000a]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Refreshing state... [id=gcso-k8-dev20200320174421622400000006-20200320174422409000000008]
module.eks.module.eks.aws_security_group_rule.workers_ingress_self[0]: Refreshing state... [id=sgrule-1116894250]
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Refreshing state... [id=sgrule-3183286683]
module.eks.module.eks.aws_security_group_rule.workers_egress_internet[0]: Refreshing state... [id=sgrule-910221131]
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Refreshing state... [id=sgrule-2528348430]
module.eks.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Refreshing state... [id=sgrule-4123228219]
module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_additional_policies[0]: Refreshing state... [id=gcso-k8-dev20200320174421622400000006-20200320202253070900000002]
module.eks.module.eks.data.null_data_source.node_groups[0]: Refreshing state...
module.eks.module.eks.module.node_groups.random_pet.node_groups["a"]: Refreshing state... [id=direct-fish]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Refreshing state... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish]
module.eks.helm_release.autoscaler: Refreshing state... [id=autoscaler]
module.eks.helm_release.autoscaler: Destroying... [id=autoscaler]
module.eks.aws_ebs_encryption_by_default.default: Destroying... [id=terraform-20200320173018878900000003]
module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Destroying... [id=sgrule-3178428600]
module.eks.module.eks.aws_security_group_rule.cluster_egress_internet[0]: Destroying... [id=sgrule-3056391628]
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Destroying... [id=sgrule-2528348430]
module.eks.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Destroying... [id=sgrule-4123228219]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_additional_policies[0]: Destroying... [id=gcso-k8-dev20200320174421622400000006-20200320202253070900000002]
module.eks.module.eks.aws_security_group_rule.workers_ingress_self[0]: Destroying... [id=sgrule-1116894250]
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Destroying... [id=sgrule-3183286683]
module.eks.aws_ebs_encryption_by_default.default: Destruction complete after 0s
module.eks.module.eks.aws_security_group_rule.workers_egress_internet[0]: Destroying... [id=sgrule-910221131]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_additional_policies[0]: Destruction complete after 0s
module.eks.aws_iam_policy.cluster_autoscaler: Destroying... [id=arn:aws:iam::876240200996:policy/cluster-autoscaler20200320202252144000000001]
module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Destruction complete after 1s
module.eks.module.eks.aws_security_group_rule.cluster_egress_internet[0]: Destruction complete after 1s
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Destruction complete after 1s
module.eks.aws_iam_policy.cluster_autoscaler: Destruction complete after 1s
module.eks.module.eks.aws_security_group_rule.workers_ingress_self[0]: Destruction complete after 1s
module.eks.module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Destruction complete after 1s
module.eks.module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Destruction complete after 2s
module.eks.module.eks.aws_security_group_rule.workers_egress_internet[0]: Destruction complete after 2s
module.eks.module.eks.aws_security_group.workers[0]: Destroying... [id=sg-0479fbb462c0f673b]
module.eks.module.eks.aws_security_group.workers[0]: Destruction complete after 1s
module.eks.helm_release.autoscaler: Still destroying... [id=autoscaler, 10s elapsed]
module.eks.helm_release.autoscaler: Destruction complete after 10s
module.eks.module.eks.local_file.kubeconfig[0]: Destroying... [id=2d11f3f88c68dd65b9703145d4938de46a04ae49]
module.eks.module.eks.local_file.kubeconfig[0]: Destruction complete after 0s
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 10s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 20s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 30s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 40s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 50s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m0s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m10s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m20s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m30s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m40s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 1m50s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m0s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m10s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m20s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m30s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m40s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 2m50s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 3m0s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Still destroying... [id=gcso-k8-dev:gcso-k8-dev-a-direct-fish, 3m10s elapsed]
module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["a"]: Destruction complete after 3m11s
module.eks.module.eks.module.node_groups.random_pet.node_groups["a"]: Destroying... [id=direct-fish]
module.eks.module.eks.module.node_groups.random_pet.node_groups["a"]: Destruction complete after 0s
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Destroying... [id=gcso-k8-dev20200320174421622400000006-20200320174422416000000009]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Destroying... [id=gcso-k8-dev20200320174421622400000006-20200320174422409000000008]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Destroying... [id=gcso-k8-dev20200320174421622400000006-2020032017442242790000000a]
module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Destroying... [id=kube-system/aws-auth]
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]: Destruction complete after 0s
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]: Destruction complete after 0s
module.eks.module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]: Destruction complete after 0s
module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Still destroying... [id=kube-system/aws-auth, 10s elapsed]
module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Still destroying... [id=kube-system/aws-auth, 20s elapsed]
module.eks.module.eks.kubernetes_config_map.aws_auth[0]: Still destroying... [id=kube-system/aws-auth, 30s elapsed]

Warning: Quoted type constraints are deprecated

  on .terraform/modules/eks.metadata/variables.tf line 3, in variable "apm_id":
   3:   type        = "string"

Terraform 0.11 and earlier required type constraints to be given in quotes,
but that form is now deprecated and will be removed in a future version of
Terraform. To silence this warning, remove the quotes around "string".

(and 16 more similar warnings elsewhere)


Error: Delete https://0AD1BD8869AD191DA9F97FD5C2760A0F.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps/aws-auth: dial tcp 10.179.7.167:443: i/o timeout

b2cbre reopened this Mar 20, 2020

b2cbre commented Mar 20, 2020

I encounter the error on plan and apply now too. I'm thinking through why.

I have checked my connection to the internet. :)

b2cbre commented Mar 21, 2020

Could it be that, because the endpoint is a private IP and I reach the VPC network via a VPN, the rules that allow communication with the API are removed before the ConfigMap destroy runs?

module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Destruction complete after 1s

Yes, it only just occurred to me that I had not mentioned I am on a VPN that can communicate with the AWS VPC. :/

@dpiddockcmp (Contributor) commented:

Yes, it very much is the new security group rule. There is no dependency between it and the kubernetes_config_map resource.

module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Destroying... [id=sgrule-3178428600]
module.eks.module.eks.aws_security_group_rule.cluster_private_access[0]: Destruction complete after 1s
module.eks.module.eks.local_file.kubeconfig[0]: Destroying... [id=2d11f3f88c68dd65b9703145d4938de46a04ae49]
module.eks.module.eks.local_file.kubeconfig[0]: Destruction complete after 0s

It does not present as an issue when creating the cluster because of the null_resource.wait_for_cluster dependency.
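
One way to express the missing ordering would be an explicit depends_on on the ConfigMap, so it is destroyed while the rule (and therefore connectivity to the private endpoint) still exists. This is only a sketch with assumed surrounding names and inputs, not necessarily how the eventual fix implements it:

resource "kubernetes_config_map" "aws_auth" {
  count = var.manage_aws_auth ? 1 : 0

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = var.map_roles_yaml # assumed input holding the rendered YAML
  }

  # Destroy order is the reverse of dependency order, so listing the rule and
  # the wait_for_cluster resource here makes Terraform delete the ConfigMap
  # before it tears down API connectivity.
  depends_on = [
    aws_security_group_rule.cluster_private_access,
    null_resource.wait_for_cluster,
  ]
}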

b2cbre commented Mar 23, 2020

OK, I created a PR (#815) to fix this; hopefully it passes review.

@github-actions (bot) commented:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 27, 2022