terraform AWS EKS 'destroy' failed module.eks.kubernetes_config_map.aws_auth[0] Error: Unauthorized #1661
Comments
I have the same issue with the same module version and the same cluster version.
Unauthorized means something is wrong with your kubernetes provider, as Terraform is not able to reach the EKS API.
Correct, but this is out of my control: all I have done is add users through the map_users parameter, and it seems that by the time aws_auth is to be removed, the cluster is already gone.
I think a similar issue has been reported but not fixed in the latest release: #1162
Maybe this is related to #1658; please try pinning the kubernetes provider to a lower version.
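A pin like that goes in the required_providers block; a rough sketch (the version shown below is only an illustrative example, not a recommendation from this thread):

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.4.1" # illustrative pin; substitute whichever earlier version you want to test
    }
  }
}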
Well, creation works fine for me. And yes, I tried several versions of the kubernetes provider; it didn't help. But thank you for the advice.
@daroga0002 just in case, I re-verified with version 2.5.1 with the exact same result.
Does the host you run from let you curl the API endpoint?
This is actually Terraform Cloud, and yes, it can under normal circumstances. At destroy time the cluster seems to get removed first, so aws_auth is left behind until it is too late and the cluster is gone. aws_auth does not seem to be something we can control from the .tf files' point of view; it is part of the EKS module internals.
@daroga0002 and this is easy to reproduce: create an AWS EKS cluster through Terraform (with additional users and/or roles) and try to remove it.
I am doing this constantly and have observed no issue. The only thing that comes to mind is that you may be corrupting the aws_auth configmap with wrong syntax or similar.
@daroga0002 have you tried adding users through Terraform in your testing?
Yup, running this in prod. Also, our example examples/managed_node_groups/main.tf adds some user entries into aws_auth, and no issue is observed during creation or destroy.
Hm. In my case it happens every time I destroy the environment. What versions are you running? (I can also use kubectl with my AWS account, so I assume the configmap is fine.)
I didn't have an issue with that; the newest version works fine for me (and multiple earlier ones did as well).
For your information, I tested this again just now and in general it worked for me. The steps I followed:
Great! What versions have you used for your testing?
module v17.22.0
Thank you! This doesn't make sense then, as I'm running the exact same versions and aws_auth always fails to be removed. Did you use additional users through map_users? What is included in your destroy plan?
The aws_auth configmap was included in the destroy plan and was destroyed. In general, if you require help, please paste here the full working setup you are using to replicate this problem, as without it we will not get anywhere.
Roles and users maps (trimmed; the actual list is bigger):
Would anything else on top of that be helpful?
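For reference, maps of this kind follow this general shape (the ARNs and names below are placeholders, not the actual entries from this environment):

map_users = [
  {
    userarn  = "arn:aws:iam::111122223333:user/some-user" # placeholder ARN
    username = "some-user"
    groups   = ["system:masters"]
  },
]

map_roles = [
  {
    rolearn  = "arn:aws:iam::111122223333:role/some-role" # placeholder ARN
    username = "some-role"
    groups   = ["system:masters"]
  },
]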
Here is the module block that calls the eks module, as shown at the beginning of the thread:
plan.log
Thank you for looking into it.
Hi @daroga0002, does anything in the provided info point to an issue with my code? BR
I reviewed the logs and suspect this happens because the token to EKS is expiring: from the start of the run to executing the aws_auth resource, more than 20 minutes pass. Please try adding something like the following to the kubernetes provider:
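As an illustration only (the data source and variable names below are assumptions, not taken from the config in this thread), an exec-based setup that fetches a fresh token on every call looks roughly like this:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # exec requests a new token from the AWS CLI on each provider call,
  # so it cannot go stale during a long apply or destroy
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}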
I have this in my config at the moment:
This is unlikely to be related to expiration. If I remove the roles and users parameters, I can install and destroy the environment with no issues.
This issue has been automatically marked as stale because it has been open for 30 days.
This issue was automatically closed because it remained stale for 10 days.
This issue has been resolved in version 18.0.0 🎉 |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Description
During terraform destroy execution, it fails on
module.eks.kubernetes_config_map.aws_auth[0]
with error:
Error: Unauthorized
Versions
Reproduction
Steps to reproduce the behavior:
Are you using workspaces?: yes
Have you cleared the local cache (see Notice section above)?: yes
List steps in order that led up to the issue you encountered:
Create an AWS EKS cluster with custom users, then run terraform destroy.
Code Snippet to Reproduce
module "eks" {
version = "17.22.0"
source = "terraform-aws-modules/eks/aws"
cluster_name = var.cluster_name
cluster_version = "1.21"
.
.
.
map_roles = var.map_roles
map_users = var.map_users
}
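For completeness, a module block like the one above is normally paired with a kubernetes provider wired to the cluster; a minimal sketch of the commonly used data-source pattern (resource names here are illustrative, not taken from the reporter's configuration):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}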
Expected behavior
Destroy removes the EKS cluster.
Actual behavior
Destroy fails when deleting the aws_auth configmap.