Description

[x] ✋ I have searched the open/closed issues and my issue is not listed.

When creating a Windows Managed Node Group with `var.manage_aws_auth_configmap = true`, the AWS Console reports an AccessDenied error for the worker nodes.

The instance role is present and configured correctly; however, the configuration in the `aws-auth` configmap is not correct: it is missing `eks:kube-proxy-windows`, which leads to the AccessDenied issues reported in the Console.

When `var.manage_aws_auth_configmap = false`: [configmap screenshot]

When `var.manage_aws_auth_configmap = true`: [configmap screenshot]
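For illustration, the entry that the Windows node role needs in `aws-auth` looks roughly like this, expressed the way the module assembles `mapRoles` entries (the account ID and role name are hypothetical, not taken from the issue):

```hcl
locals {
  # Hypothetical mapRoles entry for a Windows node role. With
  # manage_aws_auth_configmap = true the module currently renders this
  # entry without the eks:kube-proxy-windows group, which is what the
  # AWS Console surfaces as an AccessDenied health issue.
  windows_node_maproles_entry = {
    rolearn  = "arn:aws:iam::123456789012:role/example-windows-node" # hypothetical
    username = "system:node:{{EC2PrivateDNSName}}"
    groups = [
      "eks:kube-proxy-windows", # the group that goes missing
      "system:bootstrappers",
      "system:nodes",
    ]
  }
}
```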
The root cause appears to be that `local.node_iam_role_arns_windows` currently does not look at `module.eks_managed_node_groups` to determine whether `platform == "windows"`. The module therefore assumes all MNGs are Linux or Bottlerocket, and `eks:kube-proxy-windows` is dropped from the config for the Windows MNG. A sketch of the suspected logic follows.
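A minimal sketch of the suspected logic and a possible fix, assuming the module's internals look roughly like the names referenced above (the exact expressions are my approximation, not the module's verbatim source):

```hcl
locals {
  # Approximation of the current behavior: only self-managed node groups
  # are inspected for platform == "windows", so a Windows managed node
  # group's role ARN never lands in the windows list.
  #
  # node_iam_role_arns_windows = compact(concat(
  #   [for group in module.self_managed_node_group : group.iam_role_arn if group.platform == "windows"],
  #   var.aws_auth_node_iam_role_arns_windows,
  # ))

  # A possible fix: also scan the managed node groups for platform == "windows".
  node_iam_role_arns_windows = compact(concat(
    [for group in module.eks_managed_node_group : group.iam_role_arn if group.platform == "windows"],
    [for group in module.self_managed_node_group : group.iam_role_arn if group.platform == "windows"],
    var.aws_auth_node_iam_role_arns_windows,
  ))
}
```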
If your request is for a new feature, please use the Feature request template.

⚠️ Note

Before you submit an issue, please perform the following first:

1. Remove the local `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root to pull down modules: `terraform init`
3. Re-attempt your `terraform plan` or `apply` and check if the issue still persists
Versions
Module version [Required]: v18.31.2
Terraform version: 1.2.2
Reproduction Code [Required]
Steps to reproduce the behavior:
We make heavy use of Terragrunt and wrap this module inside another module, but all that is needed to reproduce is to set `manage_aws_auth_configmap = true` and create two `eks_managed_node_groups`, where one has `platform = "linux"` (or `"bottlerocket"`) and the other has `platform = "windows"`, as in the sketch below.
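A minimal sketch of the reproduction, trimmed to the relevant inputs (the cluster name, version, and networking values are hypothetical placeholders):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.31.2"

  cluster_name    = "windows-auth-repro" # hypothetical
  cluster_version = "1.23"               # hypothetical
  vpc_id          = var.vpc_id           # assumed to exist
  subnet_ids      = var.subnet_ids       # assumed to exist

  manage_aws_auth_configmap = true

  eks_managed_node_groups = {
    linux = {
      platform       = "linux"
      instance_types = ["m5.large"]
    }
    windows = {
      # This node group's role loses eks:kube-proxy-windows in aws-auth.
      platform       = "windows"
      ami_type       = "WINDOWS_CORE_2019_x86_64" # hypothetical choice
      instance_types = ["m5.large"]
    }
  }
}
```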
Expected behavior

A Windows Managed Node Group with no connectivity issues.
Actual behavior
Windows nodes join the cluster, but have connectivity issues.
AWS Console reports AccessDenied errors under node group Health Issues.
Terminal Output Screenshot(s)
Additional context
This was tested using v18.32.1, but it most likely affects the latest release as well.