azurerm_kubernetes_cluster - Setting default_node_pool.host_encryption_enabled to true causes configuration conflict #27225

Closed
hmb133 opened this issue Aug 28, 2024 · 2 comments · Fixed by #27218

hmb133 commented Aug 28, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.9.0

AzureRM Provider Version

4.0.1

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "this" {
  name                = "example-aks-cluster1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "example-aks"
  #node_resource_group = "example-aks-cluster-nodes"

  default_node_pool {
    name                    = "default"
    node_count              = 1
    vm_size                 = "Standard_DS2_v2"
    auto_scaling_enabled    = true
    host_encryption_enabled = true
    node_public_ip_enabled  = false
    max_count               = 1
    min_count               = 1
    upgrade_settings {
      max_surge = "10%"
    }
    vnet_subnet_id = azurerm_subnet.example.id
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
    outbound_type     = "userDefinedRouting"
    service_cidr      = "10.0.2.128/25"
    dns_service_ip    = "10.0.2.192"
  }
  identity {
    type = "SystemAssigned"
  }
}
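
The configuration above references azurerm_resource_group.example and azurerm_subnet.example, which are not shown in the report. For a self-contained reproduction, supporting resources along the following lines should work; the names, region, address ranges, and next-hop address are placeholders rather than values from the reporter's environment, and the route table is included only because outbound_type = "userDefinedRouting" requires a default route on the node subnet.

resource "azurerm_resource_group" "example" {
  name     = "example-aks-rg"
  location = "westeurope"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-aks-vnet"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "example" {
  name                 = "example-aks-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.0.0/24"]
}

resource "azurerm_route_table" "example" {
  name                = "example-aks-rt"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  route {
    name                   = "default"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.0.1.4" # placeholder firewall/NVA address
  }
}

resource "azurerm_subnet_route_table_association" "example" {
  subnet_id      = azurerm_subnet.example.id
  route_table_id = azurerm_route_table.example.id
}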

Debug Output/Panic Output

│ Error: creating Kubernetes Cluster (Subscription: "xxx-xxx-xxx-xxx-xxx"
│ Resource Group Name: "xxx-xxx-xxx-xxx-xxx"
│ Kubernetes Cluster Name: "example-aks-cluster1"): performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with response: {
│   "code": "UDRWithNodePublicIPNotAllowed",
│   "details": null,
│   "message": "OutboundType UserDefinedRouting can not be combined with Node Public IP.",
│   "subcode": ""
│  }
│ 
│   with module.aks_cluster.azurerm_kubernetes_cluster.this,
│   on ../../main.tf line 1, in resource "azurerm_kubernetes_cluster" "this":
│    1: resource "azurerm_kubernetes_cluster" "this" {

Expected Behaviour

Setting default_node_pool.host_encryption_enabled to true should not affect default_node_pool.node_public_ip_enabled.

Actual Behaviour

If default_node_pool.host_encryption_enabled is set to false, this error does not occur. Setting default_node_pool.host_encryption_enabled to true appears to cause the provider to send true for default_node_pool.node_public_ip_enabled in the create request.

It should also be noted that the plan shows default_node_pool.node_public_ip_enabled as false, even though the error returned suggests otherwise:

...
      + default_node_pool {
          + auto_scaling_enabled    = true
          + fips_enabled            = true
          + host_encryption_enabled = true
          + kubelet_disk_type       = (known after apply)
          + max_count               = 4
          + max_pods                = 30
          + min_count               = 1
          + name                    = (known after apply)
          + node_count              = 3
          + node_labels             = (known after apply)
          + node_public_ip_enabled  = false
...
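
Until a provider release containing the fix from #27218 can be adopted, the behaviour described above suggests one stop-gap: leave default_node_pool.host_encryption_enabled unset or false, at the cost of temporarily giving up encryption at host. Once a fixed 4.x version is published, pinning it explicitly avoids picking up the broken release again; the constraint below is a placeholder, since the fixed release is not named in this issue.

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Placeholder constraint: replace with the first 4.x release that
      # includes the fix from #27218 once it is published.
      version = ">= 4.0.1"
    }
  }
}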

Steps to Reproduce

No response

Important Factoids

No response

References

No response

slushysnowman (Contributor) commented

This is really quite a big issue - it would be great if a fix could be merged and released, because this is killing our setup: every rollout now contains 'changes' and costs us about 10 minutes extra per cluster...
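
If the recurring diff mentioned here is on the node-pool flag itself (an assumption on my part, not something confirmed in this thread), a temporary lifecycle ignore can suppress it until a fixed provider version is rolled out; note that it also hides legitimate changes to that attribute.

resource "azurerm_kubernetes_cluster" "this" {
  # ... existing arguments as in the original configuration ...

  lifecycle {
    # Stop-gap only: suppresses the spurious diff on the flag, but also masks
    # genuine changes to it. Remove once a fixed provider version is in use.
    ignore_changes = [
      default_node_pool[0].host_encryption_enabled,
    ]
  }
}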


github-actions bot commented Oct 6, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Oct 6, 2024