
azurerm_kubernetes_cluster should use either service_principal OR identity. #6347

Closed
asubmani opened this issue Apr 3, 2020 · 10 comments
Labels: breaking-change, service/kubernetes-cluster, upstream/microsoft (indicates an upstream issue blocking this issue/PR)
Milestone: v2.5.0

asubmani commented Apr 3, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.24
azurerm v2.4.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {
/* service_principal {
    client_id         = var.K8SSPId
    client_secret     = var.K8SSPSecret
  }*/

  identity {
    type = "SystemAssigned"
  }
}
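
For context, the snippet above is abbreviated: a complete azurerm_kubernetes_cluster also needs name, location, resource_group_name, dns_prefix, and a default_node_pool. A minimal identity-only sketch of what the reporter is trying to express (all names and sizes are hypothetical):

resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  # No service_principal block; the cluster should authenticate with its
  # system-assigned managed identity instead.
  identity {
    type = "SystemAssigned"
  }
}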

Debug Output

Error: Missing required argument

 Error: "service_principal": required field is not set

  on Modules/02_AKS_ClusterwithRBAC/Main.tf line 7, in resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC":
   7: resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {

Expected Behavior

The plan should succeed, as identity is defined.

Actual Behavior

The resource complains that service_principal is not defined even though the alternative, identity, is provided. If I specify both, it works; however, that is not acceptable, because in AAD you use either an SPN or a managed identity (the latter is recommended, as it is passwordless).

Steps to Reproduce

Remove or comment out the service_principal section in the azurerm_kubernetes_cluster resource.

/*service_principal {
    client_id         = <Client ID of the SPN>
    client_secret     = <secret of the SPN>
  } */

Add an identity section (to replace the service_principal section):

identity {
    type = "SystemAssigned"
  }

If both are specified, the identity is created; however, we want to stop using SPNs for the cluster entirely.

  1. terraform plan

Important Factoids

As of March 19, 2020, this feature is GA in AKS. See Azure/AKS#993.



asubmani commented Apr 3, 2020

I would also take this opportunity to cover the following:

If an update requires modifying the azurerm_kubernetes_cluster resource in place (without redeployment), check whether that regenerates the resource ID of the managed identity. If it does, this would break all downstream RBAC assignments.

I would expect Terraform to check whether the identity would change, or has changed, after terraform apply.
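
For context, downstream RBAC typically binds to the identity's principal ID, so a regenerated identity would leave the assignment pointing at a stale principal. A hedged sketch (the role and scope are hypothetical):

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Network Contributor"
  # principal_id of the cluster's system-assigned identity; if an in-place
  # update regenerated this ID, the assignment would reference a stale principal
  principal_id         = azurerm_kubernetes_cluster.TerraAKSwithRBAC.identity.0.principal_id
}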


heoelri commented Apr 4, 2020

Thank you @asubmani for bringing this up; I hit exactly the same issue today. I can deploy my AKS cluster once using a managed identity successfully, but the second run always fails because the client_secret in service_principal is different.

  service_principal {
    client_id     = var.client_id     # defaults to msi
    client_secret = var.client_secret # cannot be null
  }
  identity {
    type = "SystemAssigned"
  }

The second run then produces this plan and fails:

~ resource "azurerm_kubernetes_cluster" "deployment" {
      ...
    ~ service_principal {
          client_id     = "msi"
        ~ client_secret = (sensitive value)
      }
  }
Error: Error updating Service Principal for Kubernetes Cluster "suitabletunaaks" (Resource Group "cet-rampup-weu"): containerservice.ManagedClustersClient#ResetServicePrincipalProfile: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Updating service principal profile is not allowed on MSI cluster."

  on kubernetes-cluster.tf line 10, in resource "azurerm_kubernetes_cluster" "deployment":
  10: resource "azurerm_kubernetes_cluster" "deployment" {

It would be great if service_principal were not required when identity is set (and vice versa), with client_secret set to null and client_id set to "msi" automatically. But it seems this comes with some complexity.
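
In the meantime, a possible stopgap sketch (untested; assumes the only diff on the second run comes from the service_principal block) is to have Terraform ignore changes to that block:

resource "azurerm_kubernetes_cluster" "deployment" {
  # ...
  lifecycle {
    # Hypothetical workaround: suppress the spurious service_principal diff
    # until the provider fix ships.
    ignore_changes = [service_principal]
  }
}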

I tried to solve it on my own, but I don't have much experience with contributing to a Terraform provider yet. Here's my shot: master...heoelri:patch-2 (@tombuildsstuff it would be great if you could take a look, in case you haven't already finished it).

tombuildsstuff added this to the v2.5.0 milestone Apr 7, 2020
@tombuildsstuff

@heoelri sorry, just seen this. Thanks for taking a look; unfortunately, since you posted this comment the fix has been merged via #6095, so it will ship in v2.5 of the Azure Provider in the near future.

Since this has been fixed via #6095, I'm going to close this issue for the moment.

Thanks!


ghost commented Apr 9, 2020

This has been released in version 2.5.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.5.0"
}
# ... other configuration ...

@andrediasbr

I'm getting "Unsupported attribute" when using azurerm_kubernetes_cluster.example.kubelet_identity.object_id with azurerm 2.6.0.


heoelri commented Apr 23, 2020

Try using the following syntax: azurerm_kubernetes_cluster.<name>.kubelet_identity.0.object_id
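
For example, a minimal sketch (the output and resource names are hypothetical):

output "kubelet_object_id" {
  # kubelet_identity is exported as a list of blocks, so the first element
  # has to be indexed explicitly
  value = azurerm_kubernetes_cluster.aks.kubelet_identity.0.object_id
}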

@andrediasbr

@heoelri azurerm_kubernetes_cluster.aks.kubelet_identity is an empty list of object, and indexing it fails with:

The given key does not identify an element in this collection value.


heoelri commented Apr 23, 2020

OK, weird. Have you deployed your cluster with MI enabled? Here's my config:

provider "azurerm" {
  version                     = ">= 2.6.0"
  [..]
}

resource "azurerm_kubernetes_cluster" "deployment" {
  [..]
  identity {
    type = "SystemAssigned"
  }
  [..]
}

resource "azurerm_role_assignment" "acrpull_role" {
  scope                            = data.azurerm_subscription.primary.id 
  role_definition_name             = "AcrPull"
  principal_id                     = azurerm_kubernetes_cluster.deployment.kubelet_identity.0.object_id 
  skip_service_principal_aad_check = true
}

@andrediasbr

@heoelri I found the issue. If you deploy AKS and the role assignment together for the first time, you get the error I mentioned. However, if you deploy AKS first and then deploy the role assignment, it works fine. It seems to be a bug. Thanks! :-)
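
A possible workaround sketch for that ordering, keeping both resources in one configuration (the resource address is taken from the config above), is a two-step apply with Terraform's -target flag:

terraform apply -target=azurerm_kubernetes_cluster.deployment
terraform apply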


ghost commented May 7, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators May 7, 2020