
Intermittent errors in ibm_container_cluster_config (slice bounds out of range [8:0] - race condition?) #2743

Closed
vburckhardt opened this issue Jun 14, 2021 · 4 comments

Comments

@vburckhardt

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform IBM Provider Version

terraform -v
Terraform v0.15.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/helm v2.1.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.3.1
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.1
+ provider registry.terraform.io/ibm-cloud/ibm v1.26.0

Affected Resource(s)

data "ibm_container_cluster_config"

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

##############################################################################
# Login to cluster
##############################################################################

data "ibm_container_cluster_config" "cluster_config" {
  cluster_name_id = var.cluster_id
}

##############################################################################
# Config providers
##############################################################################

provider "ibm" {
  ibmcloud_api_key = var.ibmcloud_api_key
}

provider "helm" {
  kubernetes {
    host                   = data.ibm_container_cluster_config.cluster_config.host
    token                  = data.ibm_container_cluster_config.cluster_config.token
    cluster_ca_certificate = data.ibm_container_cluster_config.cluster_config.ca_certificate
  }
}

provider "kubernetes" {
  host                   = data.ibm_container_cluster_config.cluster_config.host
  token                  = data.ibm_container_cluster_config.cluster_config.token
  cluster_ca_certificate = data.ibm_container_cluster_config.cluster_config.ca_certificate
}

Debug Output

Gist with full trace: https://gist.github.com/vburckhardt/49b914b363383a4113114970cbb18efe

Panic Output

Expected Behavior

Actual Behavior

Intermittent errors:
Error downloading the cluster config [c2skm3gd0nm0nnpu9uj0]: Could not login to openshift account runtime error: slice bounds out of range [8:0]

Re-running the command a few times gets past this; it does seem to be some kind of race condition.

Steps to Reproduce

  1. terraform apply

Important Factoids

VPC OpenShift clusters. Reproducible on various machines.

@vburckhardt vburckhardt changed the title Intermittent errors in ibm_container_cluster_config (race conditions?) Intermittent errors in ibm_container_cluster_config (slice bounds out of range [8:0] - race condition?) Jun 14, 2021
@jpmonge86

jpmonge86 commented Jun 22, 2021

Experiencing the same issue during terraform plan, apply, or destroy. Re-running once or twice usually fixes it, as @vburckhardt says, but it has become really persistent lately.

I am running Terraform v1.0.0 and provider registry.terraform.io/ibm-cloud/ibm v1.26.2.

@hkantare
Collaborator

I think this is some kind of intermittent race condition where the master URL of the cluster is not reachable.
We are planning to add retries for this case.
This is the PR in development to handle retries:
#2774

We plan to make this fix available in the next release, planned in a couple of days (v1.27.0).
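Once that release is out, consumers can pick up the retry behavior by constraining the provider version in their configuration. A minimal sketch (the source address matches the version listing earlier in this issue; the exact constraint is up to the user):

```hcl
terraform {
  required_providers {
    ibm = {
      source  = "ibm-cloud/ibm"
      version = ">= 1.27.0" # release expected to include the retry fix from #2774
    }
  }
}
```

After updating the constraint, run `terraform init -upgrade` to install the newer provider.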

@kavya498
Collaborator

Available in v1.27.0.

@kavya498
Collaborator

kavya498 commented Jul 7, 2021

Closing this issue. Thanks!

4 participants