
Terraform 0.12 plan fails with Error: fork/exec /usr/local/bin/terraform: resource temporarily unavailable #21827

Closed
stahs opened this issue Jun 20, 2019 · 2 comments


stahs commented Jun 20, 2019

Hi,

I am running into issues running plan with Terraform 0.12 on my MacBook running macOS 10.14.5 (I have also observed this behavior on Ubuntu 18.04). Terraform forks hundreds of internal-plugin child processes like:

NAME 91093   0.0  0.1  4429812  12980 s001  S+   12:01PM   0:00.02 /usr/local/bin/terraform internal-plugin provisioner chef
NAME 91073   0.0  0.1  4429480  11948 s001  S+   12:01PM   0:00.02 /usr/local/bin/terraform internal-plugin provisioner remote-exec
NAME 91060   0.0  0.1  4429376  12472 s001  S+   12:01PM   0:00.02 /usr/local/bin/terraform internal-plugin provisioner chef
NAME 91057   0.0  0.1  4428788  12536 s001  S+   12:01PM   0:00.02 /usr/local/bin/terraform internal-plugin provisioner local-exec
NAME 91056   0.0  0.1  4428788  13304 s001  S+   12:01PM   0:00.02 /usr/local/bin/terraform internal-plugin provisioner local-exec
NAME 91055   0.0  0.1  4428428  13308 s001  S+   12:01PM   0:00.03 /usr/local/bin/terraform 

This eventually leads to the failure:

Error: fork/exec /usr/local/bin/terraform: resource temporarily unavailable

If I monitor Terraform processes in a separate window during the plan, I see this:

while true; do ps auxw | grep terraform | wc -l; sleep 2; done
     620
     620
     620
     620
     620
     620
     620
     620
     619
     619
     619
     619
     619
     615
     615
     615
     614
     614
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     613
     612
     612
     612
     610
     608
     608
     653
     719
     807
     917
    1053
-bash: fork: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
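The error itself is EAGAIN from fork(2), which typically means the per-user process limit has been exhausted. As a quick diagnostic (hypothetical commands, not part of the original report), the relevant limit and the current Terraform process count can be inspected like this:

```shell
#!/usr/bin/env bash
# Hypothetical diagnostic, not from the issue itself: inspect the
# limits that fork/exec EAGAIN usually runs into.

# Soft per-user process limit for the current shell session
# (prints a number, or "unlimited"):
ulimit -u

# Count terraform-related processes; unlike "ps | grep", pgrep does
# not match itself. Exits nonzero when the count is 0:
pgrep -fc terraform || true

# On macOS, the kernel-wide ceilings can be read with sysctl
# (assumption: these keys exist on 10.14):
#   sysctl kern.maxproc kern.maxprocperuid
```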

When I run plan with TF_LOG_PATH and TF_LOG, I can find these errors in the log:

grep ERROR tf-log
var.NAME/06/20 12:18:05 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
2019/06/20 12:18:09 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
2019/06/20 12:18:44 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
var.NAME/06/20 12:18:44 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
var.NAME - *terraform.Node2019/06/20 12:18:47 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
2019/06/20 12:18:47 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
2019/06/20 12:19:15 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
var.blah_vpc_id - *terraform.NodeRoot2019/06/20 12:21:30 [ERROR] AttachSchemaTransformer: No provider config schema available for provider.terraform
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] module.NAME.module.NAME.module.NAME: eval: *terraform.EvalInitProvisioner, err: fork/exec /usr/local/bin/terraform: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] <root>: eval: *terraform.EvalInitProvider, err: fork/exec /Users/NAME/NAME/infra_terraform/.terraform/plugins/darwin_amd64/terraform-provider-template_v2.1.2_x4: resource temporarily unavailable
2019/06/20 12:22:04 [ERROR] <root>: eval: *terraform.EvalSequence, err: fork/exec /Users/NAME/NAME/infra_terraform/.terraform/plugins/darwin_amd64/terraform-provider-template_v2.1.2_x4: resource temporarily unavailable

I am using the newest versions of Terraform and the AWS provider.

terraform -v
Terraform v0.12.2
+ provider.aws v2.15.0
+ provider.null v2.1.2
+ provider.template v2.1.2

My repository contains ~230 AWS servers defined using a shared AWS server module that uses the chef, local-exec, and remote-exec provisioners; everything has been upgraded using terraform 0.12upgrade.

The shared AWS server module structure is:

resource "aws_instance" "aws_server" {
  ..
  ..

  lifecycle {
    ignore_changes = [
      ..
    ]
  }

  connection {
    type = "ssh"
    ..
  }

  provisioner "local-exec" {
    ..
  }

  provisioner "remote-exec" {
    ..
    connection {
      type = "ssh"
      host = self.private_ip
    }
  }

  provisioner "chef" {
    ..
    connection {
      type = "ssh"
      host = self.private_ip
    }
  }

  provisioner "local-exec" {
    ..
  }

  provisioner "local-exec" {
    ..
  }

  provisioner "remote-exec" {
    ..
    connection {
      type = "ssh"
      host = self.private_ip
    }
  }
}
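For scale, the module above defines six provisioner blocks per instance. As a hedged back-of-the-envelope (the "one plugin process per provisioner block" accounting is an assumption here, and the demo file below is a stand-in, not the real configuration), counting blocks and multiplying by the instance count bounds the process explosion from above:

```shell
#!/usr/bin/env bash
set -eu
# Hypothetical estimate, not from the report: if validation starts one
# plugin process per provisioner block, a module with six blocks used
# by ~230 instances gives an upper bound on the process count.

# Stand-in module file mirroring the six provisioner blocks shown:
dir=$(mktemp -d)
for p in local-exec remote-exec chef local-exec local-exec remote-exec; do
  printf 'provisioner "%s" {}\n' "$p"
done > "$dir/server.tf"

# grep -rc prints "file:count"; awk sums the counts across files.
blocks=$(grep -rc 'provisioner "' "$dir" | awk -F: '{s += $2} END {print s+0}')
echo "blocks per instance: $blocks"     # prints 6
echo "upper bound: $((blocks * 230))"   # 6 * 230 = 1380 processes
```

The observed ~620 processes sit below this bound, so the exact accounting evidently differs, but the order of magnitude matches the report.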
@apparentlymart (Contributor) commented

Thanks for reporting this, @stahs.

This seems to be the Unix flavor of the problem in #21584. The short version: Terraform starts one plugin process per provisioner block in order to validate it, so configurations with an extreme number of provisioners lead to an extreme number of processes. As I mentioned in the other issue, I expect we can refactor so that Terraform starts just a single process and uses it for all of the validation steps, then spins up separate processes only to handle the real provisioner steps; those are bounded by the Terraform concurrency limit (10 by default) and by the slow speed of provisioning, and thus cannot become so excessive.

Since this covers the same problem as the other issue, I'm going to close this one to consolidate, and remove the "windows" label from the other one to reflect that it manifests in a different way on Unix-like operating systems too. Thanks again!
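For reference, the concurrency limit mentioned above is the -parallelism flag. This invocation is illustrative only; per the explanation above, it bounds concurrent graph operations during plan/apply, and is not a fix for the validation-time forking described in this issue:

```shell
# Illustrative CLI fragment: cap concurrent graph operations at the
# default of 10. Does not prevent per-block validation processes.
terraform apply -parallelism=10
```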


ghost commented Jul 25, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Jul 25, 2019