
Terraform 0.12.14 panic during show plan #23377

Closed
BookOfGreg opened this issue Nov 14, 2019 · 16 comments · Fixed by #23581
Labels
bug, cli, crash, v0.12 (issues (primarily bugs) reported against v0.12 releases)

Comments

@BookOfGreg

BookOfGreg commented Nov 14, 2019

Terraform Version

Terraform v0.12.14
+ provider.aws v2.35.0
+ provider.random v2.2.1

Crash Output

https://gist.github.com/BookOfGreg/97d0dca47b9e3cff5889b50d90270188

Expected Output

A normal plan.

Actual Output

...
     + setting {
          + name      = "VPCId"
          + namespace = "aws:ec2:vpc"
          + value     = "vpc-myvpcid"
        }
    }

  # aws_security_group.utils will be updated in-place
  ~ resource "aws_security_group" "utils" {
        arn                    = "arn:aws:ec2:eu-west-2:myaccount:security-group/sg-mygroup"
        description            = "Utils security group"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x720776]
...
the panic
...
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
        egress                 = [
            {
                cidr_blocks      = [
                    "0.0.0.0/0",
                ]

It panics partway through rendering and then continues printing output after the crash banner.

Steps to Reproduce

Cannot reproduce reliably: it happened once, and subsequent runs of terraform show on the same plan file did not crash.
I now have a plan file that crashes Terraform fairly consistently; I'll see if I can strip it of sensitive info for sharing.

Additional Context

Upgraded from 0.12.13 because of this bug:
#21949
0.12.13 did not crash, but it also did not have the expected behavior for terraform show on a plan file.

@ghost ghost added bug crash labels Nov 14, 2019
@hashibot hashibot added cli v0.12 Issues (primarily bugs) reported against v0.12 releases labels Nov 15, 2019
@danieldreier
Contributor

@BookOfGreg thanks for reporting this! Have you been able to put together a demonstration config that reproduces this issue?

@BookOfGreg
Author

Unfortunately my boss hasn't allowed me to put time into this yet. I will provide one when I can, which might not be soon.

@lifeofguenter

Having a similar issue: plan and apply work, but show crashes. Downgrading to 0.12.13 resolves it.

@pkolyvas pkolyvas added the waiting-response An issue/pull request is waiting for a response from the community label Nov 19, 2019
@lifeofguenter

@pkolyvas what do you need? This is failing on multiple terraform projects.

@ghost ghost removed the waiting-response An issue/pull request is waiting for a response from the community label Nov 21, 2019
@BookOfGreg
Author

A set of HCL (preferably small) that causes the crash, as a reproduction so it can be debugged.
Unfortunately I work for a bigcorp and my boss won't allow me to contribute, and I'd rather not be sued over it. Hopefully someone else is in a better position to help (sorry, HashiCorp team!)

@valorl

valorl commented Nov 21, 2019

I just ran into this as well on 0.12.16. It's easily reproducible for me with this simple example:

provider "aws" {
  region = "eu-central-1"
}
resource "aws_vpc" "test-aws-vpc" {
  cidr_block = "10.123.4.0/24"
}

This is the process for me:

  1. terraform plan -out=plan.tfplan
  2. terraform show plan.tfplan
  3. No issues so far. Shows a plan from the file with a full addition of the new VPC
  4. terraform apply to get the VPC created
  5. Change the cidr_block of the aws_vpc, e.g. to 10.123.3.0/24
  6. terraform plan -out=plan.tfplan; so far so good, it outputs a plan containing a replacement of the VPC
  7. terraform show plan.tfplan
  8. panic (see trace below)

I was only able to reproduce it when the plan replaces the VPC. If the resource is added or destroyed, or both (e.g. by renaming the resource), then it seems to work.

Regarding the downgrade workaround, is there an easy way for me to downgrade the state file?

Panic log:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x720776]

goroutine 1 [running]:
github.com/zclconf/go-cty/cty/set.Set.Has(0x0, 0x0, 0x0, 0x1ce2f00, 0xc0000ad8a0, 0xc0003da38e)
	/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/set/ops.go:54 +0x26
github.com/zclconf/go-cty/cty.PathSet.Has(...)
	/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/path_set.go:53
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).pathForcesNewResource(0xc00066b5a8, 0xc000b80c00, 0x1, 0x3, 0x3480e20)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:1036 +0xb9
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeValueDiff(0xc00066b5a8, 0x230ff80, 0xc0000c6390, 0x1a6b180, 0xc0009744a0, 0x230ff80, 0xc0000c6390, 0x1b2a9c0, 0x3480e20, 0x8, ...)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:1005 +0xf9b
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeAttrDiff(0xc00066b5a8, 0xc000660f27, 0x3, 0xc000753c80, 0x230ff80, 0xc0000c6390, 0x1a6b180, 0xc0009744a0, 0x230ff80, 0xc0000c6390, ...)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:263 +0x4c4
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeBlockBodyDiff(0xc00066b5a8, 0xc00014f8b0, 0x2310080, 0xc000494068, 0x1b10f20, 0xc000b81830, 0x2310080, 0xc000494070, 0x1b10f20, 0xc000b818c0, ...)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:196 +0x5d5
github.com/hashicorp/terraform/command/format.ResourceChange(0xc0002c4300, 0x0, 0xc00014f8b0, 0xc000974230, 0x4d, 0xc0006c00a0)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:140 +0x5ff
github.com/hashicorp/terraform/backend/local.RenderPlan(0xc00012e070, 0xc000494040, 0xc00014f710, 0x2323d20, 0xc00056d700, 0xc000974230)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/backend/local/backend_plan.go:288 +0x68c
github.com/hashicorp/terraform/command.(*ShowCommand).Run(0xc000585ba0, 0xc0000b4170, 0x1, 0x1, 0xc0000b3af0)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/show.go:165 +0xb04
github.com/mitchellh/cli.(*CLI).Run(0xc00036e280, 0xc00036e280, 0xc00054dd90, 0x1)
	/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/mitchellh/[email protected]/cli.go:255 +0x1f1
main.wrappedMain(0x0)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:238 +0xc34
main.realMain(0x0)
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:102 +0xb4
main.main()
	/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:38 +0x3b

@lifeofguenter

@valorl thanks so much for taking the time and creating a reproducible case.

Unfortunately that was our issue as well: the crashes happened during security-group updates (as also seen in the OP's initial report), not while adding or deleting resources. That is also why we only noticed this breaking issue after we had already upgraded projects.

Which also brings us to this:

Regarding the downgrade workaround, is there any easy way for me to downgrade the state file ?

This is unfortunately not possible, and it completely bit us in the ass this time. We had to comment out every terraform show in our CI/CD pipeline (luckily it's not required for us to deploy).

We hope that, going forward, a test case will be implemented against this, and we are patiently looking forward to a fix.

@valorl

valorl commented Nov 21, 2019

@lifeofguenter I was able to downgrade the state file by manually changing terraform_version to 0.12.13 (my current use case isn't production, so I gave it a shot).
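For anyone else trying that workaround, here is a minimal sketch of the manual edit, assuming a local JSON state file; the function name, path, and target version are illustrative, and hand-editing state is unsupported, so back the file up first:

```python
import json

def downgrade_state_version(path, target="0.12.13"):
    """Rewrite the terraform_version field in a local Terraform state file.

    Unsupported manual edit: back up the file before running this.
    """
    with open(path) as f:
        state = json.load(f)
    state["terraform_version"] = target  # the only field changed
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state["terraform_version"]
```

For remote state (e.g. the s3 backend in the logs above), you would presumably need to fetch and re-upload the state with terraform state pull and terraform state push rather than editing a local file directly.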

@valorl

valorl commented Nov 21, 2019

This bug could be related to changes made to show after 0.12.13. I can see that show works in a much more limited way in 0.12.13: terraform show plan.tfplan only prints an abbreviated version of the diff, e.g.

+ module.vpc.aws_vpc.tf-network-aws-vpc

In 0.12.16 it shows the full diff, equivalent to the output of plan (which makes a lot more sense).

Just thought I'd mention this in case someone knows of PRs or commits that could be referenced here.

@php-denken

I can confirm this crash on 0.12.14 and 0.12.16. I'm writing here to get updates on this and to provide an additional crash log.

2019/11/21 15:47:03 [INFO] Terraform version: 0.12.16
2019/11/21 15:47:03 [INFO] Go runtime version: go1.12.13
2019/11/21 15:47:03 [INFO] CLI args: []string{"/bin/terraform", "show", "web_non_prod.plan"}
2019/11/21 15:47:03 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2019/11/21 15:47:03 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/11/21 15:47:03 [INFO] CLI command args: []string{"show", "web_non_prod.plan"}
2019/11/21 15:47:03 [TRACE] Meta.Backend: BackendOpts.Config not set, so using settings loaded from main.tf:13,3-15
2019/11/21 15:47:03 [TRACE] Meta.Backend: built configuration for "s3" backend with hash value 704415183
2019/11/21 15:47:03 [TRACE] Preserving existing state lineage "3141ea62-f0d1-a0f3-0a4e-7df8dfd741f8"
2019/11/21 15:47:03 [TRACE] Preserving existing state lineage "3141ea62-f0d1-a0f3-0a4e-7df8dfd741f8"
2019/11/21 15:47:03 [TRACE] Meta.Backend: working directory was previously initialized for "s3" backend
2019/11/21 15:47:03 [TRACE] Meta.Backend: using already-initialized, unchanged "s3" backend configuration
2019/11/21 15:47:03 [INFO] Setting AWS metadata API timeout to 100ms
2019/11/21 15:47:04 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/11/21 15:47:04 [INFO] AWS Auth provider used: "EnvProvider"
2019/11/21 15:47:04 [DEBUG] Trying to get account information via sts:GetCallerIdentity
2019/11/21 15:47:04 [TRACE] Meta.Backend: instantiated backend of type *s3.Backend
2019/11/21 15:47:04 [DEBUG] checking for provider in "."
2019/11/21 15:47:04 [DEBUG] checking for provider in "/bin"
2019/11/21 15:47:04 [DEBUG] checking for provider in ".terraform/plugins/linux_amd64"
2019/11/21 15:47:04 [DEBUG] found provider "terraform-provider-aws_v2.38.0_x4"
2019/11/21 15:47:04 [DEBUG] found provider "terraform-provider-template_v2.1.2_x4"
2019/11/21 15:47:04 [DEBUG] found valid plugin: "aws", "2.38.0", "/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4"
2019/11/21 15:47:04 [DEBUG] found valid plugin: "template", "2.1.2", "/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4"
2019/11/21 15:47:04 [DEBUG] checking for provisioner in "."
2019/11/21 15:47:04 [DEBUG] checking for provisioner in "/bin"
2019/11/21 15:47:04 [DEBUG] checking for provisioner in ".terraform/plugins/linux_amd64"
2019/11/21 15:47:04 [TRACE] Meta.Backend: backend *s3.Backend does not support operations, so wrapping it in a local backend
2019/11/21 15:47:04 [TRACE] backend/local: requesting state manager for workspace "default"
2019/11/21 15:47:05 [TRACE] backend/local: requesting state lock for workspace "default"
2019/11/21 15:47:05 [TRACE] backend/local: reading remote state for workspace "default"
2019/11/21 15:47:05 [TRACE] backend/local: retrieving local state snapshot for workspace "default"
2019/11/21 15:47:05 [TRACE] backend/local: building context from plan file
2019/11/21 15:47:05 [TRACE] terraform.NewContext: starting
2019/11/21 15:47:05 [TRACE] terraform.NewContext: resolving provider version selections
2019/11/21 15:47:06 [TRACE] terraform.NewContext: loading provider schemas
2019/11/21 15:47:06 [TRACE] LoadSchemas: retrieving schema for provider type "aws"
2019-11-21T15:47:06.053Z [INFO] plugin: configuring client automatic mTLS
2019-11-21T15:47:06.084Z [DEBUG] plugin: starting plugin: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4 args=[/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4]
2019-11-21T15:47:06.084Z [DEBUG] plugin: plugin started: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4 pid=35
2019-11-21T15:47:06.084Z [DEBUG] plugin: waiting for RPC address: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4
2019-11-21T15:47:06.101Z [INFO] plugin.terraform-provider-aws_v2.38.0_x4: configuring server automatic mTLS: timestamp=2019-11-21T15:47:06.101Z
2019-11-21T15:47:06.136Z [DEBUG] plugin.terraform-provider-aws_v2.38.0_x4: plugin address: address=/tmp/plugin617875124 network=unix timestamp=2019-11-21T15:47:06.136Z
2019-11-21T15:47:06.136Z [DEBUG] plugin: using plugin: version=5
2019/11/21 15:47:06 [TRACE] GRPCProvider: GetSchema
2019/11/21 15:47:06 [TRACE] GRPCProvider: Close
2019-11-21T15:47:06.273Z [DEBUG] plugin: plugin process exited: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.38.0_x4 pid=35
2019-11-21T15:47:06.273Z [DEBUG] plugin: plugin exited
2019/11/21 15:47:06 [TRACE] LoadSchemas: retrieving schema for provider type "template"
2019-11-21T15:47:06.273Z [INFO] plugin: configuring client automatic mTLS
2019-11-21T15:47:06.309Z [DEBUG] plugin: starting plugin: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4 args=[/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4]
2019-11-21T15:47:06.310Z [DEBUG] plugin: plugin started: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4 pid=52
2019-11-21T15:47:06.310Z [DEBUG] plugin: waiting for RPC address: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4
2019-11-21T15:47:06.315Z [INFO] plugin.terraform-provider-template_v2.1.2_x4: configuring server automatic mTLS: timestamp=2019-11-21T15:47:06.314Z
2019-11-21T15:47:06.348Z [DEBUG] plugin.terraform-provider-template_v2.1.2_x4: plugin address: address=/tmp/plugin352314194 network=unix timestamp=2019-11-21T15:47:06.348Z
2019-11-21T15:47:06.348Z [DEBUG] plugin: using plugin: version=5
2019/11/21 15:47:06 [TRACE] GRPCProvider: GetSchema
2019/11/21 15:47:06 [TRACE] GRPCProvider: Close
2019-11-21T15:47:06.423Z [DEBUG] plugin: plugin process exited: path=/data/artifactory_web/.terraform/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4 pid=52
2019-11-21T15:47:06.423Z [DEBUG] plugin: plugin exited
2019/11/21 15:47:06 [TRACE] terraform.NewContext: complete
2019/11/21 15:47:06 [TRACE] backend/local: finished building terraform.Context
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x720776]

goroutine 1 [running]:
github.com/zclconf/go-cty/cty/set.Set.Has(0x0, 0x0, 0x0, 0x1ce2f00, 0xc0004c38c0, 0xc0002ca8b3)
/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/set/ops.go:54 +0x26
github.com/zclconf/go-cty/cty.PathSet.Has(...)
/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/zclconf/[email protected]/cty/path_set.go:53
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).pathForcesNewResource(0xc0006c15a8, 0xc0007897a0, 0x1, 0x3, 0x3329800)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:1036 +0xb9
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeAttrDiff(0xc0006c15a8, 0xc000798380, 0x1b, 0xc000a818c0, 0x230ff80, 0xc0000c43a1, 0x1a561c0, 0x3329800, 0x230ff80, 0xc0000c43a1, ...)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:256 +0x3a8
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeBlockBodyDiff(0xc0006c15a8, 0xc000771a80, 0x2310080, 0xc0007c6120, 0x1b10f20, 0xc000b45a10, 0x2310080, 0xc0007c6158, 0x1b10f20, 0xc000a30810, ...)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:196 +0x5d5
github.com/hashicorp/terraform/command/format.ResourceChange(0xc0008ca100, 0x0, 0xc000771a80, 0xc00002c860, 0x4d, 0xc0009f25c0)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/format/diff.go:140 +0x5ff
github.com/hashicorp/terraform/backend/local.RenderPlan(0xc0001b6380, 0xc0000cec90, 0xc000206700, 0x2323d20, 0xc00000df00, 0xc00002c860)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/backend/local/backend_plan.go:288 +0x68c
github.com/hashicorp/terraform/command.(*ShowCommand).Run(0xc0004cf1e0, 0xc0000b2170, 0x1, 0x1, 0xc000084150)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/command/show.go:165 +0xb04
github.com/mitchellh/cli.(*CLI).Run(0xc000470780, 0xc000470780, 0xc00052bd90, 0x1)
/opt/teamcity-agent/work/9e329aa031982669/pkg/mod/github.com/mitchellh/[email protected]/cli.go:255 +0x1f1
main.wrappedMain(0x0)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:238 +0xc34
main.realMain(0x0)
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:102 +0xb4
main.main()
/opt/teamcity-agent/work/9e329aa031982669/src/github.com/hashicorp/terraform/main.go:38 +0x3b

@zopanix
Contributor

zopanix commented Nov 21, 2019

Hey, I can confirm the same for terraform show on v0.12.15; we just upgraded from v0.12.12, where we did not have the issue. Unfortunately I don't want to downgrade, so I'll just comment out the terraform show step in the pipeline.

@piotrb

piotrb commented Dec 2, 2019

Here is a little more information about the changeset that is causing this.

(output from plan)

  # module.pod.module.jane.module.sidekiq-service-high.module.task_definition.aws_ecs_task_definition.main must be replaced
-/+ resource "aws_ecs_task_definition" "main" {
      ~ arn                      = "arn:aws:ecs:ca-central-1:702929988523:task-definition/s-cac1-ex1-jane_sidekiq_high:18" -> (known after apply)
      ~ container_definitions    = jsonencode(
          ~ [ # forces replacement
              ~ {
                    command          = [
                        "pod-launch",
                        "forerun",
                        "sidekiq_high",
                    ]
                    cpu              = 1024
                    dockerLabels     = {
                        APP_NAME                      = "jane"
                        POD_ENVIRONMENT               = "sandbox"
                        POD_NAME                      = "s-cac1-ex1"
                        POD_REGION                    = "ca-central-1"
                        POD_SERVICE                   = "jane_sidekiq_high"
                        TASK_WORKLOAD_TYPE            = "main"
                        app.jane.app_name             = "jane"
                        com.datadoghq.ad.check_names  = jsonencode([])
                        com.datadoghq.ad.init_configs = jsonencode([])
                        com.datadoghq.ad.instances    = jsonencode([])
                        com.datadoghq.ad.logs         = jsonencode(
                            [
                                {
                                    service = "jane_sidekiq_high"
                                    source  = "ruby"
                                },
                            ]
                        )
                        docker-monitor-oom            = jsonencode(
                            [
                                {
                                    signal    = "SIGINT"
                                    threshold = 95
                                },
                            ]
                        )
                    }
                  ~ environment      = [
                      - {
                          - name  = "AWS_REGION"
                          - value = "ca-central-1"
                        },
                      - {
                          - name  = "POD_SERVICE"
                          - value = "jane_sidekiq_high"
                        },
                      - {
                          - name  = "POD_NAME"
                          - value = "s-cac1-ex1"
                        },
                        {
                            name  = "APP_NAME"
                            value = "jane"
                        },
                      ~ {
                          ~ name  = "POD_ENVIRONMENT" -> "AWS_REGION"
                          ~ value = "sandbox" -> "ca-central-1"
                        },
                        {
                            name  = "LANG"
                            value = "en_US.UTF-8"
                        },
                      + {
                          + name  = "POD_ENVIRONMENT"
                          + value = "sandbox"
                        },
                      + {
                          + name  = "POD_NAME"
                          + value = "s-cac1-ex1"
                        },
                      + {
                          + name  = "POD_SERVICE"
                          + value = "jane_sidekiq_high"
                        },
                        {
                            name  = "SSM_PATHS"
                            value = "/jane,/jane/app.jane,/jane/env.sandbox,/jane/pod.s-cac1-ex1,/jane/env-app.sandbox.jane,/jane/pod-app.s-cac1-ex1.jane,/jane/pod-svc.s-cac1-ex1.jane_sidekiq_high"
                        },
                    ]
                    essential        = true
                    image            = "REDACTED"
                    logConfiguration = {
                        logDriver = "json-file"
                        options   = {
                            max-file = "3"
                            max-size = "10m"
                        }
                    }
                  ~ memory           = 1536 -> 2048
                    mountPoints      = []
                    name             = "jane_sidekiq_high"
                    portMappings     = []
                    volumesFrom      = []
                } # forces replacement,
            ]
        )
        cpu                      = "1024"
        execution_role_arn       = "arn:aws:iam::702929988523:role/s-cac1-ex1-assume-role"
        family                   = "s-cac1-ex1-jane_sidekiq_high"
      ~ id                       = "s-cac1-ex1-jane_sidekiq_high" -> (known after apply)
      ~ memory                   = "1536" -> "2048" # forces replacement
        network_mode             = "bridge"
        requires_compatibilities = [
            "EC2",
        ]
      ~ revision                 = 18 -> (known after apply)
      - tags                     = {} -> null
        task_role_arn            = "arn:aws:iam::702929988523:role/s-cac1-ex1-jane_role"
    }

Something about this changeset is breaking it:

Terraform will perform the following actions:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x133f0c6]

goroutine 1 [running]:
github.com/zclconf/go-cty/cty/set.Set.Has(0x0, 0x0, 0x0, 0x28c8d60, 0xc000248740, 0xc00125a176)
	/private/tmp/terraform-20191202-98471-1s2t0au/pkg/mod/github.com/zclconf/[email protected]/cty/set/ops.go:54 +0x26
github.com/zclconf/go-cty/cty.PathSet.Has(...)
	/private/tmp/terraform-20191202-98471-1s2t0au/pkg/mod/github.com/zclconf/[email protected]/cty/path_set.go:53
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).pathForcesNewResource(0xc0006654c0, 0xc000491c50, 0x1, 0x3, 0x3e271a0)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/format/diff.go:1036 +0xb7
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeValueDiff(0xc0006654c0, 0x2ed3aa0, 0xc0000d4369, 0x264ae00, 0xc0013d5c20, 0x2ed3aa0, 0xc0000d4369, 0x270aa00, 0x3e271a0, 0x8, ...)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/format/diff.go:1005 +0xf81
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeAttrDiff(0xc0006654c0, 0xc00142d668, 0x3, 0xc000a4b980, 0x2ed3aa0, 0xc0000d4369, 0x264ae00, 0xc0013d5c20, 0x2ed3aa0, 0xc0000d4369, ...)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/format/diff.go:263 +0x4c2
github.com/hashicorp/terraform/command/format.(*blockBodyDiffPrinter).writeBlockBodyDiff(0xc0006654c0, 0xc00115f780, 0x2ed3ba0, 0xc00060e1f0, 0x26f06c0, 0xc0002a15c0, 0x2ed3ba0, 0xc00060e248, 0x26f06c0, 0xc00046b530, ...)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/format/diff.go:196 +0x5d2
github.com/hashicorp/terraform/command/format.ResourceChange(0xc001512300, 0x0, 0xc00115f780, 0xc00121e860, 0x4d, 0xc001294240)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/format/diff.go:140 +0x601
github.com/hashicorp/terraform/backend/local.RenderPlan(0xc0003c21c0, 0xc00060e030, 0xc00128e010, 0x2ee7220, 0xc00036c260, 0xc00121e860)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/backend/local/backend_plan.go:288 +0x67b
github.com/hashicorp/terraform/command.(*ShowCommand).Run(0xc0003196c0, 0xc0000b01a0, 0x1, 0x1, 0xc000592f20)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/command/show.go:165 +0xb02
github.com/mitchellh/cli.(*CLI).Run(0xc000369680, 0xc000369680, 0xc000539cc0, 0x1)
	/private/tmp/terraform-20191202-98471-1s2t0au/pkg/mod/github.com/mitchellh/[email protected]/cli.go:255 +0x1da
main.wrappedMain(0x0)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/main.go:238 +0xc44
main.realMain(0x0)
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/main.go:102 +0xb4
main.main()
	/private/tmp/terraform-20191202-98471-1s2t0au/src/github.com/hashicorp/terraform/main.go:38 +0x3a

@danieldreier
Contributor

danieldreier commented Dec 4, 2019

I was able to reproduce this using @valorl's instructions on 0.12.17. I ran a git bisect and found that 9a62ab3 is the first bad commit.

@Lirt

Lirt commented Dec 11, 2019

Can we expect a hotfix release?

@pselle
Contributor

pselle commented Dec 11, 2019

@Lirt No hotfix, this went out earlier today in 0.12.18.

@ghost

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 28, 2020