S3 bucket Error: insufficient items for attribute "destination"; must have at least 1 #9048

Closed
jira-zz opened this issue Jun 19, 2019 · 12 comments
Labels
  • bug: Addresses a defect in current functionality.
  • service/iam: Issues and PRs that pertain to the iam service.
  • service/s3: Issues and PRs that pertain to the s3 service.
  • stale: Old or inactive issues managed by automation, if no further action taken these will get closed.
  • upstream-terraform: Addresses functionality related to the Terraform core binary.

Comments

@jira-zz

jira-zz commented Jun 19, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

I have two buckets, each with a replica bucket. I imported these buckets into the state. Now, when I run terraform plan to update the buckets, I get the error mentioned in the title.
The error message doesn't make sense, and the line it points to changes between runs.
I don't know what is wrong, but when I remove the bucket that the error points at, the plan is generated successfully. Yet the configuration of the other bucket is essentially a copy and paste of the first one.
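
For reference, the imports were done with commands along these lines (reconstructed from the bucket names in the configuration below, so the exact invocations, in particular the -provider flag for the replica buckets, are an assumption):

# Hypothetical reconstruction of the import commands; bucket names come from the
# configuration below, the -provider flag for the replica buckets is assumed.
terraform import aws_s3_bucket.ps-db-backups ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab
terraform import -provider=aws.us aws_s3_bucket.ps-db-backups-replica ps-db-backups-replica-ec8d82b8-8e47-44ed-90f4-73dfc999fac4
terraform import aws_s3_bucket.ps-server-backups ps-server-backups
terraform import -provider=aws.us aws_s3_bucket.ps-server-backups-replica ps-server-backups-replica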

Terraform Version

Terraform v0.12.2

  • provider.aws v2.15.0

Affected Resource(s)

  • aws_s3_bucket

Terraform Configuration Files

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "prod"
  region                  = "eu-west-1"
}

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "prod"
  alias                   = "us"
  region                  = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "ps-terraform-state-ca770e80-f59b-4281-a74c-00c98ab14017"
    key    = "prod/backups.tf"
    region = "eu-central-1"
  }
}



resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-db-backups.arn}"
      ]
    },
    {
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-db-backups.arn}/*"
      ]
    },
    {
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Effect": "Allow",
      "Resource": "${aws_s3_bucket.ps-db-backups-replica.arn}/*"
    }
  ]
}
POLICY
}

resource "aws_iam_policy_attachment" "ps-db-backups-replication" {
  name       = "ps-db-backups-replication"
  roles      = ["${aws_iam_role.ps-db-backups-replication.name}"]
  policy_arn = "${aws_iam_policy.ps-db-backups-replication.arn}"
}

resource "aws_s3_bucket" "ps-db-backups-replica" {
  bucket = "ps-db-backups-replica-ec8d82b8-8e47-44ed-90f4-73dfc999fac4"
  acl    = "private"
  region = "us-east-1"
  provider = "aws.us"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "AES256"
      }
    }
  }
}

resource "aws_s3_bucket" "ps-db-backups" {
  bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-db-backups-replication.arn}"

    rules {
      id     = "ps-db-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-db-backups-replica.arn}"
        storage_class = "GLACIER"
      }
    }
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "AES256"
      }
    }
  }

}


resource "aws_iam_role" "ps-server-backups-replication" {
  name = "ps-server-backups-replication"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "ps-server-backups-replication" {
  name = "ps-server-backups-replication"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-server-backups.arn}"
      ]
    },
    {
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-server-backups.arn}/*"
      ]
    },
    {
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Effect": "Allow",
      "Resource": "${aws_s3_bucket.ps-server-backups-replica.arn}/*"
    }
  ]
}
POLICY
}

resource "aws_iam_policy_attachment" "ps-server-backups-replication" {
  name       = "ps-server-backups"
  roles      = ["${aws_iam_role.ps-server-backups-replication.name}"]
  policy_arn = "${aws_iam_policy.ps-server-backups-replication.arn}"
}

resource "aws_s3_bucket" "ps-server-backups-replica" {
  bucket = "ps-server-backups-replica"
  acl    = "private"
  region = "us-east-1"
  provider = "aws.us"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket" "ps-server-backups" {
  bucket = "ps-server-backups"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA" # or "ONEZONE_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-server-backups-replication.arn}"

    rules {
      id     = "ps-server-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-server-backups-replica.arn}"
        storage_class = "STANDARD"
      }
    }
  }


}


Debug Output

https://gist.github.com/jira-zz/1d9fecf3de5c877bbb41a7f37e7a8a6d

Expected Behavior

Terraform should generate a plan.

Actual Behavior

Error: insufficient items for attribute "destination"; must have at least 1

on main.tf line 142, in resource "aws_s3_bucket" "ps-db-backups":
142: server_side_encryption_configuration {

Steps to Reproduce

  1. terraform plan
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Jun 19, 2019
@aeschright aeschright added the service/s3 Issues and PRs that pertain to the s3 service. label Jun 21, 2019
@mzhaase

mzhaase commented Jul 9, 2019

Not sure whether this is a generic error in the AWS provider or specific to particular resources, but the same thing happens with an aws_cloudwatch_metric_alarm resource:

module.ecs.aws_cloudwatch_metric_alarm.low-cpu-credits-spot: Refreshing state... [id=ecs-autoscaling-group-spot-cpu-credits-below-30]

Error: insufficient items for attribute "input_format_configuration"; must have at least 1

This only happens after upgrading to Terraform 0.12.

@jleeh

jleeh commented Jul 10, 2019

I'm also seeing this on a google_compute_region_instance_group_manager resource.

resource "google_compute_region_instance_group_manager" "this" {
  provider = "google-beta"

  name = "${var.id}-${var.name}-instance-group"

  base_instance_name         = "${var.id}-${var.name}"
  region                     = var.region
  distribution_policy_zones  = data.google_compute_zones.this.names

  target_size  = var.instance_count

  wait_for_instances = true

  version {
    instance_template = google_compute_instance_template.this.self_link
    name              = "latest"
  }

  named_port {
    name = "ui"
    port = 6688
  }
}

Results in:

google_compute_instance_template.this: Refreshing state... [id=<redacted>]

Error: insufficient items for attribute "version"; must have at least 1

Terraform 0.12.3
GCP Beta ~> 2.10

Considering the same error across 3 different resources, is this a generic issue with 0.12+?

@Deblob12

Hi, I noticed that deleting the .tfstate file allows terraform plan to work. Any ideas why?

@robmoss2k

I'm seeing the same with 0.12.4, but only when I forget to do this for beta resources:

terraform import -provider=google-beta
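
For instance, for the google_compute_region_instance_group_manager shown earlier in this thread, that would look roughly like the following (the resource address is taken from that config; the import ID is left as a placeholder):

# Sketch only: re-import the resource through the beta provider so the schema matches.
# Replace <import-id> with the real identifier of the existing instance group manager.
terraform import -provider=google-beta google_compute_region_instance_group_manager.this <import-id>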

@danielelisi

I was experiencing a similar issue with the resource aws_db_security_group. At every terraform apply I would get the following vague error.

Error: insufficient items for attribute "ingress"; must have at least 1

I worked around it by manually deleting those aws_db_security_group resources from the tfstate and then deleting the RDS security groups from the AWS Console web UI. At that point, the next terraform apply recreated those resources and didn't throw any errors.
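
A sketch of the state-surgery half of that workaround, using the terraform state commands rather than editing the tfstate file by hand (the resource address here is hypothetical):

# List the offending resources, then drop them from state so the next apply recreates them.
terraform state list | grep aws_db_security_group
terraform state rm aws_db_security_group.example   # hypothetical resource address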

@mzhaase

mzhaase commented Jul 29, 2019

Any update on this and the related "insufficient items for attribute xyz" issues? This is making it impossible to upgrade to TF 0.12.

@querry43

querry43 commented Aug 1, 2019

I have recently encountered this as well. It works in some workspaces but not others, even though their S3 resources are functionally identical. The configuration is in a module and I have several buckets, so I cannot determine whether a specific bucket configuration is associated with this.

Reverting to previously known-good states does not resolve this. Unfortunately, this has blocked all further Terraform edits and applies.

@bflad
Contributor

bflad commented Aug 20, 2019

There are some upstream Terraform issues currently being fixed to cover this (e.g. hashicorp/terraform#22478). When there is an appropriate Terraform CLI or Terraform AWS Provider release that covers this issue, more information will be added here.

@ruudk

ruudk commented Feb 28, 2020

We're having the same issue. Is there a workaround that we can use until this is fixed?

@janavenkat

+1

@github-actions

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

@github-actions github-actions bot added the stale Old or inactive issues managed by automation, if no further action taken these will get closed. label Feb 20, 2022
@github-actions

github-actions bot commented May 7, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 7, 2022