aws_launch_configuration.lc: diffs didn't match during apply #5686

Closed · madAndroid opened this issue Mar 17, 2016 · 2 comments

@madAndroid
Hi,

We've got a problem when updating a module's security_groups parameter: the module contains a launch configuration, and we're passing a comma-separated list of security group IDs into the module.

The module implementation:

module "jenkinsmasterelb" {
  source                      = "../../tfmodules/tf-aws-asg-elb"
  user_data                   = "${template_file.jenkinsmaster_user_data.rendered}"
  asg_name                    = "jenkins"
  product                     = "${var.product}"
  envname                     = "${var.envname}"
  account                     = "${var.account}"
  region                      = "${var.region}"
  domain                      = "jenkins.${format("%s.%s.%s", var.envname, var.product, var.root_presentationdomain)}"
  role                        = "jenkins"
  availability_zones          = "${var.aws_availability_zones}"
  elb_subnets                 = "${module.public_subnet.public_subnet_ids}"
  elb_security_groups         = "${module.sg_common.security_group_id_common},${module.sg_web.security_group_id_web}"
  elb_dns                     = "1"
  route53_zone_id             = "${module.route53_root_presentation.root_zone_id}"
  health_check_grace_period   = "600"
  elb_port                    = "80"
  elb_proto                   = "tcp"
  backend_port                = "80"
  backend_proto               = "tcp"
  backend_target              = "TCP:80"
  key_name                    = "${var.ssh_privkey_name}"
  ami_id                      = "${var.ami}"
  health_check_type           = "EC2"
  instance_type               = "${var.jenkinsmasterelb_instance_type}"
  iam_instance_profile        = "${module.iam_profile_updater.profile_id}"
  security_groups             = "${module.sg_ssh.security_group_id_ssh},${module.sg_common.security_group_id_common},${module.sg_jenkins.security_group_id_web}"
  subnets                     = "${module.private_subnet_nat.private_subnet_ids}"
  asg_min                     = "${var.jenkinsmaster_asg_min}"
  asg_max                     = "${var.jenkinsmaster_asg_max}"
}
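
The source of tf-aws-asg-elb isn't included in this report, but here is a minimal sketch of how such a module presumably consumes the comma-separated string (the variable names are assumptions, inferred from the parameters above and the error output below):

# Hypothetical sketch of the module internals; the real tf-aws-asg-elb
# source is not shown in this issue.
variable "security_groups" {}
variable "ami_id" {}
variable "instance_type" {}
variable "key_name" {}
variable "user_data" {}

resource "aws_launch_configuration" "lc" {
  image_id      = "${var.ami_id}"
  instance_type = "${var.instance_type}"
  key_name      = "${var.key_name}"
  user_data     = "${var.user_data}"

  # split() turns the comma-separated string into the list expected by
  # the set-typed security_groups attribute (Terraform 0.6 syntax).
  security_groups = ["${split(",", var.security_groups)}"]
}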

When one security group in that list is replaced with a different one, the following error is shown when attempting to apply:

    Terraform Version: 0.6.13
    Resource ID: aws_launch_configuration.lc
    Mismatch reason: diff: Destroy; old: false, new: true
    Diff One (usually from plan): *terraform.InstanceDiff{Attributes:map[string]*terraform.ResourceAttrDiff{"security_groups.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}}, Destroy:false, DestroyTainted:false}
    Diff Two (usually from apply): *terraform.InstanceDiff{Attributes:map[string]*terraform.ResourceAttrDiff{"instance_type":*terraform.ResourceAttrDiff{Old:"t2.medium", New:"t2.medium", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "image_id":*terraform.ResourceAttrDiff{Old:"ami-7304bf00", New:"ami-7304bf00", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "security_groups.2001610947":*terraform.ResourceAttrDiff{Old:"", New:"sg-852329e1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Type:0x0}, "key_name":*terraform.ResourceAttrDiff{Old:"itv-online-hubsvc-dev-keypair", New:"itv-online-hubsvc-dev-keypair", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "ebs_optimized":*terraform.ResourceAttrDiff{Old:"false", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "enable_monitoring":*terraform.ResourceAttrDiff{Old:"false", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "security_groups.626409869":*terraform.ResourceAttrDiff{Old:"sg-ccbceea8", New:"sg-ccbceea8", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"3", New:"3", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "security_groups.1054736012":*terraform.ResourceAttrDiff{Old:"sg-c0bceea4", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "associate_public_ip_address":*terraform.ResourceAttrDiff{Old:"false", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"terraform-iuzr5btshjhhrdnp3feybls5jy", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "iam_instance_profile":*terraform.ResourceAttrDiff{Old:"infradev_iam_profile_updater-profile", New:"infradev_iam_profile_updater-profile", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Type:0x0}, "user_data":*terraform.ResourceAttrDiff{Old:"072318a416d6dc21963ca3139736ccd2b72ad853", New:"072318a416d6dc21963ca3139736ccd2b72ad853", NewComputed:false, NewRemoved:false, NewExtra:"#!/bin/bash\n\n#VARS\nexport AWS_DEFAULT_REGION=\"eu-west-1\"\n\n#\n# Hostname\n#\n\nINSTANCEID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id | sed -e 's/-//')\nPRIVIP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)\n\nNEWHOST=\"jenkins-$INSTANCEID\"\nNEWHOSTNAME=$(echo $NEWHOST | sed -e 's/_services/srv/')\nsed -i '/HOSTNAME/d' /etc/sysconfig/network\necho HOSTNAME=$NEWHOSTNAME >> /etc/sysconfig/network\nhostname $NEWHOSTNAME\n\nsed -i /$PRIVIP/d /etc/hosts\necho $PRIVIP $NEWHOST.awseuwest1.infradev.hubsvc.itvcloud.zone $NEWHOST >> /etc/hosts\n\nservice network restart\n\n#\n# Create facts\n#\n\nmkdir -p /etc/facter/facts.d\necho 'location=awseuwest1' > /etc/facter/facts.d/aws.txt\necho 'role=jenkins' >> 
/etc/facter/facts.d/aws.txt\necho 'env=infradev' >> /etc/facter/facts.d/aws.txt\necho 'product=hubsvc' >> /etc/facter/facts.d/aws.txt\necho 'domain=awseuwest1.infradev.hubsvc.itvcloud.zone' >> /etc/facter/facts.d/aws.txt\necho 'region=eu-west-1' >> /etc/facter/facts.d/aws.txt\n\n#\n# If I am a bastion host attach an ENI.If I am not continue...\n#\n\nif grep --quiet bastion /etc/facter/facts.d/aws.txt; then\n /bin/attacheni.sh\nelse\n echo \"i am not a bastion so I don't attach any ENI\"\nfi\n\n#\n# TODO: If I am a consul host attach bootstrap me...\n#\n\n#if grep --quiet bastion /etc/facter/facts.d/aws.txt; then\n#  CONSUL_SERVERS=$(aws --region eu-west-1 ec2 describe-instances \\\n#    --filters \\\n#    \"Name=tag:Service,Values=consulserver\" \\\n#    \"Name=instance-state-name,Values=running\" | \\\n#      jq -r '.Reservations[].Instances[].PrivateIpAddress' \\\n#  )\n#else\n# echo \"i am not a consul server...\"\n#fi\n\nmkdir -p /etc/itv/misc\ncat <<DNS_UPDATE_CONFIG > /etc/itv/misc/dns_update_config.yaml\n---\n:internal_domain: awseuwest1.infradev.hubsvc.itvcloud.zone\n:internal_zone_id: Z1HEOBQ4HRWMTJ\n:public_domain: infradev.hubsvc.itv.com\n:public_zone_id: Z8ZZR4B2WSNKP\n:product: hubsvc\n:service: \n:end_points: \"jenkins\"\n:public_end_points: \n:environment: infradev\nDNS_UPDATE_CONFIG\n\n## Horrible to have to do this, but sometimes metadata/tags are not available immediately...\n/bin/sleep 5\n\n## Create DNS records in R53:\n/usr/bin/manage_dns.rb\n\n#\n# Install hubsvc puppet and run puppet\n#\n\ncat <<PUPPETREPO > /etc/yum.repos.d/artifactory_puppet.repo\n[artifactory_puppet]\nname=artifactory_puppet\nbaseurl=\"https://itvrepos.artifactoryonline.com/itvrepos/hubsvc-manifests-infradev\"\nmetadata_expire=10s\nenabled=1\ngpgcheck=0\nusername=hubsvc-ro\npassword=R4nd0m1z3\nPUPPETREPO\n\nyum -y install hubsvc-puppet\n\npuppet_state_file='/var/lib/puppet/state/last_run_summary.yaml'\nmco_puppet_lockfile='/var/run/puppet.lock'\n\n[ -d /var/lib/puppet/state ] || mkdir -p /var/lib/puppet/state\n\nset +x\n\n[ -f $puppet_state_file ] && rm -fv $puppet_state_file\n[ -f $mco_puppet_lockfile ] && rm -fv $mco_puppet_lockfile\n\n## Run puppet via masterless wrapper\nlogger -s -t hubsvc-infra '### Running puppet ###'\n\ncount=1\n\nuntil [ $count -ge 4 ]; do\n\n  /bin/puppet-masterless-mco\n\n  logger -s -t hubsvc-infra '### Waiting on puppet run to complete ###'\n\n  until [ -f $puppet_state_file ]; do\n    sleep 0.5\n  done\n\n  logger -s -t hubsvc-infra \"### Puppet run number: $count complete ###\"\n\n  ## If we exit cleanly, log that to syslog, exit loop\n  /usr/local/bin/puppet-status.rb \\\n    && logger -s -t hubsvc-infra '### Puppet run finished without any errors ###' \\\n    && break\n\n  ## Our puppet run wasn't clean, remove state/lock files, random sleep,\n  ## increment counter, and re-enter loop\n  logger -s -t hubsvc-infra '### Puppet run exited with non-zero exit code ... retrying ###'\n\n  [ -f $puppet_state_file ] && rm -fv $puppet_state_file\n  [ -f $mco_puppet_lockfile ] && rm -fv $mco_puppet_lockfile\n\n  sleep $(( $RANDOM % 300 ))\n  count=$((count+1))\n\n  ### Silence ourselves in Sensu:\n  #sensu-cli silence $(facter fqdn) --owner root --reason \"This server was just created\" --expire 3600\n\n  if [ $count -ge 4 ]; then\n    logger -s -t hubsvc-infra 'Fatal: Puppet run failed on try number $count ... 
giving up'\n  fi\n\ndone\n\nset -x\n\n#/usr/local/libexec/mcollective/refresh-mcollective-metadata\n#sleep 3\n#\n#systemctl restart mcollective\n\nlogger -s -t hubsvc-infra '#####################'\nlogger -s -t hubsvc-infra '### USERDATA DONE ###'\nlogger -s -t hubsvc-infra '#####################'\n", RequiresNew:false, Type:0x0}, "security_groups.1196199561":*terraform.ResourceAttrDiff{Old:"sg-c2bceea6", New:"sg-c2bceea6", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Type:0x0}}, Destroy:true, DestroyTainted:false}
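
Reading the two diffs side by side: the plan-time diff only recorded security_groups.# as NewComputed with Destroy:false, while the apply-time diff resolved the set into per-element entries, where the swapped group (security_groups.2001610947, new value sg-852329e1) is flagged RequiresNew:true and the removed one (security_groups.1054736012, sg-c0bceea4) is NewRemoved:true, giving Destroy:true. That Destroy false/true disagreement is exactly what the "Mismatch reason" line reports. The usual pattern for letting a launch configuration be replaced cleanly (not confirmed as what tf-aws-asg-elb actually does, and not necessarily a fix for this core bug) is to omit name and set create_before_destroy:

resource "aws_launch_configuration" "lc" {
  image_id        = "${var.ami_id}"
  instance_type   = "${var.instance_type}"
  security_groups = ["${split(",", var.security_groups)}"]

  # Omitting "name" lets Terraform generate one, so a replacement launch
  # configuration can be created before the old one is destroyed whenever
  # an attribute flagged RequiresNew (such as security_groups) changes.
  lifecycle {
    create_before_destroy = true
  }
}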

State is stored in S3, and accessed with terraform_remote_state resources.

The security group module implementation:

## Jenkins specific implementation of sg_web, to allow access from QA and STG
module "sg_jenkins" {
  name        = "jenkins_master"
  source      = "../../tfmodules/tf-aws-sg/sg_web"
  vpc_id      = "${module.vpc_vpn.aws_vpc_id}"
  source_cidr = "${var.aws_vpc_cidr},${var.itv_networks},${var.admin_networks},${terraform_remote_state.qa.output.aws_vpc_cidr_block},${terraform_remote_state.stg.output.aws_vpc_cidr_block}"
}

Interestingly, this doesn't happen when I add a new group instead of changing an existing one, so that's the workaround I've used for now; leaving the existing group in place is not a problem for us.
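
Concretely, the workaround leaves every existing entry of the set untouched and only appends, along these lines ("sg_extra" is a hypothetical module name, not from the original config):

# Appending a new group avoided the mismatch; swapping an existing entry
# triggered it ("sg_extra" stands in for whatever new module is added):
security_groups = "${module.sg_ssh.security_group_id_ssh},${module.sg_common.security_group_id_common},${module.sg_jenkins.security_group_id_web},${module.sg_extra.security_group_id_web}"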

@mitchellh
Contributor

Tracing this in #10192. Same error, but on a newer version.

@ghost

ghost commented Apr 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 20, 2020