aws_route53_record with alias to ELB has changed after 0.7.5 upgrade #9289
Comments
I ran into the same issue after upgrading to 0.7.5 today and actually ran terraform apply. During the apply, Terraform reported that it had made the changes, but nothing actually happened. When running terraform plan again, the same change still shows up.
I'm seeing the same thing on 0.7.5.
Hi all! Sorry this is an issue! Not too sure when this started happening, but it seems like the DNS name in the ELB is not being read using the correct process.
Not too sure if this is 100% the issue, but I will try and set up a test when I have a chance here and correct it if I can reproduce it, unless someone beats me to it. ;)
Hey all, so I think I jumped the gun on this one. I can't reproduce this, neither in an acceptance test nor by manual execution of a config from 0.7.4, an upgrade to v0.7.5, and then a terraform plan. The only situation in which I'm observing the behavior is when the Route53 alias is added through the console. In that situation, the console will prepend dualstack. to the DNS name. Can you give us some more details?
Thanks! Also, here is the config I used to test with - it's basically a slight mod of one of the ELB acceptance tests. Not a 1-to-1, but the important parts should be there.
I appreciate your help, @vancluever!
I have also witnessed your observation of how the AWS console prepends dualstack. to the record. When I use v0.7.5 of Terraform, a terraform plan still shows the change. I suspect that issue #9108 was hiding the underlying issue. Since #9108 was fixed via ae2b8d4, and rolled into v0.7.5, the actual issue (that I'm currently observing) has started happening.
Just to add a little extra data here: my situation is nearly identical to @mioi's.
It's also worth noting that the zone_id (since redacted, so you can't tell from the issue) is changing when this happens. These IDs aren't personal, so it's fine to share them.
This is in addition to the "dualstack" removal piece. A couple of other notes that may match up with others reporting the problem:
I'm wondering if the problem is more about the zone_id property of an existing ELB than about a new DNS entry, by chance?
Thanks for the info @mioi and @jaygorrell! It definitely has been in the back of my mind that this is more AWS-related than Terraform-related, with a dash of #9108 causing some issues here as well. I guess the best way to confirm this would be to roll back to v0.7.4 and run a plan again. Unfortunately, this probably means that my chances of reproducing this from scratch are pretty much nil. :( If the zone ID is changing, it could be that AWS has changed the ELB zone IDs on their side. Hopefully, that's something that AWS support could answer. At the very least they could confirm that the modification that TF is trying to carry out would not be harmful to your service, and then you could just carry out the apply and have the new state written. Hopefully that's enough to address it; if there's a perpetual diff, then that's another issue. PS: I did just check the test config again.
Unfortunately, applying doesn't do anything, as @loivis mentioned. The change comes up in the plan continuously. I'll try to do more testing later today.
I've been seeing a lot of this in our stuff... we noticed it over the last few days, but possibly it was happening since we upgraded to 0.7.5. Two separate examples, showing the change:
These were grabbed by my co-workers and sent over to me, so I don't have much more to report, but it is clear from these that it's happening to us in both us-west-1 and us-west-2. My (unsubstantiated) theory was that something in AWS was changing over to a new hostname scheme and zone, and during the transition:
I found some docs that talk about the per-region ELB hosted zone IDs. Perhaps someone with access to non-useless AWS support could get some thoughts from the support team on this? 😀
I, too, am having this problem. I just imported a previously unmanaged ELB into Terraform today. The old Route53 record was errantly a CNAME record instead of an alias, so Terraform had to delete the record and create a new one.
If I apply this, the change just comes back. I have another ELB with a Route53 alias that is configured very much the same; the only difference is that I created it with 0.7.4. I just ran a plan against it, too. Strange.
This SO post touches on a related issue here, showing it's likely nothing new on the AWS side. It seems the issue here is that Terraform is trying to change the hosted zone for these alias records (Z1H1FL5HABSF5) to the hosted zone for ELBs, which in my case is a different ID. I'm not sure if it's related, but I did see a somewhat related change in 0.7.5. Pinging @stack72 for thoughts on whether it could possibly be related or not.
Still happens with 0.7.6 =(
That's what I believe as well; pinged AWS support about it.
This is hitting us as well, in us-east-1. We can't roll back to a previous version because we are using the new us-east-2 region as well :(
Some more info I found: aws/aws-cli#1761 (comment)
Ok, digging through my LBs, I found that one created on 8/4/2016 returns Z3DZXE0Q79N41H for us-east-1, while one created on 8/31/2016 returns Z35SXDOTRQ7X7K. Z35SXDOTRQ7X7K is the value that Route 53 returns for the ELB alias zone regardless of when the ELB was created.
We are also seeing this issue on the latest Terraform (0.7.7), but only with some of our aliases/ELBs. It seems like it might be related to when they were originally created? I haven't found a workaround yet and even tried deleting the DNS record and having Terraform re-create it. The drift still exists afterwards. I'm wondering if destroying and re-creating the ELB itself would work? Unfortunately, that's not really a viable solution for anything in production.
@pickgr yes, deleting the ELB and re-creating it will correct the issue. I have verified this.
I think this change occurred with the introduction of ALB on August 11.
I tainted all our ELBs and had TF recreate them, but the DNS records still get modified on every run. |
It seems like PR #9704 should have fixed this, in v0.7.8, but the problem persists. |
Closed via #9704 - please let us know if there is still an issue!
@stack72 I don't see how this will fix the issue where the ELB hosted zone is different between what the ELB reports and what Route53 says, as I noted above.
Ok, so we have 2 different issues, it seems - 1 is the prepending of dualstack, and the other is the change of the hosted zone ID. Back to the drawing board on this one.
FTR, the consistent diff of the hosted zone ID is somewhat explained in aws/aws-cli#1761 (comment). It seems like AWS is mid-transition to public hosted zones. The unexpected hosted zone IDs are even listed in their documentation (the per-region ELB table). Also, FTR, I created my ELB some time around Jul or early Aug 2016. For an ELB I created in mid-Sept., I do not see this issue.
@mioi good find. I have a weird twist on this. We have 3 envs where I create two A aliases to the same ELB, and we are getting this error in all 3. Two are in us-east-1 and one in eu-west-1. Of the two in us-east-1, for the one with the newer ELB (June 2016 vs May 2016) only one alias record gets this consistent diff - the other envs get diffs for both.
Possibly relevant post on the Google Groups: https://groups.google.com/forum/#!topic/terraform-tool/WJevMu-vNso
@mioi - you might want to have a look at it and talk to Mitchell about it (assuming this is the same thing).
@jaygorrell - my situation seems to be quite similar to yours - can you have a look at your state file and console for your route53 record and see if they're inconsistent?
Still happening in 0.7.11
This is still happening for me as well. Terraform v0.7.9
You can work around this issue by setting the record's zone_id explicitly, using the per-region ELB hosted zone ID from the AWS docs, instead of the zone ID reported by the ELB.
Thanks @ryansch, that workaround does the job. Not nice, but working ;-)
I also found a "workaround": I essentially re-created the affected ELBs. After that, I was able to update our version of Terraform to 0.7.13 and no longer see the diffs in the plan.
+1 I have been experiencing the same problem in every Terraform version since 0.7.5, up to 0.7.13. The temporary workaround of wrapping your ${aws_elb.elb-name.dns_name} in lower() will stop the name from wanting to become uppercase again; however, there is also a trailing period that AWS appends to the alias name in Route53. Additionally, the hosted zone ID of the ELB still wants to be changed between the correct R53 zone (according to the ELB region doc posted in previous comments) and the current zone ID of the ELB endpoint DNS. Here's an example below:
Due to the number of ELBs I have, re-creating them isn't a feasible workaround. The zone ID changing problem lies mainly in AWS: when running the awscli to describe the above ELB, the output contains the "CanonicalHostedZoneNameID" value which, although it should be the Z35SXDOTRQ7X7K value, is actually Z3DZXE0Q79N41H. I have confirmed that a newly created ELB will have the correct CanonicalHostedZoneNameID ending in 7X7K, but I really hope we won't have to re-create these to make that change retroactively. I believe this zone ID is the zone ID for any ELB built before the rollout of ALBs; any new ELB is placed in a new hosted zone. As for the lowercasing and appending a period to the record, I'm sure that can be done in the ELB resource code.
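For reference, a sketch of the lower()-plus-trailing-period workaround described above (the resource names, domain, and zone variable here are hypothetical):

```hcl
# Hypothetical names throughout; this only illustrates the workaround shape.
resource "aws_route53_record" "elb_alias" {
  zone_id = "${var.route53_zone_id}"
  name    = "app.example.com"
  type    = "A"

  alias {
    # lower() keeps the name from flipping back to the mixed-case ELB value,
    # and the trailing "." matches what Route 53 stores for the target.
    name                   = "${lower(aws_elb.test-elb.dns_name)}."
    zone_id                = "${aws_elb.test-elb.zone_id}"
    evaluate_target_health = false
  }
}
```

As the comment notes, this only quiets the name diff; the zone_id diff remains.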
@mioi what do you mean by "essentially re-created"? I tried tainting the ELB, which destroys and re-creates it, but that didn't fix it for me...
That's interesting that it didn't fix it for you, @jangrewe. When I create a new ELB (regardless of configuration), it gets the correct hosted zone ID ending in 7X7K and doesn't try to modify the DNS entry name. Additionally, I tainted one of our non-essential ELBs, and after it was re-created, it too had the correct hosted zone ID and DNS name, and now its associated R53 record does not want to change on every plan and apply. My issue with this solution is that it involves the destruction and re-creation of every ELB from before the rollout of ALBs. I have a ton of them, many of which I would have to spin up a replacement alongside and change DNS before killing the old one off, to avoid losing traffic during the short downtime and DNS propagation window. For anyone who doesn't mind having to taint each one and re-create them, this seems like a proper solution. I just wonder if AWS has any plans to migrate existing ELB DNS endpoints to the new hosted zone ID. EDIT, to clarify: tainting and re-applying the resource only fixed the hosted zone ID problem. Combine tainting with the lower() around the ELB DNS name, and it fixed both of my issues for the ELB I tainted.
An additional solution that I'm probably going to implement is to make a map of the current hosted zone IDs and, instead of passing the ELB's hosted zone ID as the entry, pass the value from the map. So instead of doing this
where test-elb is the problematic ELB with the wrong-case name and hosted zone ID, do this instead:
Combined with the lower() call and appending a period, this is an effective workaround for me that didn't involve tainting or otherwise modifying the existing ELB. It is, of course, a code change for EVERY ELB-aliased DNS record I have, but I'm more okay with that considering it doesn't mess with the existing ELB. I gathered this map of zone IDs from the AWS doc in previous comments.
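The actual snippets from this comment weren't preserved; a hedged reconstruction of the map-based workaround it describes (variable, resource, and region names are hypothetical; the zone IDs are the per-region values from the AWS documentation cited in earlier comments):

```hcl
# Hard-coded Route 53 hosted zone IDs for ELB aliases, per region
# (values from the AWS endpoints documentation referenced above).
variable "elb_hosted_zone_ids" {
  type = "map"
  default = {
    us-east-1 = "Z35SXDOTRQ7X7K"
    us-west-1 = "Z368ELLRRE2KJ0"
    us-west-2 = "Z1H1FL5HABSF5"
    eu-west-1 = "Z32O12XQLNTSW2"
  }
}

resource "aws_route53_record" "test_elb" {
  zone_id = "${var.route53_zone_id}"
  name    = "app.example.com"
  type    = "A"

  alias {
    name                   = "${lower(aws_elb.test-elb.dns_name)}."
    # Look the zone ID up in the map instead of using aws_elb.test-elb.zone_id.
    zone_id                = "${lookup(var.elb_hosted_zone_ids, var.region)}"
    evaluate_target_health = false
  }
}
```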
Hi all, this is correct - in order to attempt to normalize the alias records, we were dropping the dualstack from the start of it. A dualstack record has a different hosted zone ID. I am on the path to rectifying this right now. Thanks for the understanding, and apologies for the time taken here - I just opened a PR for a data source that will do what @nckslvrmn is doing above. Almost ready here
Thanks @stack72. I also noticed that the casing change is still happening, so my records are all still changing on every plan and apply.
Looking forward to this being resolved... :) It has been there a little while now.
It looks like you can now use aws_elb_hosted_zone_id. See #11027
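A minimal sketch of using that data source (the record and variable names here are hypothetical):

```hcl
# With no arguments, the data source returns the ELB alias hosted zone ID
# for the provider's current region.
data "aws_elb_hosted_zone_id" "main" {}

resource "aws_route53_record" "elb_alias" {
  zone_id = "${var.route53_zone_id}"
  name    = "app.example.com"
  type    = "A"

  alias {
    name                   = "${lower(aws_elb.test-elb.dns_name)}"
    # Use the data source instead of the zone ID reported by the ELB itself.
    zone_id                = "${data.aws_elb_hosted_zone_id.main.id}"
    evaluate_target_health = false
  }
}
```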
I have the same problem for a couple of ELBs in us-west-1. These ELBs were all created in April and May of 2016, and they were given a CanonicalHostedZoneNameID of Z1M58G0W56PQJA. Example:
However, ELBs that were created in June 2016 have a different CanonicalHostedZoneNameID:
This ID corresponds to AWS's Route53 zone ID published here: https://docs.aws.amazon.com/general/latest/gr/rande.html#elb_region (Because the zone IDs for ELBs are publicly available there, there is no need for you to censor them; they only reveal which region your ELB is in.)

My guess is that AWS for some reason switched from Z1M58G0W56PQJA to Z368ELLRRE2KJ0 starting around June 2016. For what reason, I have no clue. I tried using archive.org to get the old zone ID on that page, but at that time they didn't publish that information: https://web-beta.archive.org/web/20160626023414/https://docs.aws.amazon.com/general/latest/gr/rande.html#elb_region But if you google "Z1M58G0W56PQJA", you get a lot of results.

Anyway, to save customers from issues, AWS will transparently translate Z1M58G0W56PQJA to Z368ELLRRE2KJ0 behind the scenes when you create an alias record. I have found that they do this for other types of API calls too, so it seems to be a somewhat common (but undocumented) practice. It would have been nice if AWS had updated the CanonicalHostedZoneNameID property on all old ELBs, but maybe that would have had a chance of disrupting things even more. Has anyone tried opening a ticket with AWS to see if they can update this behind the scenes?

The best solution is probably to use the aws_elb_hosted_zone_id data source that @stack72 created: #11027

Another fix might be to make Terraform ignore changes to the zone_id in certain cases. There would have to be a hard-coded list of old zone IDs. Not sure if it's worth the effort, though. I tried putting something along those lines in my configuration. The final option is to re-create your ELB.

Edit: I made a terrible proof of concept for the fix I suggested above: https://github.com/stefansundin/terraform/commit/549db9ff1a4a7f795d4eb25ee2a73bb23197d969, and it works :) But it's still not pretty.
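The exact snippet from this comment wasn't preserved in the thread; one guess at what such an attempt could look like is a lifecycle ignore_changes block (hypothetical, and this class of fix was reportedly unreliable on the 0.7.x series):

```hcl
resource "aws_route53_record" "elb_alias" {
  # ... record arguments as before ...

  lifecycle {
    # Hypothetical attempt: ignore diffs on the computed alias block entirely.
    # Addressing individual attributes inside the alias set was flaky in
    # Terraform 0.7, so this is illustrative only.
    ignore_changes = ["alias"]
  }
}
```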
I have, yes. The response I got was basically that "things changed" and older ELBs will return an older CanonicalHostedZoneNameID. Unfortunately, this means we're also looking at using one of the workarounds above.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
Terraform v0.7.5
Affected Resource(s)
aws_route53_record
Terraform Configuration Files
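The reporter's configuration was not preserved here; a minimal configuration of the shape under discussion (all names and values hypothetical) would be:

```hcl
# Hypothetical reproduction config: an ELB plus a Route 53 alias pointing at it.
resource "aws_elb" "test-elb" {
  name               = "test-elb"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "aws_route53_record" "elb_alias" {
  zone_id = "${var.route53_zone_id}"
  name    = "app.example.com"
  type    = "A"

  alias {
    name                   = "${aws_elb.test-elb.dns_name}"
    zone_id                = "${aws_elb.test-elb.zone_id}"
    evaluate_target_health = false
  }
}
```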
Debug Output
(too much to redact)
Panic Output
n/a
Expected Behavior
It should return the "No changes. Infrastructure is up-to-date." message when running terraform plan.
Actual Behavior
It shows that it wants to do this:
Steps to Reproduce
1. terraform plan
Important Factoids
It seems like the main difference (as shown in the terraform plan) is that previously, with 0.7.4, it prepended dualstack. to the alias name of the route53 record when you ran terraform apply. However, the .tfstate file did not reflect this. With 0.7.5, it seems to want to remove the dualstack. prefix from the ELB name.
References
n/a