cmd/devp2p: submit Route53 changes in batches #20524
Conversation
This works around the 32k RDATA character limit per change request.
Code LGTM. I do have a question around atomicity. Previously we didn't care about the order of the changes because we just applied everything or nothing. With the chunks introduced in this PR, we need to ensure that failures don't leave us in an inconsistent state. I'm not sure how the rest of the code works currently, but I guess the first entries in the batches should be adding new nodes, then the switchover to the root, then the deletions. This way a crash at any point will still keep things consistent. Is the rest of the route53 code structured to order it like this?
Yes, the changes should be ordered like that, but there is no explicit sorting. I will add that.
Maybe it is also time to add a test, this code is getting complicated...
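The crash-safe ordering discussed above (create new leaf records first, then switch over the root, then delete stale records) can be sketched with a simple comparator. This is a hypothetical illustration, not the PR's actual code: the `change` type, the action names, and the root-last tie-break are assumptions standing in for the AWS SDK types the real deployer uses.

```go
package main

import (
	"fmt"
	"sort"
)

// change is a simplified stand-in for one Route53 change request entry
// (hypothetical type; the real code uses the AWS SDK's route53.Change).
type change struct {
	action string // "CREATE", "UPSERT" or "DELETE"
	name   string // DNS record name
}

// sortChanges orders changes so that a crash partway through the batches
// leaves the tree consistent: new records are created first, then the
// root is switched over, and stale records are deleted last.
func sortChanges(changes []change) {
	score := map[string]int{"CREATE": 1, "UPSERT": 2, "DELETE": 3}
	sort.Slice(changes, func(i, j int) bool {
		if changes[i].action != changes[j].action {
			return score[changes[i].action] < score[changes[j].action]
		}
		return changes[i].name < changes[j].name
	})
}

func main() {
	cs := []change{
		{"DELETE", "old.example.org"},
		{"UPSERT", "example.org"}, // root switchover
		{"CREATE", "new.example.org"},
	}
	sortChanges(cs)
	for _, c := range cs {
		fmt.Println(c.action, c.name)
	}
}
```

With this ordering, a failure after any batch leaves either the old tree fully intact or the new tree reachable, with only harmless extra records pending deletion.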
Force-pushed from 763a4c1 to b9c87b0.
Turns out the changes were not sorted correctly. Now they are, and we have a test. @karalabe PTAL.
@@ -0,0 +1,124 @@
package main
Please add the copyright header.
Apart from a missing file header, LGTM
Deletion requests are not accepted unless both TTL and value match the existing record. Also improve tests to catch this case.
Added copyright header and one more bug fix.
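The deletion bug fixed above comes from a Route53 rule: a DELETE is rejected unless its TTL and value exactly match the stored record. A hedged sketch of what a correct diff has to do, using hypothetical simplified types in place of the AWS SDK's record sets:

```go
package main

import "fmt"

// recordSet is a simplified stand-in for a Route53 resource record set
// (hypothetical type; the real deployer uses the AWS SDK).
type recordSet struct {
	ttl   int64
	value string
}

// change pairs an action with the record set it submits.
type change struct {
	action string
	name   string
	rec    recordSet
}

// computeChanges diffs desired records against existing ones. The key
// detail: a DELETE must echo the *existing* record's TTL and value
// exactly, or Route53 rejects the whole change request.
func computeChanges(existing, desired map[string]recordSet) []change {
	var changes []change
	for name, rec := range desired {
		if old, ok := existing[name]; !ok || old != rec {
			changes = append(changes, change{"UPSERT", name, rec})
		}
	}
	for name, rec := range existing {
		if _, ok := desired[name]; !ok {
			// Submit the stored TTL and value, not the desired state.
			changes = append(changes, change{"DELETE", name, rec})
		}
	}
	return changes
}

func main() {
	existing := map[string]recordSet{"old.example.org": {3600, `"enrtree-branch:..."`}}
	desired := map[string]recordSet{"new.example.org": {300, `"enrtree-branch:..."`}}
	for _, c := range computeChanges(existing, desired) {
		fmt.Println(c.action, c.name, c.rec.ttl)
	}
}
```

A test that only compares record names would miss this, which is why the commit also strengthens the tests to check TTL and value on deletions.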
Seems like we're not measuring RDATA size the same way the AWS API does. Just use a lower limit to make it work.
And one more bug fix.
This change works around the 32k RDATA character limit per change request and fixes several issues in the deployer which prevented it from working for our production trees.