Attach an EBS volume to the monolith #984
Conversation
@@ -20,6 +20,21 @@ resource "aws_instance" "monolith" {
    agent = "${var.connection_agent}"
  }

  ebs_block_device {
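The diff truncates here. As a plausible completion of the inline block: gp2 and 1 TB come from the commit message at the end of this thread, while the device name and delete_on_termination setting are assumptions.

ebs_block_device {
  device_name           = "/dev/xvdf"  # assumed; matches the habfs.sh script below
  volume_type           = "gp2"        # per the commit message
  volume_size           = 1000         # 1 TB, per the commit message
  delete_on_termination = false        # assumption; keeps the volume if the instance is terminated
}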
I don't think this is what we actually want to do here. This will define an EBS volume for us, but its lifecycle is tied to the instance. I think the better route is to create an EBS volume separately and attach it with these resources:
- https://www.terraform.io/docs/providers/aws/r/ebs_volume.html
- https://www.terraform.io/docs/providers/aws/r/volume_attachment.html
I believe this is the correct approach when you need a file system that persists regardless of whether the instance that owns it does.
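A minimal sketch of that approach, with hypothetical resource names and an assumed availability zone:

resource "aws_ebs_volume" "monolith_data" {
  availability_zone = "us-west-2a"  # assumption; must match the instance's AZ
  size              = 1000
  type              = "gp2"
}

resource "aws_volume_attachment" "monolith_data" {
  device_name = "/dev/xvdf"
  volume_id   = "${aws_ebs_volume.monolith_data.id}"
  instance_id = "${aws_instance.monolith.id}"
}

Because the volume is its own resource, replacing the instance does not destroy the volume or its data.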
I spent most of today playing with this. I think having the EBS volume as part of the instance lifecycle is the right thing: if we create the volume outside the instance, we're still treating it as an attachment to that instance, so we're basically doing the same thing anyway, but we run into provisioning timing issues and workarounds. Specifically, this issue is precisely what we have to deal with, and it requires something like this:
resource "template_file" "monolith_userdata" {
template = "${file("${path.module}/scripts/habfs.sh")}"
}
resource "aws_instance" "monolith" {
// ...
user_data = "${template_file.monolith_userdata.rendered}"
// ...
}
Where habfs.sh looks like this:
#!/bin/bash
# Wait for the EBS volume; the attachment can lag behind instance boot.
while [ "$(lsblk -n | grep -c 'xvdf')" -ne 1 ]
do
  echo "Waiting for /dev/xvdf to become available"
  sleep 10
done
mkfs.ext4 /dev/xvdf
mount /dev/xvdf /mnt
# Persist the mount across reboots.
echo '/dev/xvdf /mnt ext4 defaults 0 0' | tee -a /etc/fstab
mkdir -p /mnt/hab
ln -s /mnt/hab /hab
But then we have to handle getting the initial hab install into the right place, since the above runs after the instance is provisioned.
Overall I think having the EBS volume share the lifecycle of the instance is the right thing, and if it gets destroyed in acceptance or other non-production/live environments, it's probably not a big deal.
For production/live, we can create volumes from a snapshot. We should set up some kind of snapshot backup schedule. I created a snapshot earlier for testing, so in case something happens we at least have it available.
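As an illustration of that production path, a sketch of seeding the inline volume from a snapshot; the snapshot ID here is hypothetical:

ebs_block_device {
  device_name = "/dev/xvdf"
  snapshot_id = "snap-0123abcd"  # hypothetical; e.g. the snapshot created for testing
  volume_type = "gp2"
  volume_size = 1000
}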
@jtimberman I think you're right; there doesn't seem to be a nice way to express what we want with Terraform right now, so we should just go ahead and make the EBS volume part of the lifecycle of the instance.
jtimberman force-pushed the branch from f82c9d6 to 8f3eb0a
In the incident postmortem from 2016-06-23 it was identified that we
need to attach an EBS volume.
We'll use gp2 as the type so we can get moar iops, and 1 TB as the size
even though 1.5 TB was used when we remediated the issue, because
anything larger than 1 TB was causing instance creation to fail with
terraform apply.
Signed-off-by: jtimberman [email protected]