
Example infrastructure to test with #1

Closed

Conversation

@pmoust commented Dec 14, 2017

  • Download Terraform
  • cp terraform.tfvars.example terraform.tfvars
  • Edit terraform.tfvars. The s3_bucket value must be unique (S3 has a global namespace); see the sketch after these steps.
  • terraform init fetches the AWS and template Terraform providers as well as the infra/ module.
  • terraform apply diffs the resources that exist in AWS against the ones tracked in your local Terraform state and creates whatever is missing.

Run terraform destroy to tear down your test infra.
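For reference, a minimal sketch of that workflow, assuming the variable names s3_bucket and github_handle (the authoritative list lives in terraform.tfvars.example):

```sh
# Fill in terraform.tfvars; the variable names below are assumptions based on
# this description, so check terraform.tfvars.example for the real ones.
cat > terraform.tfvars <<'EOF'
s3_bucket     = "my-unique-elb-logs-bucket"  # must be globally unique
github_handle = "pmoust"                     # used to fetch your SSH public keys
EOF

terraform init     # fetches the AWS/template providers and the infra/ module
terraform apply    # creates the VPC, ELB, EC2 instance, S3 bucket and SQS queue
terraform destroy  # tears everything down when you are done
```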

This PR enables you to create the necessary resources to test this plugin.
It creates a VPC with the networking needed for an ELB fronting an EC2 instance that runs nginx behind it. It also creates an S3 bucket to which the ELB access logs are shipped, and an SQS queue that receives a notification whenever a log object arrives in S3.

SSH public keys are fetched from your github_handle and provisioned on the EC2 instance for the core user.

terraform apply and terraform show output the public IP of the EC2 instance, in case you want to run Logstash from it or edit/debug nginx, as well as the public DNS name of the ELB endpoint.
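For instance, assuming the instance IP is exposed as a Terraform output (hypothetically named ec2_public_ip here):

```sh
# SSH in as the core user; your GitHub public keys were provisioned for it.
ssh core@"$(terraform output ec2_public_ip)"   # output name is an assumption
```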

Throw some traffic at the ELB to generate ELB logs.
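One quick way to do that, assuming the ELB DNS name is exposed as a Terraform output (hypothetically elb_dns_name):

```sh
# Hit the ELB a couple of hundred times so access logs get shipped to S3
# and, in turn, show up as notifications on the SQS queue.
ELB_DNS="$(terraform output elb_dns_name)"     # output name is an assumption
for i in $(seq 1 200); do
  curl -s -o /dev/null "http://${ELB_DNS}/"
done
```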

P.S. It goes without saying that this is not a setup one would ever use in prod, but it should serve the purpose of easily spinning up and tearing down a full-blown AWS environment for your tests.

@pmoust commented Dec 15, 2017

Added some commits to make it easier to get the SQS URL and to deprovision the S3 bucket without errors.

Each action (provision or teardown) takes around 150 seconds, in case you want to add it to a CI pipeline.
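A rough sketch of how that could slot into a CI job (the test command is a placeholder; the flags are the usual non-interactive Terraform ones):

```sh
#!/usr/bin/env bash
set -euo pipefail

terraform init
terraform apply -auto-approve      # ~150s to provision

# Run the plugin's integration tests against the fresh infra here (placeholder).
# bundle exec rspec

terraform destroy -auto-approve    # ~150s to tear down
```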

@jordansissel

terraform destroy doesn't work due to this:

* aws_s3_bucket.bucket: Error deleting S3 Bucket: BucketNotEmpty: The bucket you tried to delete is not empty
        status code: 409, request id: 2ED5A3C78E480F27, host id: DQKvTAy9gH9LFbgaEw7QHDm57OFD1lT9qErOxWq0iHzDtmhIBenE5bO54/FaJAewv9+3FWce6RI= "jordansissel.test2.elasticdev.co"

@pmoust commented Dec 18, 2017

@jordansissel 0ccc3e2 should have addressed that. Could you double check?

@jordansissel

Plan: 0 to add, 0 to change, 14 to destroy.
...
* module.infra.aws_s3_bucket.bucket (destroy): 1 error(s) occurred:

* aws_s3_bucket.bucket: Error deleting S3 Bucket: BucketNotEmpty: The bucket you tried to delete is not empty
        status code: 409, request id: E6ACBBC1687A4FEC, host id: oCqzzhsWMCwjTsdwz4JRsZ0NSsyG1ciFzihKG+rV+PiYv5IqA0doSoK4QHHXT3fez6IUO6P2bL4= "jordansissel.test2.elasticdev.co"
⓿ pork(~/projects/logstash-input-s3sqs) pull/1 !127!
% git rev-parse HEAD
7c4483822d82ffa1ab68dd0880b81d372b315fd1

@pmoust commented Dec 18, 2017

@jordansissel if the bucket was created before 0ccc3e2, then hashicorp/terraform-provider-aws#208 should fix it.

TL;DR: run terraform apply so the existing bucket picks up force_destroy = true, then run terraform destroy.

If that doesn't work, then it looks like a different bug.

We can work around it by adding a helper that prunes the bucket first and then calls terraform destroy.
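Something along these lines could work, assuming the bucket name is exposed as a Terraform output (hypothetically s3_bucket) and the AWS CLI is configured:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Empty the bucket first so terraform destroy cannot fail with BucketNotEmpty.
BUCKET="$(terraform output s3_bucket)"   # output name is an assumption
aws s3 rm "s3://${BUCKET}" --recursive

terraform destroy
```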

@jordansissel

Destroy complete! Resources: 14 destroyed.

👍

I'll move forward with testing.

@elasticsearch-bot

Jordan Sissel merged this into the following branches!

Branch: master (commits a4bf6e4, aa72820, ce6d1a5)

elasticsearch-bot pushed a commit that referenced this pull request Dec 19, 2017