
Interpolation issue when attaching aws_security_group_rule to aws_security_group created in module #190

Closed
anthonycolon25 opened this issue Dec 10, 2019 · 6 comments
Labels: bug, waiting for confirmation (Workaround/Fix applied, waiting for confirmation)
Milestone: 1.1.0

Comments


anthonycolon25 commented Dec 10, 2019

Description:
I have a TF template that creates an aws_security_group within a module and outputs the security group id (sg_id). From my main TF template I am creating an aws_security_group_rule and associating it with the security group created within the module. I am trying to test that TCP port 22 is not open to 0.0.0.0/0, but the test always passes, even when the CIDR block of the aws_security_group_rule is set to 0.0.0.0/0.

If I create the security group within my main TF template and associate the aws_security_group_rule with it, the test correctly fails when the CIDR block is set to 0.0.0.0/0.

It would seem that the aws_security_group_rule is not being associated with the module's aws_security_group. Any ideas?

To Reproduce

  1. I was using an older terraform-compliance version (1.0.34), but the latest version gives the same result. The feature file:
Feature: Security Groups Rules should be used to protect services/instances
  In order to improve security
  As engineers
  We'll use AWS Security Group Rules as a Perimeter Defense

  Scenario Outline: Well-known insecure protocol exposure on Public Network for ingress traffic
    Given I have AWS Security Group defined
    When it contains ingress
    Then it must not have <proto> protocol and port <portNumber> for 0.0.0.0/0

  Examples:
    | ProtocolName | proto | portNumber |
    | HTTP         | tcp   | 80         |
    | HTTPS        | tcp   | 443        |
    | Telnet       | tcp   | 23         |
    | SSH          | tcp   | 22         |
    | MySQL        | tcp   | 3306       |
    | MSSQL        | tcp   | 1433       |
    | NetBIOS      | tcp   | 139        |
    | RDP          | tcp   | 3389       |
    | Jenkins Slave| tcp   | 50000      |

I am attaching my plan.json.

plan.out.json.txt


eerkunt commented Dec 16, 2019

Thanks for the issue 🎉

The Security Group functionality is being re-written right now due to a few more problems.

I will also add this case as a test to ensure it will always work.

A few more days, please :)


eerkunt commented Dec 28, 2019

I have been investigating this a bit while also refactoring the Security Groups module, which is nearly done, but it looks like the problem is not with the Security Groups; the problem is with resource mounting.

Is it possible to share the HCL code for aws_security_group.elb and aws_security_group_rule.elb-ingress-ssh? It looks like module.mysg.sg_id is referencing aws_security_group.elb (just a guess), but I can't find any definition of that module in the plan.out.json.


anthonycolon25 commented Jan 6, 2020

Here are the snippets of HCL code. I have a secgroup module (directory) with a main.tf and an outputs.tf file.

modules/secgroup/main.tf:

resource "aws_security_group" "alb" {
  name   = "submodule-sg"
  vpc_id = "vpc-12345"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    # cidr_blocks = ["0.0.0.0/0"]
    cidr_blocks = ["10.0.0.0/8"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

modules/secgroup/outputs.tf:

output "sg_id" {
  value = aws_security_group.alb.id
}

Here is the relevant HCL code from my main.tf:

provider "aws" {
  region = "us-east-1"
}

module "mysg" {
  source = "./modules/secgroup"
}

## Security Group for ELB
resource "aws_security_group" "elb" {
  name = "Terraform-example-elb"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    # cidr_blocks = ["0.0.0.0/0"]
    cidr_blocks = ["10.0.0.0/8"]
  }
  
#   tags = module.tagsmod.tags
#   tags = module.label.tags
  tags = {
    Name = "allow_all"
    role = "sgrole"
    BusinessUnit = "IT"
  }
}

resource "aws_security_group_rule" "elb-ingress-ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  # security_group_id = aws_security_group.elb.id
  security_group_id = module.mysg.sg_id
}
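To double-check on my side whether the rule ends up pointing at the module's security group, I inspect the references recorded in the plan JSON. A minimal sketch, assuming Terraform's documented JSON plan schema (configuration -> root_module -> resources -> expressions); the rule address matches the HCL above:

```python
import json

def sg_rule_references(plan_path, rule_address="aws_security_group_rule.elb-ingress-ssh"):
    """Return the expressions referenced by the rule's security_group_id.

    Walks the plan's configuration section per Terraform's documented
    JSON plan schema and returns the list of references (empty if the
    rule is not found or has no references).
    """
    with open(plan_path) as f:
        plan = json.load(f)
    for resource in plan["configuration"]["root_module"]["resources"]:
        if resource["address"] == rule_address:
            expr = resource.get("expressions", {}).get("security_group_id", {})
            return expr.get("references", [])
    return []
```

If the returned references include module.mysg.sg_id, the association did make it into the plan, so the tool should be able to mount the rule onto the module's security group.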

Please let me know if you need any additional information.

@eerkunt eerkunt mentioned this issue Jan 6, 2020
@eerkunt eerkunt added this to the 1.1.0 milestone Jan 6, 2020

eerkunt commented Feb 1, 2020

Hi @anthonycolon25,

Can you please have a try with the new release? 🎉

@eerkunt eerkunt added the waiting for confirmation Workaround/Fix applied, waiting for confirmation label Feb 1, 2020

eerkunt commented Feb 16, 2020

Closing the issue due to inactivity. Please do not hesitate to open a new issue if the problem persists.

Thanks!

@eerkunt eerkunt closed this as completed Feb 16, 2020

ghost commented Feb 16, 2020

This issue's conversation is now locked. If you want to continue this discussion please open a new issue.

@ghost ghost locked and limited conversation to collaborators Feb 16, 2020