Add block_types support in terraform resource schema #35

Open
wants to merge 6 commits into
base: main

Conversation

dorkamotorka

SUMMARY

If you run terraform providers schema -json, each Terraform resource schema can contain the attributes key, the block_types key, or both; at least this is how it is for the terraform-lxd-provider. Currently this collection only handles the attributes key.
This PR adds support for reading and sanitizing block_types in this collection.

ISSUE TYPE
  • Feature Pull Request
COMPONENT NAME
  • plugins/modules/terraform.py
  • plugins/module_utils/models.py
ADDITIONAL INFORMATION

Consider the lxd_profile resource, where e.g. name sits under the attributes key but device sits under block_types.

Example resource from terraform-lxd-provider:

resource "lxd_profile" "profile" {
  name     = "some-name"

  device {
    type = "disk"
    name = "root"

    properties = {
      pool = "default"
      path = "/"
    }
  }
}

Before this patch, the device block in the lxd_profile resource was not recognized by this collection, and the collection failed with an error.
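
For context, here is a rough, hand-written sketch (expressed as a Python dict, not copied from the provider's real output) of how such a resource schema is shaped in the terraform providers schema -json output: top-level attributes sit under attributes, while nested blocks such as device sit under block_types and carry their own nested block.

lxd_profile_schema = {
    "block": {
        "attributes": {
            "name": {"type": "string", "optional": True},
        },
        "block_types": {
            "device": {
                "nesting_mode": "set",
                "block": {
                    "attributes": {
                        "name": {"type": "string", "required": True},
                        "type": {"type": "string", "required": True},
                        "properties": {"type": ["map", "string"], "optional": True},
                    },
                },
            },
        },
    },
}

# Sanitization therefore has to walk both keys, not just "attributes".
print(list(lxd_profile_schema["block"]["attributes"]))   # ['name']
print(list(lxd_profile_schema["block"]["block_types"]))  # ['device']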

@dorkamotorka
Author

This should be considered an initial draft; if someone can have a look at the proposed changes and give an opinion, I'm happy to make further changes.

@@ -240,6 +240,7 @@ def workspace_list(self) -> TerraformWorkspaceContext:
continue
elif stripped_item.startswith("* "):
current_workspace = stripped_item.replace("* ", "")
all_workspaces.append(current_workspace)
Author


This change is unrelated to the PR, but I noticed that the current_workspace doesn't get added to all_workspaces.

Member


Thanks for catching this. It would be nice to at least have a unit test for this.
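
A minimal pytest-style sketch of such a test. The parse helper below is a stand-in that mirrors the loop in the diff above; it is not the collection's actual workspace_list() API, just an illustration of the behaviour the test should pin down (the currently selected "* " workspace must end up in the full list).

def parse_workspace_list(output):
    current_workspace = "default"
    all_workspaces = []
    for item in output.splitlines():
        stripped_item = item.strip()
        if not stripped_item:
            continue
        elif stripped_item.startswith("* "):
            current_workspace = stripped_item.replace("* ", "")
            all_workspaces.append(current_workspace)  # the line added by this fix
        else:
            all_workspaces.append(stripped_item)
    return current_workspace, all_workspaces


def test_current_workspace_is_included_in_list():
    current, workspaces = parse_workspace_list("  default\n* feature\n")
    assert current == "feature"
    assert workspaces == ["default", "feature"]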

@softwarefactory-project-zuul

Build failed.

✔️ build-ansible-collection SUCCESS in 4m 05s
ansible-test-sanity-docker-devel FAILURE in 8m 05s (non-voting)
ansible-test-sanity-docker-milestone FAILURE in 8m 00s (non-voting)
ansible-test-sanity-docker-stable-2.13 FAILURE in 8m 18s
ansible-test-sanity-docker-stable-2.14 FAILURE in 7m 07s
ansible-test-units-cloud-terraform-python38 FAILURE in 5m 37s
ansible-test-changelog FAILURE in 2m 35s
✔️ ansible-galaxy-importer SUCCESS in 4m 29s

@softwarefactory-project-zuul

Build failed.

✔️ build-ansible-collection SUCCESS in 3m 25s
✔️ ansible-test-sanity-docker-devel SUCCESS in 9m 09s (non-voting)
✔️ ansible-test-sanity-docker-milestone SUCCESS in 7m 18s (non-voting)
✔️ ansible-test-sanity-docker-stable-2.13 SUCCESS in 9m 12s
✔️ ansible-test-sanity-docker-stable-2.14 SUCCESS in 7m 27s
ansible-test-units-cloud-terraform-python38 FAILURE in 6m 26s
ansible-test-changelog FAILURE in 2m 25s
✔️ ansible-galaxy-importer SUCCESS in 4m 51s

@softwarefactory-project-zuul

Build succeeded.

✔️ build-ansible-collection SUCCESS in 3m 43s
✔️ ansible-test-sanity-docker-devel SUCCESS in 7m 09s (non-voting)
✔️ ansible-test-sanity-docker-milestone SUCCESS in 6m 51s (non-voting)
✔️ ansible-test-sanity-docker-stable-2.13 SUCCESS in 7m 09s
✔️ ansible-test-sanity-docker-stable-2.14 SUCCESS in 7m 20s
✔️ ansible-test-units-cloud-terraform-python38 SUCCESS in 5m 21s
✔️ ansible-test-changelog SUCCESS in 2m 10s
✔️ ansible-galaxy-importer SUCCESS in 4m 26s

@dorkamotorka
Author

I fixed the unit tests as well, but I think they're not sufficient at the moment: the chosen provider registry.terraform.io/hashicorp/local omits the block_types key, so the new behavior can't be tested accurately. My question is why this provider was chosen in the first place (I assume just randomly?) and whether I should modify the unit tests to use e.g. the terraform-lxd-provider, or some other provider that also includes the block_types key.

@dorkamotorka
Author

@sstanovnik @PolonaM Any thoughts on this?

@dorkamotorka
Author

Hey, it's been a long time since this had any movement. Is someone willing to review and merge this before more conflicts pile up?

Member

@gravesm left a comment


Sorry for the delay. I'm happy to work on getting this reviewed and merged. Overall, I think this looks fine. I highlighted a bug in the existing codebase that's replicated in this PR. Though the original bug wasn't your problem, I'd prefer to not add more code that just expands the scope of that bug.

In terms of testing, I don't think the unit tests are actually running any terraform commands, so there's no real significance to using local_file other than it was easy. It would be nice to have an integration test that covers this behavior, though. The easiest way to do that would probably be to use the aws provider, because CI is already set up to run integration tests in aws. I can't think of an aws resource with a nested block marked as sensitive off the top of my head, though.

Comment on lines +331 to +334
for provider_schema in schemas.provider_schemas:
resource_schemas = schemas.provider_schemas[provider_schema].resource_schemas
for resource_schema_name, resource_schema in resource_schemas.items():
if resource_schema_name == resource.type:
Member


I'm looking at this and realizing there's a bug in the existing code that is replicated here. The logic in this function and in is_attribute_sensitive_in_providers_schema() doesn't consider which provider a resource comes from. For example, provider_a.attribute_foo would also match provider_b.attribute_foo if both providers are used in the same config. Whichever provider schema showed up first would win. This should generally be pretty rare, since most providers namespace their resource names already, but it's possible.

I'm not sure there's a need to loop through provider schemas anyway, as the resource already has the provider schema name. The same goes for resource schemas. You should be able to just do:

resource_schemas = schemas.provider_schemas[resource.provider_name].resource_schemas
resource_schema = resource_schemas[resource.type]
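
A short sketch of what that direct lookup could look like, with a guard for resources that are missing from the schema dump; the attribute names follow the snippet above and are not guaranteed to match the collection's actual models.

def resource_schema_for(schemas, resource):
    # The resource already carries its provider schema name, so there is no
    # need to scan every provider schema (the scan is what allows the
    # provider_a / provider_b collision described above).
    try:
        provider_schema = schemas.provider_schemas[resource.provider_name]
        return provider_schema.resource_schemas[resource.type]
    except KeyError:
        # Provider or resource type not present in the schema dump; let the
        # caller decide how to handle that.
        return None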


@dorkamotorka
Author

At the moment I don't have the capacity to move this forward, so feel free to take it over. I'll probably find some more time at the end of this year to finish this, but until then...

@justinc1
Contributor

I wanted to continue this PR, but looking at https://github.com/ansible-collections/cloud.terraform/blob/main/plugins/module_utils/models.py#L226 (main at 1790eb3) I see

        for block_name, block_value in json.get("block", {}).get("block_types", {}).items():

Looks like block_types is now parsed. I was also not able to crash the collection using resource "lxd_profile" "profile" { ...
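
For anyone picking this up, a rough illustration (not the collection's actual models.py code) of why that nesting matters: each block_types entry carries its own nested block, so a full walk of a resource schema has to recurse through block_types as well as read the flat attributes mapping.

def collect_attribute_names(block_json, prefix=""):
    names = []
    block = block_json.get("block", {})
    for attr_name in block.get("attributes", {}):
        names.append(prefix + attr_name)
    for block_name, block_value in block.get("block_types", {}).items():
        names.extend(collect_attribute_names(block_value, prefix + block_name + "."))
    return names

# Applied to the lxd_profile schema sketch earlier in this thread, this yields
# ['name', 'device.name', 'device.type', 'device.properties'].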
