
ecs_task_definition volume root_directory does not take effect #26793

Closed
jonb-ifit opened this issue Sep 13, 2022 · 4 comments · Fixed by #26880
Labels: bug (Addresses a defect in current functionality.), service/ecs (Issues and PRs that pertain to the ecs service.)
Milestone: v5.3.0

Comments

@jonb-ifit

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

terraform 1.2.9

Affected Resource(s)

  • aws_ecs_task_definition

Terraform Configuration Files

      efs_volume_configuration {
        file_system_id     = var.file_system_id
        root_directory     = "/cache"
        transit_encryption = "ENABLED"
        authorization_config {
          iam = "ENABLED"
        }
      }
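
For context, the snippet above sits inside a volume block of an aws_ecs_task_definition resource. Below is a minimal sketch of a surrounding resource for reproduction purposes; the family name, container image, and mount path are assumptions, and only the efs_volume_configuration block comes from the original report:

    # Hypothetical wrapper around the reported configuration; names and
    # sizes are placeholders, not taken from the issue.
    resource "aws_ecs_task_definition" "example" {
      family                   = "example"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = "256"
      memory                   = "512"

      container_definitions = jsonencode([{
        name      = "app"
        image     = "nginx:latest"
        essential = true
        mountPoints = [{
          sourceVolume  = "cache"
          containerPath = "/cache"
        }]
      }])

      volume {
        name = "cache"

        efs_volume_configuration {
          file_system_id     = var.file_system_id
          root_directory     = "/cache"
          transit_encryption = "ENABLED"

          authorization_config {
            iam = "ENABLED"
          }
        }
      }
    }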

Debug Output

The plan output generated this as expected:

      - volume { # forces replacement
          - name = "cache" -> null

          - efs_volume_configuration {
              - file_system_id          = "fs-0ae3f67bc10ac8ed6" -> null
              - root_directory          = "/" -> null
              - transit_encryption      = "ENABLED" -> null
              - transit_encryption_port = 0 -> null

              - authorization_config {
                  - iam = "ENABLED" -> null
                }
            }
        }
      + volume { # forces replacement
          + name = "cache"

          + efs_volume_configuration {
              + file_system_id     = "fs-0ae3f67bc10ac8ed6"
              + root_directory     = "/cache"
              + transit_encryption = "ENABLED"

              + authorization_config {
                  + iam = "ENABLED"
                }
            }
        }

Expected Behavior

The new task definition revision should have the correct root directory.

Actual Behavior

The new task definition revision was created, but it still shows the root directory as /.

This means the next apply will also create a new revision with the same issue.

Steps to Reproduce

  1. Create an aws_ecs_task_definition resource with a volume block whose efs_volume_configuration sets a root directory other than /.
  2. Run terraform apply and observe the plan output. The plan shows that root_directory is being set.
  3. Look at the new task definition revision in the AWS console (or via the CLI, as sketched below). The volume root directory is still set to /.
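
The console check in step 3 can also be done from the command line; a quick way, assuming the task definition family is named example:

    # Show the volumes of the latest registered revision of the family.
    aws ecs describe-task-definition \
      --task-definition example \
      --query 'taskDefinition.volumes'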
@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/ecs Issues and PRs that pertain to the ecs service. labels Sep 13, 2022
@odie5533 (Contributor)

This issue is mentioned in #19549 (and the closed ticket #18010). This makes it impossible to mount two directories to a Docker container using the same aws_efs_access_point. It also silently ignores your configuration, which is not good; it ended up clobbering my deployment files.

@ewbankkit ewbankkit added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Oct 20, 2022
@a087674 commented Jan 13, 2023

> This issue is mentioned in #19549 (and the closed ticket #18010). This makes it impossible to mount two directories to a Docker container using the same aws_efs_access_point. It also silently ignores your configuration, which is not good; it ended up clobbering my deployment files.

@odie5533 - You actually cannot mount multiple directories using access points, as the access point itself is tied to a specific directory. See documentation. However, multiple directories via a Mount Point are possible.
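
As a rough illustration of that mount-point approach, multiple volumes can point at different directories of the same file system when root_directory is used instead of an access point. A hypothetical sketch (variable and directory names are assumptions, not taken from this thread):

    # Two volumes backed by one EFS file system, distinguished by
    # root_directory rather than by access points. These blocks would sit
    # inside an aws_ecs_task_definition resource.
    volume {
      name = "cache"
      efs_volume_configuration {
        file_system_id     = var.file_system_id
        root_directory     = "/cache"
        transit_encryption = "ENABLED"
      }
    }

    volume {
      name = "assets"
      efs_volume_configuration {
        file_system_id     = var.file_system_id
        root_directory     = "/assets"
        transit_encryption = "ENABLED"
      }
    }

Each volume can then be referenced by its own mountPoints entry in the container definition.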

@github-actions github-actions bot added this to the v5.3.0 milestone Jun 12, 2023
@github-actions

This functionality has been released in v5.3.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
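
For anyone upgrading to pick up the fix, the provider version is constrained in the module's required_providers block; a minimal sketch:

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = ">= 5.3.0"
        }
      }
    }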

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 14, 2023