
S3 settings on aws_dms_endpoint conflict with "extra_connection_attributes" #8009

Closed
vitorbaptista opened this issue Mar 19, 2019 · 21 comments · Fixed by #16827
Labels
bug Addresses a defect in current functionality.
@vitorbaptista
Contributor

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

v0.11.13

Affected Resource(s)

  • aws_dms_endpoint.s3_settings

Terraform Configuration Files

resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id = "s3-raw"
  engine_name = "s3"
  endpoint_type = "target"
  extra_connection_attributes = "dataFormat=parquet;"
  s3_settings {
    service_access_role_arn = "${aws_iam_role.role.arn}"
    bucket_name = "${var.s3_bucket}"
    bucket_folder = "${var.raw_data_path}/dms"
    compression_type = "GZIP"
  }
}

Expected Behavior

The "extra connection attributes" in the DMS endpoint should be:

bucketFolder=data/raw/dms;bucketName=MY_BUCKET_NAME;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\n;dataFormat=parquet;

Actual Behavior

The "extra connection attributes" in the DMS endpoint are:

bucketFolder=data/raw/dms;bucketName=MY_BUCKET_NAME;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\n;

Notice the lack of dataFormat=parquet

Steps to Reproduce

  1. terraform apply
@ewbankkit
Contributor

Maybe something to do with AWS SDK v1.18.4 released yesterday?

S3 Endpoint Settings added support for 1) Migrating to Amazon S3 as a target in Parquet format 2) Encrypting S3 objects after migration with custom KMS Server-Side encryption. Redshift Endpoint Settings added support for encrypting intermediate S3 objects during migration with custom KMS Server-Side encryption.

@vitorbaptista
Contributor Author

Not sure if it makes any difference, but I haven't upgraded my AWS SDK, and I installed Terraform about a week ago, so I'm not running this new code.

@hari-ganagavaram

I'm seeing this issue as well.
I'm using the below version of terraform:
Terraform v0.11.13

  • provider.aws v2.6.0

@alex2308

Updated to v2.8.0 and it's still an issue.

@ewbankkit
Contributor

The extra connection attributes are not written to the DMS API (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L283-L285) or read from it (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L566) when the engine type is s3, although, confusingly, they do get updated (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L391-L393) for all engine types.
If we handle extra_connection_attributes consistently for all engine types, the diff handling will have to account for the fact that the value returned by the API (bucketFolder=data/dms;bucketName=bucket_name;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\\n;dataFormat=parquet;) includes not just the value(s) specified in the Terraform code (dataFormat=parquet;) but also attributes set in the s3_settings block.
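One consequence of the API merging s3_settings into the returned attribute string is a perpetual diff once the attributes do get written. A minimal sketch of a mitigation, using Terraform's standard lifecycle ignore_changes meta-argument (this is a workaround idea, not part of any linked fix), would be:

```hcl
resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id                 = "s3-raw"
  engine_name                 = "s3"
  endpoint_type               = "target"
  extra_connection_attributes = "dataFormat=parquet;"

  s3_settings {
    service_access_role_arn = "${aws_iam_role.role.arn}"
    bucket_name             = "${var.s3_bucket}"
    bucket_folder           = "${var.raw_data_path}/dms"
    compression_type        = "GZIP"
  }

  # The API returns the s3_settings values merged into the attribute
  # string, so ignore it to avoid a diff on every plan.
  lifecycle {
    ignore_changes = ["extra_connection_attributes"]
  }
}
```

This trades drift detection on that one argument for a clean plan, so it only suits configurations where the attributes are not expected to change out of band.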

@mstykow

mstykow commented May 11, 2019

Based on what @ewbankkit mentions, an intermediate workaround is to run an initial terraform apply and then change one of your connection attributes (use some irrelevant parameter such as maxFileSize). A second terraform apply will then update the missing connection attributes.
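A sketch of that two-apply workaround, assuming the original configuration above (the maxFileSize value here is arbitrary and only serves to force an update):

```hcl
resource "aws_dms_endpoint" "s3_raw" {
  # ... other arguments as in the original configuration ...

  # Step 1: run `terraform apply` with the original attributes.
  # Step 2: append a harmless attribute such as maxFileSize and apply
  # again; the update path does write extra connection attributes, so
  # the second apply pushes dataFormat=parquet through as well.
  extra_connection_attributes = "dataFormat=parquet;maxFileSize=1048576;"
}
```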

@aeschright aeschright added the needs-triage Waiting for first response or review from a maintainer. label Jun 24, 2019
@aeschright aeschright added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Jul 1, 2019
@rory-lamendola

@aeschright is this being actively worked on?

@mchudoba
Contributor

mchudoba commented Jan 3, 2020

Are there any new updates on this issue? I am seeing this in Terraform 12 as well.

Terraform: v0.12.9
provider.aws: v2.42

lyle-nel pushed a commit to lyle-nel/terraform-provider-aws that referenced this issue Jan 13, 2020
…ed API changes in aws-sdk-go. This is so that we can avoid using extra_connection_attributes.
@lyle-nel

@rory-lamendola @mchudoba Please upvote the pull request if you want it prioritised. It is not a full solution, but it will serve most of the use-cases out there by simply extending the supported s3_settings attributes to cover attributes that would otherwise need to be defined in extra_connection_attributes.

lyle-nel pushed a commit to lyle-nel/terraform-provider-aws that referenced this issue Jan 27, 2020
…ss it has a default or is explicitly populated.
@tuannguyen0901

Anyone working on this? It seems the fix has not passed QA for a while.

@rostanin

Folks, is anyone working on it?

@lyle-nel

Folks, is anyone working on it?

Right now I am allocated to a different project at my company. So I am not working on this currently.

@rostanin

Decided to use the CloudFormation plugin instead.

@tsaluszewski

Anyone working on it? We plan to work around it by updating just extra_connection_attributes with an AWS CLI bash script.
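A CLI-based workaround like that could be wired into Terraform itself via a null_resource with a local-exec provisioner. This is a hypothetical sketch (the resource name fix_eca is invented; aws_dms_endpoint does export endpoint_arn, and `aws dms modify-endpoint` accepts --extra-connection-attributes):

```hcl
resource "null_resource" "fix_eca" {
  # Re-run whenever the endpoint is recreated.
  triggers = {
    endpoint_arn = "${aws_dms_endpoint.s3_raw.endpoint_arn}"
  }

  # Push the attributes that the provider drops on create.
  provisioner "local-exec" {
    command = "aws dms modify-endpoint --endpoint-arn ${aws_dms_endpoint.s3_raw.endpoint_arn} --extra-connection-attributes 'dataFormat=parquet;'"
  }
}
```

Like most local-exec workarounds, this requires the AWS CLI on the machine running Terraform and is invisible to plan/diff, so it should be removed once the provider bug is fixed.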

@kstanaszek

Any update on this issue? Please fix it.

@rambej0201

Hi everybody, on one of my data pipelines I am trying to implement time-based partitioning for the S3 target endpoint, similar to the TimeBasedPartitioner in io.confluent.connect.s3.S3SinkConnector.

Is there any related issue open?

Thanks
Raghu

@simonB2020

Yet another user wondering why this issue is being ignored.
It's been 18 months since it was first raised, without a single indication of it even being recognised as a bug!

@Mistawes

Mistawes commented Sep 15, 2020

Also hitting this issue in production, it's resulting in files not being encrypted when they land in the S3 bucket, due to the encryption properties being dropped.

@breathingdust
Member

Hi all! 👋 Just wanted to direct you to our public roadmap for this quarter (Nov-Jan) in which this item has been mentioned.

Due to the significant community interest in resolving this issue, we will be looking at merging existing contributions soon.

We appreciate all the contributions and feedback thus far.

@ghost

ghost commented Jan 15, 2021

This has been released in version 3.24.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
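With provider v3.24.0 the S3 settings gained first-class arguments, so (assuming the data_format argument added by the linked fix, and Terraform 0.12+ syntax since 3.x providers require it) the original configuration can drop the conflicting extra_connection_attributes entirely:

```hcl
resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id   = "s3-raw"
  engine_name   = "s3"
  endpoint_type = "target"

  s3_settings {
    service_access_role_arn = aws_iam_role.role.arn
    bucket_name             = var.s3_bucket
    bucket_folder           = "${var.raw_data_path}/dms"
    compression_type        = "GZIP"
    data_format             = "parquet"
  }
}
```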

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@ghost

ghost commented Feb 12, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Feb 12, 2021