CCMSG 1074 - Allow S3 sink to use assume role with aws.access.key.id #552
Problem
Allow the S3 sink to use assume role together with aws.access.key.id. Currently, when the connector is configured with aws.access.key.id, it defaults to BasicAWSCredentials and ignores the assume-role configs.
Solution
As part of the fix, I introduced a check that determines which credentials provider the customer has configured and instantiates the appropriate one.
In addition, the customer no longer has to keep credentials and role information in the .aws/credentials file for the assumed role, because we now create the STS client with the configured aws.access.key.id and aws.secret.access.key. If aws.access.key.id and aws.secret.access.key are not configured on the connector, the default STS client falls back to looking up credentials in the .aws/credentials file.
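The selection logic could look roughly like the sketch below. This is an illustration only, not the actual connector code: it assumes the AWS SDK for Java v1, and the class name CredentialsProviderSelector, the buildProvider signature, and the region parameter are made up for the example.

// Illustrative sketch (not the PR's implementation): pick a credentials provider
// based on which connector configs are present, using the AWS SDK for Java v1.
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;

public class CredentialsProviderSelector {

  public static AWSCredentialsProvider buildProvider(
      String accessKeyId, String secretAccessKey,            // aws.access.key.id / aws.secret.access.key
      String roleArn, String sessionName, String externalId, // s3.credentials.provider.sts.* configs
      String region) {

    boolean hasStaticKeys = accessKeyId != null && !accessKeyId.isEmpty()
        && secretAccessKey != null && !secretAccessKey.isEmpty();

    // Build the STS client from the configured keys when present; otherwise fall
    // back to the default chain, which reads ~/.aws/credentials among other sources.
    AWSSecurityTokenServiceClientBuilder stsBuilder =
        AWSSecurityTokenServiceClientBuilder.standard().withRegion(region);
    if (hasStaticKeys) {
      stsBuilder.withCredentials(new AWSStaticCredentialsProvider(
          new BasicAWSCredentials(accessKeyId, secretAccessKey)));
    } else {
      stsBuilder.withCredentials(new DefaultAWSCredentialsProviderChain());
    }
    AWSSecurityTokenService sts = stsBuilder.build();

    if (roleArn != null && !roleArn.isEmpty()) {
      // Assume-role configs present: the STS client (backed by the static keys,
      // if configured) performs sts:AssumeRole and refreshes the session credentials.
      STSAssumeRoleSessionCredentialsProvider.Builder builder =
          new STSAssumeRoleSessionCredentialsProvider.Builder(roleArn, sessionName)
              .withStsClient(sts);
      if (externalId != null && !externalId.isEmpty()) {
        builder.withExternalId(externalId);
      }
      return builder.build();
    }

    // No role configured: keep the existing behavior and use the static keys directly.
    return new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKeyId, secretAccessKey));
  }
}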
Does this solution apply anywhere else?
If yes, where?
Test Strategy
Testing done:
Steps to test Assume Role:
1- AWS account 1 (pooja-aws-devel).
2- Log in to the AWS Management Console for the DEVEL account.
3- Create a test bucket, e.g. confluent-test-2.
4- Create a policy for the bucket under Homepage -> IAM -> Policies and save it, e.g. read-write-pooja-bucket.
5- Create a role for the bucket: Roles -> Another AWS account -> enter the other AWS account ID: 596201386539 (pooja-aws). Attach the previously created policy, e.g. read-write-pooja-bucket, and save the role, e.g. UpdatePoojaBucket. A new role is created with ARN
arn:aws:iam::596404860876:role/UpdatePoojaBucket
6- Log in to the other account (pooja-aws), account ID: 596404860876.
7- Create a resource group under staging [Pooja-CLFT]: Homepage -> IAM -> Groups.
8- After creating the group, specify a custom inline policy: Permissions tab -> Inline Policies -> Create Group Policy. Policy name, e.g. allow-assume-S3-role-in-pooja-devel. Use the pooja-aws-devel account ID.
Inline policy:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::596404860876:role/UpdatePoojaBucket"
  }
}
9- Add a test user to the pooja-aws account and add the user to the Pooja-CLFT group.
10- Use the generated access key and secret access key in the connector config.
Create an S3 sink connector using the following config; it will push data from the data_4 topic to the confluent-test-2 bucket:
{ "name": "S3SinkConnectorConnector_0", "config": { "s3.credentials.provider.sts.role.arn": "arn:aws:iam::596404860876:role/UpdatePoojaBucket", "s3.credentials.provider.sts.role.session.name": "session", "key.converter.schemas.enable": "false", "s3.credentials.provider.sts.role.external.id": "5544", "value.converter.schemas.enable": "false", "schemas.enable": "false", "name": "S3SinkConnectorConnector_0", "connector.class": "io.confluent.connect.s3.S3SinkConnector", "tasks.max": "1", "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter", "value.converter": "org.apache.kafka.connect.json.JsonConverter", "topics": "data_4", "format.class": "io.confluent.connect.s3.format.json.JsonFormat", "flush.size": "1", "s3.bucket.name": "confluent-test-2", "s3.region": "us-east-2", "s3.credentials.provider.class": " io.confluent.connect.s3.auth.AwsAssumeRoleCredentialsProvider", "aws.access.key.id": "AKIAYVUC7GIV4OIWR564", "aws.secret.access.key": "****************************************", "storage.class": "io.confluent.connect.s3.storage.S3Storage" } }
Release Plan