cdk-cli deploy: fail: docker push to ecr unexpected status from PUT request 400 Bad Request #33264
Comments
Was it working prior to 2.177.0 but failing in 2.177.0? A 400 Bad Request can have many different causes. Are you able to simply push a local image to your ECR using your current AWS identity from local? What's happening in CDK is essentially using `$CDK_DOCKER` (which defaults to `docker`). Let me know if you are able to manually push a random image to ECR.
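A manual push along the following lines can rule out credential or network problems. This is a sketch with placeholder values (account ID, region, and repository name are hypothetical); the real login/push only runs when `RUN_ECR_TEST=1` is set, so the script is safe to run as-is:

```shell
# Hypothetical values -- substitute your own account, region, and repo.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=my-test-repo
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE="${REGISTRY}/${REPO}:smoke-test"
echo "$IMAGE"

# Guarded so the sketch is safe anywhere; set RUN_ECR_TEST=1 (with the
# aws CLI and docker installed and credentials configured) to execute.
if [ "${RUN_ECR_TEST:-0}" = "1" ]; then
  aws ecr get-login-password --region "$REGION" |
    docker login --username AWS --password-stdin "$REGISTRY"
  docker pull public.ecr.aws/docker/library/alpine:latest
  docker tag public.ecr.aws/docker/library/alpine:latest "$IMAGE"
  docker push "$IMAGE"
fi
```

If this manual push succeeds but `cdk deploy` still fails, the problem is in how cdk invokes docker rather than in your credentials.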
Hi @pahud, thanks for the response. I reverted to version 2.173.1 and it still fails with 400 Bad Request on the ECR push. I was able to `docker login` successfully. I then tried `cdk deploy` and it failed again with 400 Bad Request. One thing I did notice: when I check my Docker AWS credentials, the JWT in the "Secret" property of that credential JSON looks different in jwt.io. When I manually log in to ECR, the JWT validates in jwt.io, but the "Secret" that gets set when I run `cdk deploy` comes back as an invalid JWT in jwt.io, as it ends in "==". Not sure if this may be causing issues.
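One caveat on the jwt.io check above: a trailing "==" is ordinary base64 padding, not evidence of corruption, and the ECR credential "Secret" is not guaranteed to be a bare JWT in the first place, so jwt.io rejecting it is not conclusive on its own. A small demonstration with a hypothetical payload:

```shell
# A 10-byte payload is not a multiple of 3 bytes, so its base64 form
# needs two padding characters -- the "==" suffix is expected.
PAYLOAD='not-a-jwt!'
SECRET=$(printf '%s' "$PAYLOAD" | base64)
echo "$SECRET"    # prints bm90LWEtand0IQ==

# The value round-trips cleanly despite the "==" suffix.
DECODED=$(printf '%s' "$SECRET" | base64 -d)
echo "$DECODED"   # prints not-a-jwt!
```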
I ended up deleting Docker Desktop and installing docker and colima instead, and it's working fine with colima, with buildkit both enabled and disabled.
+1 to this with Docker Desktop on macOS 15.2, CDK 2.177.0 (build b396961). The same error comes up.
Thank you for the report, @james-g-stream and @wbeardall. While we investigate this issue, can you help us check whether it is only happening in 2.177.0? What about 2.176 or even earlier versions?
@pahud I tried it on 2.173.1 and it is still happening, but only with Docker Desktop.
Possibly related to these issues; I'm looking into it: #30258 (comment). The workaround there is to turn off containerd or change the build setting mentioned in that comment.
@kaizencc yes, they are related. In our case, we get the 400 error for each image asset when cdk tries to push both attestations and images, which sometimes results in 0 MB images. (The behavior seems nondeterministic: sometimes the attestation wins and sometimes the actual image wins. They share the same image tag, so the first one is pushed and the second one throws an error, apparently because of ECR tag immutability.) If we then retry `cdk deploy`, it continues the deployment with those invalid images, because caching means cdk does not push any images, which finally results in ECS or Lambda deployment errors.
Hello all, I hit this issue recently too and solved it by following the steps in another thread related to this one. Key note: changing my Docker BuildX and attestation settings fixed the ECR PUT 400 status code, but deleting the 0-byte images from my CDK's ECR repository is what got my deployments actually working again (I was seeing deploy errors for Lambda functions that did not stabilize). It sounds like the AWS CDK team is aware of this issue and has it on their radar, currently as a P2 sitting in the backlog. It would be great to see this bumped up to P1 and patched, but it does seem to affect only a smallish subset of people, and there is a workaround for it. Hope this helps anyone having this issue!
The game plan is for cdklabs/cdk-assets#342 to be merged and released, with a new cdk-assets version picked up in cdk for this week's release. The 0-byte images either need to be deleted, or your docker assets need to be updated in some way so that the hash changes. The 0-byte images are squatting on your valid hashes :(.
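A cleanup along the following lines can find and delete the squatting zero-byte images. Repository name and region are hypothetical placeholders, and the destructive part only runs when `RUN_ECR_CLEANUP=1` is set:

```shell
REPO=my-test-repo   # hypothetical repository name
REGION=us-east-1    # hypothetical region
echo "scanning ${REPO} for zero-byte images"

# Guarded so the sketch is safe anywhere; set RUN_ECR_CLEANUP=1 (with
# the AWS CLI configured) to actually delete the zero-byte images.
if [ "${RUN_ECR_CLEANUP:-0}" = "1" ]; then
  # describe-images reports imageSizeInBytes; select digests of 0-byte
  # entries and delete them one by one.
  aws ecr describe-images --repository-name "$REPO" --region "$REGION" \
    --query 'imageDetails[?imageSizeInBytes==`0`].imageDigest' --output text |
  tr '\t' '\n' | while read -r digest; do
    [ -n "$digest" ] || continue
    aws ecr batch-delete-image --repository-name "$REPO" --region "$REGION" \
      --image-ids imageDigest="$digest"
  done
fi
```

Deleting by digest also frees the tag, so the next `cdk deploy` can push a fresh image without tripping tag immutability.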
Fixed this issue by disabling containerd under Settings -> General in Docker Desktop. @kaizencc thank you for your help!
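For anyone unsure whether their daemon is using the containerd image store (the setting toggled off above), one way to check is via `docker info`. The check is guarded behind `CHECK_DOCKER=1` so the sketch is safe without a running daemon:

```shell
RESULT="containerd-check-complete"

# Guarded; set CHECK_DOCKER=1 with a running Docker daemon to inspect it.
if [ "${CHECK_DOCKER:-0}" = "1" ]; then
  # With the containerd image store enabled, DriverStatus mentions
  # io.containerd.snapshotter.v1; with the classic store it does not.
  docker info --format '{{ .DriverStatus }}'
fi
echo "$RESULT"
```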
There are various issues in cdk that can be traced back to attestations in docker: aws/aws-cdk#30258, aws/aws-cdk#31549, aws/aws-cdk#33264. cdk-assets cannot work with docker containerd because it will attempt to upload multiple files to the same hash in ECR, and our ECR repository is immutable (by requirement). Docker recently changed their default to turn on containerd, which caused this issue to skyrocket. The hotfix here is to add an environment variable when calling `docker` so that the attestation file is not added to the manifest. We can later look into adding support for including [provenance](https://docs.docker.com/build/metadata/attestations/slsa-provenance/) attestations if there is a need for it. I've chosen to fix this via an environment variable instead of the command option `--provenance=false` because we must keep docker replacements in mind, and at least finch [does not](https://runfinch.com/docs/cli-reference/finch_build/) have a `provenance` option at the moment. In addition to the unit test that shows the env variable exists when `docker build` is called, I have also verified that this solves the issue in my local setup with a symlinked `cdk-assets`.
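For users who want to apply the same idea manually, a sketch of the two mechanisms mentioned in the PR text. The environment variable shown is Buildx's documented `BUILDX_NO_DEFAULT_ATTESTATIONS`; whether the cdk-assets fix uses exactly this variable is an assumption here, and the build itself is guarded behind `RUN_BUILD=1`:

```shell
# Disable default provenance/SBOM attestations so buildx does not add an
# attestation manifest alongside the image manifest.
export BUILDX_NO_DEFAULT_ATTESTATIONS=1
echo "BUILDX_NO_DEFAULT_ATTESTATIONS=$BUILDX_NO_DEFAULT_ATTESTATIONS"

# The per-invocation flag alternative (not chosen by the fix, per the PR
# text, because drop-in docker replacements like finch lack the flag).
if [ "${RUN_BUILD:-0}" = "1" ]; then
  docker buildx build --provenance=false -t my-image:latest .
fi
```

The env-var form has the advantage that cdk-assets can set it once for whatever binary `$CDK_DOCKER` points at, without knowing that binary's flag surface.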
Comments on closed issues and PRs are hard for our team to see.
The latest cdk-assets is required in cdk to mitigate an ECR upload issue. It includes the following fix: cdklabs/cdk-assets#342. The following issues are related: aws#30258, aws#31549, aws#33264. I am keeping aws#31549 open as it is still valid; this [feature request](cdklabs/cdk-assets#348) tracks the work to make cdk-assets compatible with containerd. Closes aws#30258 and closes aws#33264.
Describe the bug
When trying to run `cdk deploy` with an `ecs.ContainerImage.fromAsset()`, I'm getting `fail: docker push to ecr unexpected status from PUT request 400 Bad Request`. I've uninstalled and reinstalled Docker and deleted the image asset it references from ECR, but nothing seems to be working. CDK version 2.177.0 (build b396961).
Regression Issue
Last Known Working CDK Version
No response
Expected Behavior
cdk successfully pushes the Docker image to ECR.
Current Behavior
The docker push returns 400 Bad Request, failing `cdk deploy`.
Reproduction Steps
yarn cdk deploy
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.177.0 (build b396961)
Framework Version
No response
Node.js Version
v20.15.0
OS
macOS 14.5
Language
TypeScript
Language Version
No response
Other information
No response