Building images for multi-arch with --load parameter fails #59
The limitation is temporary, as the daemon should get support for loading multi-arch images with moby/moby#38738, so I'm a bit hesitant to add a custom implementation for it at the moment.
Hi, this issue is 5 months old and the linked issue (moby/moby#38738) is still in Draft. Any news?
Hi, I want to build and execute some local tests before pushing the image to the repository. Build runs fine, but at the end...
Here's my environment: docker version
docker buildx version
Depending on the use case, it may be sufficient when tests run on the runner's platform only. If so, this issue can be avoided by omitting the
@git-developer The current limitation is a combination of
you should use
The pipeline in the project I'm working on runs tests before pushing. Do I have to build the image once for the test and then a second time for production? Is there any workaround to run a multi-arch container before pushing it to a repo?
@Filius-Patris Your options are:
Build ARM docker images in the release workflow.

Changes:
- Add new env keys `DOCKER_MULTIARCH` and `DOCKER_PUSH`. When set, multi-arch images are built and pushed to the registry. See docker/buildx#59 for why they must be pushed to the registry.
- Use `crazy-max/ghaction-docker-buildx`, which is already configured for cross-compilation (using QEMU), instead of setting it up manually.
- `buildx` now provides the default global platform build arguments. (See: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)

Follow-up:
- Release the CLI binary for the ARM architecture. The docker images resulting from these changes are already built for ARM; we still need to retrieve those binaries and name them correctly as part of the GitHub Release artifacts.

Signed-off-by: Ali Ariff <[email protected]>
I also landed here and I'm not sure what the options are; on a practical level, juggling `docker build` commands and `docker buildx` to get around the issue seems like a hack.
How would I tag an image with a version and latest?
I'm hitting this as well... any updates?
…ocker image (#4721) (#4739)
* feat(docker): add HEALTHCHECK to facilitate testing container startup
* feat(build): add orca-integration module to exercise the just-built docker image
* feat(gha): run integration test in pr builds. Multi-arch with --load doesn't work, so add a separate step using the local platform to make an image available for testing. See docker/buildx#59
* feat(gha): run integration test in branch builds

(cherry picked from commit b360ad5)
Co-authored-by: David Byron <[email protected]>
I just want to have the ability to load AND build multi-arch in our pipelines.
It's doable in two steps (build a single-arch image, test, then build the multi-arch image); it just feels like a waste.
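The two-step flow mentioned in this thread can be sketched as a small script. This is only an illustration: the image name, platform list, and self-test command are placeholders, and `DRY_RUN=1` makes the script print the docker commands instead of executing them.

```shell
# Sketch of the two-step CI flow (illustrative names, not from this thread).
# DRY_RUN=1 prints the docker commands instead of executing them.
IMAGE="${IMAGE:-myorg/myimage:ci}"
PLATFORMS="${PLATFORMS:-linux/amd64,linux/arm64}"

run() { if [ -n "${DRY_RUN:-}" ]; then echo "+ $*"; else "$@"; fi; }

ci_build_flow() {
  # Step 1: build for the runner's native platform only; --load works here.
  run docker buildx build --load -t "$IMAGE" .
  # Step 2: run tests against the loaded single-arch image.
  run docker run --rm "$IMAGE" true
  # Step 3: rebuild for all target platforms and push (no --load needed).
  run docker buildx build --platform "$PLATFORMS" --push -t "$IMAGE" .
}

# Show the command sequence without executing it:
DRY_RUN=1 ci_build_flow
```

The second build is mostly cache hits for the native platform, which softens (but does not remove) the "building twice" cost.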
Test docker image (spinnaker#6206) (spinnaker#6227)
* fix(web): replace deprecated spring.profiles in configuration with spring.config.activate.on-profile to remove warnings like:
  2024-05-01 21:29:23.746 WARN 1 --- [ main] o.s.b.c.config.ConfigDataEnvironment : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 375:13]
  (and the same warning for origins 363:13, 350:13, and 312:13)
  See https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-Config-Data-Migration-Guide#profile-specific-documents.
* feat(docker): add HEALTHCHECK to facilitate testing container startup
* feat(build): add clouddriver-integration module to exercise the just-built docker image
* feat(gha): run integration test in pr builds. Multi-arch with --load doesn't work, so add a separate step using the local platform to make an image available for testing. See docker/buildx#59
* feat(gha): run integration test in branch builds
* fix(docker): reduce the chance for false positives in the health check, in case the health check contains more detailed information where one check could report UP but the overall status is down/out of service/etc. See https://docs.spring.io/spring-boot/docs/2.6.15/reference/html/actuator.html#actuator.endpoints.health for more.

(cherry picked from commit 9ea2224)
Co-authored-by: David Byron <[email protected]>
Fix #245 - Build ARM64 image.
* Attempt to work around: docker/buildx#59
* Only build arm64 on release.

Co-authored-by: Laurent Quérel <[email protected]>
Due to Docker limitation docker/buildx#59 Signed-off-by: Anna Rift <[email protected]>
Refactor the Docker nightly testing script to optionally also push release-tagged images. Configured by the `RELEASE_VERSION` environment variable, assumed to be set via a Jenkins parameter. If set, the script pushes the images tagged as `latest` and `$RELEASE_VERSION`. All nightly builds/pushes still run before release-tagged pushes, so we don't push a release if any image variants are broken. The script aborts without building anything if `RELEASE_VERSION` is set but we're not on the appropriate release branch.

As part of this PR I attempted to change our process to test the image before pushing, rather than build, push, then test. However, on `chapelmac-m1` this ran into docker/buildx#59, so I reverted it. Since we only push release images after all nightly images, we're still safe from pushing a broken release-tagged image. Noted this limitation and the requirement to build nightly images first in a comment.

Also includes:
- delete unused `util/cron/publish-docker-images.bash`, which didn't contain as much functionality as `test-docker.bash`
- more comments and some small refactors for clarity

Resolves Cray/chapel-private#6743. [reviewed by @tzinsky, thanks!]

Associated pre-merge tasks:
- [x] merge corresponding CI config adjustments PR https://github.hpe.com/hpe/hpc-chapel-ci-config/pull/1291
- [x] disable unused previous job https://chapel-ci.us.cray.com/job/publish-docker-images/
- [x] update Docker release best-practices to tell you to use this script (https://github.hpe.com/hpe/hpc-chapel-docs/commit/d159f7e8a0be122660cd8af39e3b49ddb8b59486)
- [x] delete temporarily private Docker repositories created for testing (`chapel-test{,-gasnet,-gasnet-smp}`)

Testing:
- [x] manual run in non-release mode still works
- [x] manual run in release mode (temporarily modified to push to a scratch repo)
I just want to have the ability to load AND build multi-arch in our pipelines.
The fact that build AND push is a single action in GHA is insane, btw.
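For what it's worth, build, test, and push can be kept as separate steps in a GitHub Actions workflow. The sketch below uses the documented `load`, `push`, `platforms`, and `tags` inputs of docker/build-push-action; the image names and the test command are illustrative, not from this thread.

```yaml
# Sketch: separate build/test/push steps (illustrative names).
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- name: Build single-arch image for testing
  uses: docker/build-push-action@v6
  with:
    load: true            # --load works for a single platform
    tags: myorg/myimage:test
- name: Test the image
  run: docker run --rm myorg/myimage:test true
- name: Build multi-arch and push
  uses: docker/build-push-action@v6
  with:
    platforms: linux/amd64,linux/arm64
    push: true
    tags: myorg/myimage:latest
```

This still builds twice, but the second build largely reuses the cache for the native platform.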
Just came here as I was using the GitHub Action. May I propose adding a more helpful error message pointing to some docs, or even to a specific, chosen comment from this thread, to help people solve this instead of Googling and skimming through ~300 comments (and adding more spam to this already busy issue)?
So, this scenario is supported now if the docker engine is configured to use the containerd image store.
@crazy-max @thompson-shaun wondering if we also should provide information about that (and/or add a link to the documentation?)

❌ Without the containerd image store enabled, it still produces an error:

docker buildx create --use
competent_tesla
echo -e 'FROM alpine\nRUN echo foo > world\n' | docker buildx build --platform linux/arm64,linux/amd64 --load -t myimage -
[+] Building 6.3s (1/1) FINISHED docker-container:gallant_mendel
=> [internal] booting buildkit 6.3s
=> => pulling image moby/buildkit:buildx-stable-1 5.9s
=> => creating container buildx_buildkit_gallant_mendel0 0.4s
ERROR: docker exporter does not currently support exporting manifest lists

✅ With the containerd image store enabled, the docker engine is able to store multi-platform images, and as such can load them when you're using a separate container builder.

docker buildx create --use
competent_tesla
echo -e 'FROM alpine\nRUN echo foo > world\n' | docker buildx build --platform linux/arm64,linux/amd64 --load -t myimage -
[+] Building 17.3s (12/12) FINISHED docker-container:competent_tesla
=> [internal] booting buildkit 8.6s
=> => pulling image moby/buildkit:buildx-stable-1 8.1s
=> => creating container buildx_buildkit_competent_tesla0 0.5s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 71B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/alpine:latest 7.8s
=> [linux/arm64 internal] load metadata for docker.io/library/alpine:latest 7.5s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/arm64 1/2] FROM docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 0.3s
=> => resolve docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 0.0s
=> => sha256:52f827f723504aa3325bb5a54247f0dc4b92bb72569525bc951532c4ef679bd4 3.99MB / 3.99MB 0.2s
=> => extracting sha256:52f827f723504aa3325bb5a54247f0dc4b92bb72569525bc951532c4ef679bd4 0.0s
=> [linux/amd64 1/2] FROM docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 0.5s
=> => resolve docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 0.0s
=> => sha256:1f3e46996e2966e4faa5846e56e76e3748b7315e2ded61476c24403d592134f0 3.64MB / 3.64MB 0.4s
=> => extracting sha256:1f3e46996e2966e4faa5846e56e76e3748b7315e2ded61476c24403d592134f0 0.1s
=> [linux/arm64 2/2] RUN echo foo > world 0.1s
=> [linux/amd64 2/2] RUN echo foo > world 0.1s
=> exporting to oci image format 0.2s
=> => exporting layers 0.0s
=> => exporting manifest sha256:4d0f80eb6bfbad75574a8f065b466b0e5c889958d319483299a633d6a3f38cef 0.0s
=> => exporting config sha256:a5234e3bad6fffa52363de0056874cf063592db0c3a233633392f4c72973ff6a 0.0s
=> => exporting attestation manifest sha256:96dd7b7a7ef12fca0f6e90b831423ee15869a80f3964c0037401f6f488d12033 0.0s
=> => exporting manifest sha256:c10ae10bc386d455c0a1927f084dbefc443189cc08ece140c60c7f200651da95 0.0s
=> => exporting config sha256:83e4aea73a3c6887b176659da35b8ae71f1b0cec1a4c8fa4ec65c546942f445f 0.0s
=> => exporting attestation manifest sha256:130a70b98fdac58cbeedfbf16542896ccaaa34e3f057fc83fa36dd8fe82b578a 0.0s
=> => exporting manifest list sha256:b9e0021de97c0783457abc25ad293ade6e50dff72b6ddc1ce1f56c86966dc199 0.0s
=> => sending tarball 0.2s
=> importing to docker 0.0s
docker image ls --tree myimage
IMAGE ID DISK USAGE CONTENT SIZE IN USE
myimage:latest b9e0021de97c 16.5MB 7.64MB
├─ linux/arm64 4d0f80eb6bfb 12.8MB 3.99MB
└─ linux/amd64 c10ae10bc386 3.64MB 3.64MB

💡 Also worth mentioning that with the containerd image store enabled, the default BuildKit builder included in the Docker Engine itself can build multi-arch images (assuming QEMU binfmt userland emulation is installed), so in that case:

docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default* docker
\_ default \_ default running v0.18.2 linux/amd64 (+2), linux/arm64, linux/ppc64le, linux/s390x, (2 more)
✅ With the above:

echo -e 'FROM alpine\nRUN echo foo > world\n' | docker build --platform linux/arm64,linux/amd64 -t myimage -
[+] Building 6.6s (10/10) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 71B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/alpine:latest 5.6s
=> [linux/arm64 internal] load metadata for docker.io/library/alpine:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
=> [linux/amd64 1/2] FROM docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45 0.6s
=> => resolve docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45 0.3s
=> => sha256:38a8310d387e375e0ec6fabe047a9149e8eb214073db9f461fee6251fd936a75 3.64MB / 3.64MB 0.2s
=> => extracting sha256:38a8310d387e375e0ec6fabe047a9149e8eb214073db9f461fee6251fd936a75 0.1s
=> [linux/arm64 1/2] FROM docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45 0.2s
=> => resolve docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45 0.2s
=> [linux/arm64 2/2] RUN echo foo > world 0.2s
=> [linux/amd64 2/2] RUN echo foo > world 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => exporting manifest sha256:83c75be9de6292c8784a7348083d10ab208b56defa9fa63f1c4dfa3f4ce2b22b 0.0s
=> => exporting config sha256:c50e646a64fbf8c7f67172932ee1e7e0ca2acd721622034b018fa312f3149b7b 0.0s
=> => exporting attestation manifest sha256:e0414ff2938bceca5f756735736a6fa242dc6429562a501fce51cb6659bbabc3 0.0s
=> => exporting manifest sha256:2bf9e92542e1cb89294516616c537df85dbe6e09cda4660996ceb5bb5ad2c8d3 0.0s
=> => exporting config sha256:61996b4d135d8fa98d962ae715288b87382442409b799c1c08b9241aed88259b 0.0s
=> => exporting attestation manifest sha256:18a70898ca31e74138870d4296b3910ccd0d499857635a56d0b6020c07876c0a 0.0s
=> => exporting manifest list sha256:dcf27d0b7b0811056692c3baed43bffe5c43bec262bcd10a510bec8f8165a315 0.0s
=> => naming to docker.io/library/myimage:latest 0.0s
=> => unpacking to docker.io/library/myimage:latest 0.0s
docker image ls --tree myimage
IMAGE ID DISK USAGE CONTENT SIZE IN USE
myimage:latest dcf27d0b7b08 16.5MB 7.64MB
├─ linux/arm64 83c75be9de62 12.8MB 3.99MB
└─ linux/amd64 2bf9e92542e1 3.65MB 3.65MB
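For readers who want to try the containerd image store path shown above: it is a daemon-level setting. A minimal sketch of `/etc/docker/daemon.json` (the path assumes a Linux install with defaults; the daemon must be restarted afterwards, and Docker Desktop exposes the same toggle in its settings UI):

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```

See the Docker documentation on the containerd image store for the details for your platform.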
Good to drop in a note so readers are aware -- great call @thaJeztah |
While trying to build images for multi-architecture (AMD64 and ARM64), I tried to load them into the Docker daemon with the --load parameter, but I got an error. I understand that the daemon can't see the manifest lists, but I believe there should be a way to tag the images with some variable, like:
docker buildx build --platform linux/arm64,linux/amd64 --load -t carlosedp/test:v1-$ARCH .
That would load both images into the daemon, ignoring the manifest list in this case.
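The `$ARCH` variable in the proposal above is not something buildx provides when loading; one way to approximate the idea today is to derive an architecture suffix on each runner. A sketch (the `uname -m` mapping below covers only common cases and the tag is the one from this issue, used illustratively):

```shell
# Sketch: derive a Docker-style architecture suffix for per-arch tags.
docker_arch() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    armv7l)        echo arm ;;
    *)             echo "$1" ;;   # fall through for uncommon machines
  esac
}

ARCH="$(docker_arch "$(uname -m)")"
echo "carlosedp/test:v1-$ARCH"
# Each runner then builds/loads its own single-arch image, e.g.:
#   docker buildx build --load -t "carlosedp/test:v1-$ARCH" .
```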