
Building images for multi-arch with --load parameter fails #59

Closed
carlosedp opened this issue May 2, 2019 · 50 comments · Fixed by #65

Comments

@carlosedp

carlosedp commented May 2, 2019

While building images for multiple architectures (AMD64 and ARM64), I tried to load them into the Docker daemon with the --load parameter, but got an error:

➜ docker buildx build --platform linux/arm64,linux/amd64 --load  -t carlosedp/test:v1  .
[+] Building 1.3s (24/24) FINISHED
 => [internal] load .dockerignore                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                     0.0s
 => => transferring dockerfile: 115B                                                                                                                                                     0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             0.8s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.0s
 => [linux/arm64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.2s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             1.2s
 => [linux/amd64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/amd64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => [internal] load build context                                                                                                                                                        0.0s
 => => transferring context: 232B                                                                                                                                                        0.0s
 => CACHED [linux/amd64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/amd64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/amd64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/amd64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/amd64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/amd64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => [linux/arm64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/arm64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => CACHED [linux/arm64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/arm64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/arm64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/arm64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/arm64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/arm64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => ERROR exporting to oci image format                                                                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists

I understand that the daemon can't handle manifest lists, but I believe there should be a way to tag the images with some variable, like:

docker buildx build --platform linux/arm64,linux/amd64 --load -t carlosedp/test:v1-$ARCH .

This would load both images into the daemon, ignoring the manifest list in this case.
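The per-arch tagging idea above can be sketched as a small loop; this is a hypothetical workaround (the repo name, tag, and platform list are examples), shown here by printing the commands it would run rather than executing them:

```shell
# Sketch of the per-arch tagging workaround suggested above.
# REPO/TAG and the platform list are examples, not buildx features.
REPO=carlosedp/test
TAG=v1
for PLATFORM in linux/arm64 linux/amd64; do
  ARCH="${PLATFORM#linux/}"   # strip the "linux/" prefix, e.g. "arm64"
  echo "docker buildx build --platform ${PLATFORM} --load -t ${REPO}:${TAG}-${ARCH} ."
done
```

Each iteration is a single-platform build, so --load works; the trade-off is that no manifest list is produced.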

@tonistiigi
Member

The limitation is temporary, as the daemon should get support for loading multi-arch images with moby/moby#38738, so I'm a bit hesitant to add a custom implementation for it at the moment.

@mcamou

mcamou commented Oct 28, 2019

Hi, this issue is 5 months old and the linked PR (moby/moby#38738) is still in Draft. Any news?

@EdoFede

EdoFede commented Jan 19, 2020

Hi,
I have the same issue as the original poster: building a multi-arch image and loading it into the local Docker instance for testing purposes.

I want to build and run some local tests before pushing the image to the repository.
Previously I used the standard build command (tagging each architecture) with QEMU for this purpose.

Build runs fine, but at the end...

 => ERROR exporting to oci image format                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists
Build failed
make: *** [build] Error 3

Here's my environment:

docker version

Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:22:34 2019
 OS/Arch:           darwin/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:29:19 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

docker buildx version

github.com/docker/buildx v0.3.1-tp-docker 6db68d029599c6710a32aa7adcba8e5a344795a7

@git-developer

[...] trying to build multi-arch image and loading to local docker instance for testing purposes.

Depending on the use case, it may be sufficient to run tests on the runner's platform only. If so, this issue can be avoided by omitting the platform parameter when loading. Example:

  • Build: docker buildx build --platform linux/arm64,linux/amd64 -t foo/bar:latest .
  • Test: docker buildx build --load -t foo/bar:latest .

@tonistiigi
Member

@git-developer The current limitation is the combination of --load and multiple values for --platform. E.g. --platform linux/arm64 --load works fine.

@Zhang21

Zhang21 commented Jul 16, 2020

You should use --push for multi-platform builds, and --load for a single platform.

@Filius-Patris

The pipeline in the project I'm working on runs tests before pushing. Do I have to build the image once for the test and then a second time for production?

Is there any workaround to run a multiarch container before pushing it to a repo?

@tonistiigi
Member

@Filius-Patris Your options are:

  • run tests as part of the build
  • build a single-arch image, test with docker run, then build the multi-arch image. Cache from the first build will be reused.
  • use a local registry
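The local-registry option above can be sketched as follows. The image name, port, and final run command are placeholder examples; the commands are printed here for illustration rather than executed:

```shell
# Sketch of the "use a local registry" option. localhost:5000/myimage is a
# placeholder name; printed rather than executed here.
cat <<'EOF'
docker run -d --name registry -p 5000:5000 registry:2
docker buildx build --platform linux/arm64,linux/amd64 \
  -t localhost:5000/myimage:test --push .
docker run --rm localhost:5000/myimage:test
EOF
```

Pushing to a registry sidesteps the docker exporter entirely, so manifest lists work, and the daemon can then pull whichever platform it needs for testing.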

kleimkuhler pushed a commit to linkerd/linkerd2 that referenced this issue Aug 5, 2020
Build ARM docker images in the release workflow.

# Changes:
- Add new env keys `DOCKER_MULTIARCH` and `DOCKER_PUSH`. When set, it will build multi-arch images and push them to the registry. See docker/buildx#59 for why they must be pushed to the registry.
- Usage of `crazy-max/ghaction-docker-buildx` is necessary as it is already configured with the ability to perform cross-compilation (using QEMU), so we can just use it instead of setting it up manually.
- Usage of `buildx` now makes the default global arguments available. (See: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)

# Follow-up:
- Releasing the CLI binary file for the ARM architecture. The Docker images resulting from these changes are already built for ARM. Still, we need further adjustments, such as retrieving those binaries and naming them correctly as part of the GitHub Release artifacts.

Signed-off-by: Ali Ariff <[email protected]>
@alexellis

alexellis commented Oct 23, 2020

I also landed here and I'm not sure what the options are; on a practical level it feels like a hack to mix "docker build" commands and docker buildx to get around the issue.

@thesix

thesix commented Nov 27, 2020

How would I tag an image with a version and latest?

@robtaylor

I'm hitting this as well.. any updates?

mergify bot added a commit to spinnaker/orca that referenced this issue Jun 11, 2024
…ocker image (#4721) (#4739)

* feat(docker): add HEALTHCHECK

to facilitate testing container startup

* feat(build): add orca-integration module to exercise the just-built docker image

* feat(gha): run integration test in pr builds

multi-arch with --load doesn't work, so add a separate step using the local platform to
make an image available for testing.

see docker/buildx#59

* feat(gha): run integration test in branch builds

(cherry picked from commit b360ad5)

Co-authored-by: David Byron <[email protected]>
@winterrobert

I just want to have the ability to load AND build multi-arch in our pipelines:

- name: Build the image
  uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/amd64, linux/arm64
    load: true

It's doable in two steps (build a single-arch image, test, then build the multi-arch image); it just feels like a waste.

aman-agrawal pushed a commit to aman-agrawal/clouddriver that referenced this issue Jul 5, 2024
…ocker imageTest docker image (spinnaker#6206) (spinnaker#6227)

* fix(web): replace deprecated spring.profiles in configuration

with spring.config.activate.on-profile to remove these warnings:

2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 375:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 363:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 350:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 312:13]

See https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-Config-Data-Migration-Guide#profile-specific-documents.

* feat(docker): add HEALTHCHECK

to facilitate testing container startup

* feat(build): add clouddriver-integration module to exercise the just-built docker image

* feat(gha): run integration test in pr builds

multi-arch with --load doesn't work, so add a separate step using the local platform to
make an image available for testing.

see docker/buildx#59

* feat(gha): run integration test in branch builds

* fix(docker): reduce the chance for false positives in the health check

In case the health check contains more detailed information where one check could report UP but the overall status is down/out of service/etc.

See https://docs.spring.io/spring-boot/docs/2.6.15/reference/html/actuator.html#actuator.endpoints.health for more.

(cherry picked from commit 9ea2224)

Co-authored-by: David Byron <[email protected]>
aman-agrawal pushed a commit to OpsMx/clouddriver-oes that referenced this issue Jul 8, 2024
…ocker imageTest docker image (spinnaker#6206) (spinnaker#6227)

* fix(web): replace deprecated spring.profiles in configuration

with spring.config.activate.on-profile to remove these warnings:

2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 375:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 363:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 350:13]
2024-05-01 21:29:23.746  WARN 1 --- [           main] o.s.b.c.config.ConfigDataEnvironment     : Property 'spring.profiles' imported from location 'class path resource [clouddriver.yml]' is invalid and should be replaced with 'spring.config.activate.on-profile' [origin: class path resource [clouddriver.yml] - 312:13]

See https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-Config-Data-Migration-Guide#profile-specific-documents.

* feat(docker): add HEALTHCHECK

to facilitate testing container startup

* feat(build): add clouddriver-integration module to exercise the just-built docker image

* feat(gha): run integration test in pr builds

multi-arch with --load doesn't work, so add a separate step using the local platform to
make an image available for testing.

see docker/buildx#59

* feat(gha): run integration test in branch builds

* fix(docker): reduce the chance for false positives in the health check

In case the health check contains more detailed information where one check could report UP but the overall status is down/out of service/etc.

See https://docs.spring.io/spring-boot/docs/2.6.15/reference/html/actuator.html#actuator.endpoints.health for more.

(cherry picked from commit 9ea2224)

Co-authored-by: David Byron <[email protected]>
ryanpeach added a commit to ryanpeach/shell that referenced this issue Aug 7, 2024
jsuereth added a commit to jsuereth/weaver that referenced this issue Aug 28, 2024
lquerel added a commit to open-telemetry/weaver that referenced this issue Sep 9, 2024
* Fix #245 - Build ARM64 image.

* Attempt to work around: docker/buildx#59

* Only build arm64 on release.

---------

Co-authored-by: Laurent Quérel <[email protected]>
riftEmber added a commit to riftEmber/chapel that referenced this issue Oct 7, 2024
Due to Docker limitation docker/buildx#59

Signed-off-by: Anna Rift <[email protected]>
riftEmber added a commit to chapel-lang/chapel that referenced this issue Oct 10, 2024
Refactor the Docker nightly testing script to optionally also push
release-tagged images.

Configured by the `RELEASE_VERSION` environment variable, assumed to get
set via Jenkins parameter. If set, the script pushes the images tagged
as `latest` and `$RELEASE_VERSION`. All nightly build/pushes still run
before release-tagged pushes, so we don't push a release if any image
variants are broken. Script aborts without building anything if
`RELEASE_VERSION` is set but we're not on the appropriate release
branch.

As part of this PR I attempted to change our process to test the image
before pushing, rather than build, push, then test. However, on
`chapelmac-m1` this ran into docker/buildx#59,
so I reverted it. Since we only push release images after all nightly
images, we're still safe from pushing a broken release-tagged image.
Noted this limitation and the requirement to build nightly images first
in a comment.

Also includes:
- delete unused `util/cron/publish-docker-images.bash` which didn't
contain as much functionality as `test-docker.bash`
- more comments and some small refactors for clarity

Resolves Cray/chapel-private#6743.

[reviewed by @tzinsky , thanks!]

Associated pre-merge tasks:
- [x] merge corresponding CI config adjustments PR
https://github.hpe.com/hpe/hpc-chapel-ci-config/pull/1291
- [x] disable unused previous job
https://chapel-ci.us.cray.com/job/publish-docker-images/
- [x] update Docker release best-practices to tell you to use this
script
(https://github.hpe.com/hpe/hpc-chapel-docs/commit/d159f7e8a0be122660cd8af39e3b49ddb8b59486)
- [x] delete temporarily private Docker repositories created for testing
(`chapel-test{,-gasnet,-gasnet-smp}`)

Testing:
- [x] manual run in non-release mode still works
- [x] manual run in release mode (temporarily modified to push to a
scratch repo)
@ro0NL

ro0NL commented Dec 24, 2024

I just want to have the ability to load AND build multi-arch in our pipelines

@ro0NL

ro0NL commented Dec 24, 2024

The fact that build AND push is a single action in GHA is insane, btw.

@BYK

BYK commented Jan 23, 2025

Just came here as I was using the GitHub Action. May I propose adding a more helpful error message pointing to some docs, or even a specific, chosen comment from this thread, to help people solve this instead of Googling and skimming through ~300 comments (and also adding more spam to this already busy issue)?

@thaJeztah
Member

So, this scenario is supported now if the docker engine is configured to use the containerd image store;

@crazy-max @thompson-shaun wondering if we also should provide information about that (and/or add a link to the documentation?)
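For readers looking for the setting itself: the containerd image store is enabled via the documented `containerd-snapshotter` feature flag in the daemon configuration (restart the daemon after changing it). A minimal `/etc/docker/daemon.json` sketch, printed here for reference:

```shell
# Minimal /etc/docker/daemon.json enabling the containerd image store.
# Restart the Docker daemon after applying; printed here for reference.
cat <<'EOF'
{
  "features": {
    "containerd-snapshotter": true
  }
}
EOF
```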

❌ Without the containerd image store enabled, it still produces an error;

docker buildx create --use
competent_tesla

echo -e 'FROM alpine\nRUN echo foo > world\n' | docker buildx build --platform linux/arm64,linux/amd64 --load -t myimage -
[+] Building 6.3s (1/1) FINISHED                                                                           docker-container:gallant_mendel
 => [internal] booting buildkit                                                                                                       6.3s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                    5.9s
 => => creating container buildx_buildkit_gallant_mendel0                                                                             0.4s
ERROR: docker exporter does not currently support exporting manifest lists

✅ With the containerd image store enabled, the docker engine is able to store multi-platform images, and as such can load them when you're using a separate container builder.

docker buildx create --use
competent_tesla

echo -e 'FROM alpine\nRUN echo foo > world\n' | docker buildx build --platform linux/arm64,linux/amd64 --load -t myimage -
[+] Building 17.3s (12/12) FINISHED                                                                               docker-container:competent_tesla
 => [internal] booting buildkit                                                                                                               8.6s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                            8.1s
 => => creating container buildx_buildkit_competent_tesla0                                                                                    0.5s
 => [internal] load build definition from Dockerfile                                                                                          0.0s
 => => transferring dockerfile: 71B                                                                                                           0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                  7.8s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                  7.5s
 => [auth] library/alpine:pull token for registry-1.docker.io                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                             0.0s
 => => transferring context: 2B                                                                                                               0.0s
 => [linux/arm64 1/2] FROM docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099            0.3s
 => => resolve docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099                        0.0s
 => => sha256:52f827f723504aa3325bb5a54247f0dc4b92bb72569525bc951532c4ef679bd4 3.99MB / 3.99MB                                                0.2s
 => => extracting sha256:52f827f723504aa3325bb5a54247f0dc4b92bb72569525bc951532c4ef679bd4                                                     0.0s
 => [linux/amd64 1/2] FROM docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099            0.5s
 => => resolve docker.io/library/alpine:latest@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099                        0.0s
 => => sha256:1f3e46996e2966e4faa5846e56e76e3748b7315e2ded61476c24403d592134f0 3.64MB / 3.64MB                                                0.4s
 => => extracting sha256:1f3e46996e2966e4faa5846e56e76e3748b7315e2ded61476c24403d592134f0                                                     0.1s
 => [linux/arm64 2/2] RUN echo foo > world                                                                                                    0.1s
 => [linux/amd64 2/2] RUN echo foo > world                                                                                                    0.1s
 => exporting to oci image format                                                                                                             0.2s
 => => exporting layers                                                                                                                       0.0s
 => => exporting manifest sha256:4d0f80eb6bfbad75574a8f065b466b0e5c889958d319483299a633d6a3f38cef                                             0.0s
 => => exporting config sha256:a5234e3bad6fffa52363de0056874cf063592db0c3a233633392f4c72973ff6a                                               0.0s
 => => exporting attestation manifest sha256:96dd7b7a7ef12fca0f6e90b831423ee15869a80f3964c0037401f6f488d12033                                 0.0s
 => => exporting manifest sha256:c10ae10bc386d455c0a1927f084dbefc443189cc08ece140c60c7f200651da95                                             0.0s
 => => exporting config sha256:83e4aea73a3c6887b176659da35b8ae71f1b0cec1a4c8fa4ec65c546942f445f                                               0.0s
 => => exporting attestation manifest sha256:130a70b98fdac58cbeedfbf16542896ccaaa34e3f057fc83fa36dd8fe82b578a                                 0.0s
 => => exporting manifest list sha256:b9e0021de97c0783457abc25ad293ade6e50dff72b6ddc1ce1f56c86966dc199                                        0.0s
 => => sending tarball                                                                                                                        0.2s
 => importing to docker                                                                                                                       0.0s


docker image ls --tree myimage
IMAGE                ID             DISK USAGE   CONTENT SIZE   IN USE
myimage:latest       b9e0021de97c       16.5MB         7.64MB
├─ linux/arm64       4d0f80eb6bfb       12.8MB         3.99MB
└─ linux/amd64       c10ae10bc386       3.64MB         3.64MB

💡💡💡 But it's also worth mentioning that with the containerd image store enabled, the default BuildKit builder included in the Docker Engine itself can build multi-arch images (assuming QEMU binfmt userland emulation is installed), so in that case docker buildx create is no longer needed, and the default builder will work;

docker buildx ls
NAME/NODE           DRIVER/ENDPOINT     STATUS    BUILDKIT          PLATFORMS
default*            docker
 \_ default          \_ default         running   v0.18.2           linux/amd64 (+2), linux/arm64, linux/ppc64le, linux/s390x, (2 more)

✅ With the above, docker build (or docker buildx build) will do multi-platform builds, and the --load option is not needed when using the default builder;

echo -e 'FROM alpine\nRUN echo foo > world\n' | docker build --platform linux/arm64,linux/amd64 -t myimage -
[+] Building 6.6s (10/10) FINISHED                                                                                          docker:default
 => [internal] load build definition from Dockerfile                                                                                  0.0s
 => => transferring dockerfile: 71B                                                                                                   0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                          5.6s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                          0.0s
 => [internal] load .dockerignore                                                                                                     0.0s
 => => transferring context: 2B                                                                                                       0.0s
 => [auth] library/alpine:pull token for registry-1.docker.io                                                                         0.0s
 => [linux/amd64 1/2] FROM docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45    0.6s
 => => resolve docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45                0.3s
 => => sha256:38a8310d387e375e0ec6fabe047a9149e8eb214073db9f461fee6251fd936a75 3.64MB / 3.64MB                                        0.2s
 => => extracting sha256:38a8310d387e375e0ec6fabe047a9149e8eb214073db9f461fee6251fd936a75                                             0.1s
 => [linux/arm64 1/2] FROM docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45    0.2s
 => => resolve docker.io/library/alpine:latest@sha256:21dc6063fd678b478f57c0e13f47560d0ea4eeba26dfc947b2a4f81f686b9f45                0.2s
 => [linux/arm64 2/2] RUN echo foo > world                                                                                            0.2s
 => [linux/amd64 2/2] RUN echo foo > world                                                                                            0.1s
 => exporting to image                                                                                                                0.1s
 => => exporting layers                                                                                                               0.0s
 => => exporting manifest sha256:83c75be9de6292c8784a7348083d10ab208b56defa9fa63f1c4dfa3f4ce2b22b                                     0.0s
 => => exporting config sha256:c50e646a64fbf8c7f67172932ee1e7e0ca2acd721622034b018fa312f3149b7b                                       0.0s
 => => exporting attestation manifest sha256:e0414ff2938bceca5f756735736a6fa242dc6429562a501fce51cb6659bbabc3                         0.0s
 => => exporting manifest sha256:2bf9e92542e1cb89294516616c537df85dbe6e09cda4660996ceb5bb5ad2c8d3                                     0.0s
 => => exporting config sha256:61996b4d135d8fa98d962ae715288b87382442409b799c1c08b9241aed88259b                                       0.0s
 => => exporting attestation manifest sha256:18a70898ca31e74138870d4296b3910ccd0d499857635a56d0b6020c07876c0a                         0.0s
 => => exporting manifest list sha256:dcf27d0b7b0811056692c3baed43bffe5c43bec262bcd10a510bec8f8165a315                                0.0s
 => => naming to docker.io/library/myimage:latest                                                                                     0.0s
 => => unpacking to docker.io/library/myimage:latest                                                                                  0.0s

docker image ls --tree myimage
IMAGE                ID             DISK USAGE   CONTENT SIZE   IN USE
myimage:latest       dcf27d0b7b08       16.5MB         7.64MB
├─ linux/arm64       83c75be9de62       12.8MB         3.99MB
└─ linux/amd64       2bf9e92542e1       3.65MB         3.65MB

@thompson-shaun
Collaborator

Good to drop in a note so readers are aware -- great call @thaJeztah
