
docker build multiple "--cache-from" docs clarification needed #8531

Closed
j3bb9z opened this issue Mar 26, 2019 · 9 comments
Labels
area/engine Issue affects Docker engine/daemon lifecycle/locked lifecycle/stale

Comments

j3bb9z commented Mar 26, 2019

Problem description

I think we need some clarification on the --cache-from argument in the docker image build docs.

How to pass multiple images?
docker build --cache-from first-image --cache-from second-image
or
docker build --cache-from first-image second-image

How does providing multiple --cache-from images work? Does it take the first image that is found (i.e., argument order matters)? Or does it pick the one with the most cached layers?

I'm trying to use it in CI, with multi-stage build. Is this the right way to do it?

Let's assume I have a Dockerfile with build step (FROM ... as builder) and setup step (FROM ...-alpine).

Also, let's assume we use git-flow and we want to use develop as base cache image, not last built tag (latest).

first step:

docker build \
    --target builder \
    --cache-from some-image-builder:develop \
    --cache-from some-image-builder:$GIT_REF

second step:

docker build \
    --cache-from some-image-builder:develop \
    --cache-from some-image-builder:$GIT_REF \
    --cache-from some-image:develop \
    --cache-from some-image:$GIT_REF

So, when building an image on a feature branch, the image from the branch's last build should be used as cache, and (if it's a new branch) it should fall back to develop, right?

Or am I mistaken? 🤔
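For what it's worth, here's a minimal sketch of how I'd run the first step in CI, assuming the classic (non-BuildKit) builder, where the --cache-from images need to be present locally to be usable; image names and $GIT_REF are the placeholders from above:

```shell
# Sketch only: pull the candidate cache sources first (the classic builder
# can only use images that exist locally). `|| true` tolerates a tag that
# doesn't exist yet, e.g. on the first build of a branch.
docker pull some-image-builder:develop      || true
docker pull some-image-builder:"$GIT_REF"   || true

docker build \
    --target builder \
    --cache-from some-image-builder:develop \
    --cache-from some-image-builder:"$GIT_REF" \
    -t some-image-builder:"$GIT_REF" \
    .
```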

Problem location

https://docs.docker.com/engine/reference/commandline/build/

Suggestions for a fix

I think these issues should be clarified in docs (docker build cli usage).

Any response here also would be greatly appreciated!


koreno commented Sep 21, 2019

And what if we don't pass --cache-from? it seems docker is sometimes smart enough to find cached layers by itself - when does that not work?


j3bb9z commented Oct 2, 2019

And what if we don't pass --cache-from? it seems docker is sometimes smart enough to find cached layers by itself - when does that not work?

I think it won't work if you have multi-stage builds with the final image stripped down. We build images for the intermediate stages using --target and --cache-from. Also, we build images for all branches. Will docker know that, when building feature/xyz, it should use the image from develop as cache?
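To make that concrete, a hypothetical sketch of the per-stage pattern (registry, image names, and tags are placeholders, not our actual setup):

```shell
# Hypothetical sketch: build the intermediate stage under its own tag and
# push it, so it can serve as a --cache-from source for later builds of
# this branch and for other branches. All names below are placeholders.
docker build \
    --target builder \
    --cache-from registry.example.com/app-builder:develop \
    -t registry.example.com/app-builder:feature-xyz \
    .
docker push registry.example.com/app-builder:feature-xyz
```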

@traci-morrison traci-morrison added the area/engine Issue affects Docker engine/daemon label Dec 10, 2019
@deepankar-j

Has anyone figured out how this works? I've seen some posts about using comma-separated values for the --cache-from param.

For example:

docker build \
    --target builder \
    --cache-from some-image-builder:develop,some-image-builder:$GIT_REF

I've tried it out and it seems to work on one build runner (GitLab), but on another build runner (a separate machine), it doesn't use the cache as I'd expect. Note that I do run a docker pull for both reference images before I run docker build.

It would be great to have more documentation about the cache-from param.


j3bb9z commented Feb 6, 2020

@deepankar-j we've done it like this

# build image with development dependencies
docker build \
    --target builder \
    --cache-from $BUILDER_IMAGE \
    --cache-from $BUILDER_IMAGE_BASE \
    -t $BUILDER_IMAGE \
    .

# push for reuse in other builds
docker push $BUILDER_IMAGE

# build downsized production image
docker build \
    --cache-from $BUILDER_IMAGE \
    --cache-from $BUILDER_IMAGE_BASE \
    --cache-from $PROD_IMAGE \
    --cache-from $PROD_IMAGE_BASE \
    -t $PROD_IMAGE \
    .

docker push $PROD_IMAGE

The BUILDER_ images contain development dependencies.
The PROD_ images contain only the build result and the dependencies needed to run it in production.
Images without the _BASE suffix are built from the current branch (they exist on subsequent pushes to the branch).
The _BASE images are built from the develop branch; they serve as cache for the first build of a branch.

Seems to work.
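For context, the variables might expand like this (hypothetical example values, not our real registry):

```shell
# Hypothetical example values for the variables used in the script above.
BRANCH_SLUG=feature-xyz                                     # current branch, sanitized for use as a tag
BUILDER_IMAGE=registry.example.com/app-builder:$BRANCH_SLUG
BUILDER_IMAGE_BASE=registry.example.com/app-builder:develop
PROD_IMAGE=registry.example.com/app:$BRANCH_SLUG
PROD_IMAGE_BASE=registry.example.com/app:develop
```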

@deepankar-j

Thanks @jacob87o2!

After I posted my comment, I found the following comment on an issue, which indicates that a comma-separated value can be supplied to --cache-from. I tried that approach and it seems to work too.

moby/moby#34715 (comment)

As per that comment, the order in which the arguments are provided seems to be important for cache matching.

Nonetheless, thank you for responding. Hopefully, this will help others in the future.

@williamareynolds

I think it's important to know the order in which these caches are checked, or to have some way to control the order. I currently have tags like 45 and then latest. I want to use the cache from someimage:45 before someimage:latest if it exists, but it seems to use latest no matter what, and this results in poor caching.
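If the sources are indeed consulted in the order given (the docs don't confirm this either way), listing the more specific tag first might be a workaround; a hedged sketch using the tags above:

```shell
# Sketch: pull both candidates, then list the build-specific tag before the
# broader fallback, on the (unconfirmed) assumption that earlier
# --cache-from entries take precedence.
docker pull someimage:45     || true
docker pull someimage:latest || true

docker build \
    --cache-from someimage:45 \
    --cache-from someimage:latest \
    -t someimage:46 \
    .
```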

@docker-robott

There hasn't been any activity on this issue for a long time.
If the problem is still relevant, mark the issue as fresh with a /remove-lifecycle stale comment.
If not, this issue will be closed in 14 days. This helps our maintainers focus on the active issues.

Prevent issues from auto-closing with a /lifecycle frozen comment.

/lifecycle stale

@SoftwareApe

SoftwareApe commented Jan 11, 2023

@williamareynolds When I specify multiple --cache-from flags, they are processed in order. I can't find the comma-separated refs documented anywhere.

This is also mentioned here, but sadly missing in the docs:
moby/moby#26839 (comment)

/remove-lifecycle stale

@docker-robott

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

/lifecycle locked

@docker docker locked and limited conversation to collaborators Feb 10, 2023

7 participants