
Drone doesn't seem to cache layers #34

Closed
emilebosch opened this issue Jan 29, 2016 · 27 comments
@emilebosch

I'm using this setup but I still get no cache performance increase. I do see that it's loading and saving the image, but it doesn't seem to use the cached layers. Any idea? A normal build does use the layer cache.

Is there a specific reason why we don't just mount the Docker socket in the container and use the existing Docker daemon for building?

build:
  image: alpine:3.2

publish:
  docker:
    registry: XX
    username: XX
    password: $$REGISTRY_PASSWORD
    email: [email protected]
    repo: XX
    load: docker/image.tar
    save:
      destination: docker/image.tar
      tag: latest

notify:
  slack:
    webhook_url: XX
    channel: update
    username: drone

cache:
  mount:
    - docker/image.tar
@bradrydzewski
Member

I can't speak to the cache issue since this isn't something I've used personally, but perhaps @janeczku can comment since he was the one who introduced the feature.

Is there a specific reason why we don't just mount the Docker socket in the container and use the existing Docker daemon for building?

There are a few good reasons why we don't do this, but the most compelling is security. Mounting the host machine's Docker socket into the container would give your build environment root access to the host machine. This would make public facing Drone instances (such as the drone instance used to build this repository) vulnerable to a wide range of attacks.

@janeczku

@emilebosch Could you run your build with Docker debug output? See https://github.com/drone-plugins/drone-docker/blob/master/DOCS.md#troubleshooting

@emilebosch
Author

@janeczku I've heard this security argument before, though I don't think it needs to be so restricting. I understand the need, but I think a configuration option in Drone itself like "allow-unsafe-plugins" would mitigate this. Right now Drone decides on its own what is considered "good" and "safe", which is OK practice but severely limits my options. I would love to have direct access to the cluster it runs on so I can also schedule containers, do blue/green deployments, etc. Since we run all of Drone behind a firewall, this really would be nice for us.

@bradrydzewski
Member

There is no reason you have to use the docker plugin. You can mount the host machine's Docker socket into your build container using volumes [1] and then run docker build and docker push from the build section. The plugin is all about safety, but Drone won't stop you from doing almost exactly what you've described.

[1] http://readme.drone.io/usage/build_test/#volumes:fb92aa3346185c57f15afda861d465a3
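
A minimal sketch of that approach (image and registry names are illustrative; emilebosch posts a full working version a few comments below):

build:
  image: docker
  commands:
    - docker build -t registry.example.com/app .
    - docker push registry.example.com/app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock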

@emilebosch
Author

@bradrydzewski Awesome. Is there a FAQ or docs I could maybe update with a PR? I've tried to get this going but I didn't have enough actual leads. Thanks so much!

@bradrydzewski
Member

you can share a how-to here: https://discuss.drone.io/c/how-tos

@gtaylor

gtaylor commented Jan 31, 2016

@emilebosch If you end up posting to How-To's, we can Tweet your article with the Drone Twitter account and @-mention you if you've got an account. We'd probably end up linking to such an article in the docs eventually, too.

@emilebosch
Author

OK, so this is what I needed to get this puppy running:

build:
  image: alpine:3.2
  commands:
    - apk --update add docker                          # install the docker CLI
    - "export DOCKER_HOST=unix:///var/run/host.sock"   # point the CLI at the host daemon
    - docker login -u registry -p $$REGISTRY_PASSWORD -e [email protected] registry.XX.com
    - docker build -t registry.XX.com/ok .
    - docker push registry.XX.com/ok
  volumes:
    - /var/run/docker.sock:/var/run/host.sock:ro       # mount the host's Docker socket

Not very hard as you can see :)

@emilebosch
Author

Also, thanks @bradrydzewski for making Drone. It's pretty sick!

@webwurst

webwurst commented Feb 4, 2016

@emilebosch Did this really work for you? It seems to me that global secrets are not interpolated in build commands, not even with trusted/privileged enabled.

@emilebosch
Author

@webwurst Nope. But I started doing my own magic as described above, and that worked!

@webwurst

webwurst commented Feb 4, 2016

Ok, got it! Thanks for the clarification.

@chuyskywalker

I also hit this same issue where the Docker daemon (in the build container) simply refuses to use the imported cached layers, and I also went the route of host socket mounting. I totally understand that for publicly hosted instances (drone.io, etc.) this is a non-starter for a lot of reasons, but for privately run Drones it feels a lot saner to me: better cache usage, less "weird" of a setup. It'd be much appreciated if the docker plugin could support this, but the workaround above leads down a working path.

Don't forget to mark the repo as "trusted" in Drone, otherwise volume mounts aren't allowed. I ended up doing this:

build:
  image: centos:7
  environment:
    # Map global secrets into plain env vars here so they are never
    # echoed as part of the commands below.
    - DOCKER_HOST=unix:///tmp/host.sock
    - IMAGEID=USERNAME/REPONAME:$$BUILD_NUMBER-$$COMMIT
    - DOCKERHUB_USER=$$DOCKERHUB_USER
    - DOCKERHUB_PASS=$$DOCKERHUB_PASS
    - DOCKERHUB_EMAIL=$$DOCKERHUB_EMAIL
  volumes:
    - /var/run/docker.sock:/tmp/host.sock:rw
  commands:
    - |
      cat <<'EOF' > /etc/yum.repos.d/docker.repo
      [dockerrepo]
      name=Docker Repository
      baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
      enabled=1
      gpgcheck=1
      gpgkey=https://yum.dockerproject.org/gpg
      EOF
    - yum install -y docker-engine
    - docker build -t "$IMAGEID" .
    - docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS" -e "$DOCKERHUB_EMAIL"
    - docker push "$IMAGEID"

So that user/pass/email don't leak through the build log.

@bradrydzewski
Member

@chuyskywalker there is an open PR to improve this at #36

I totally understand that for publicly hosted instances (drone.io, etc.) this is a non-starter for a lot of reasons, but for privately run Drones it feels a lot saner to me.

The current solution works great for certain languages, such as Go, Rust, and others that compile binaries and don't need layer caching. Drone is optimized for these languages because I spend most of my time writing Go code, and that bias is reflected in the initial design. I fully acknowledge that for other languages (Python, PHP, Ruby) we still have a lot of work to do.

Note that I described this as the initial design. Plugins were just introduced in this latest version of Drone, and this is by no means the final implementation. If you have suggestions for improvement, we are definitely open to them. Also remember that you can write your own plugins, incubate them, and suggest they be included in the official plugin list.

Lastly, there are situations, even for private installations, where you don't want the host machine's Docker daemon exposed. Some users of Drone operate in highly regulated environments and want their build servers locked down. Some teams run 10+ builds per server and have run into race conditions (tagging and pushing images) when running multiple builds concurrently for the same repository.

We just need to take the above into consideration as we move forward. There are a lot of different use cases we need to consider. And, the good news is, we can have multiple docker plugins for different use cases if we need to optimize for specific languages or workflows.

@chuyskywalker

Appreciate the input. I'm actually super excited about Drone -- been wanting something like a self-hosted TravisCI for a while now. Jenkins can be... grating.

Some teams run 10+ builds per server and have run into race conditions (tagging and pushing images) when running multiple builds concurrently for the same repository.

Yeah, I was definitely thinking about this. Specifically:

    - docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS" -e "$DOCKERHUB_EMAIL"
    - docker push "$IMAGEID"

...is race condition central. Two builds with different credentials get up in the mix and suddenly things are failing/running into each other. In my particular case there's really only one endpoint and the build server is the authorized "push" user anyway, so it's a bit simpler. But since we're a CentOS shop and have a lot of "layered" builds, I've been working around early-adopter issues :)

@vh

vh commented Feb 17, 2016

@emilebosch, @chuyskywalker use the official Docker in Docker image. It is based on Alpine Linux.

build:
  image: docker
  commands:
    - docker login -u "$DOCKER_LOGIN" -p "$DOCKER_PASSWORD" -e "$DOCKER_EMAIL"
    - docker build -t <image> .
    - docker push <image>
  environment:
    - DOCKER_LOGIN=$$DOCKER_LOGIN
    - DOCKER_PASSWORD=$$DOCKER_PASSWORD
    - DOCKER_EMAIL=$$DOCKER_EMAIL
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

@chuyskywalker

use official Docker in Docker image

@budjizu What you described is not "Docker in Docker": it's using the docker CLI to "call out" to a daemon back on the host. This is, essentially, exactly what I and others have done.

"Docker in Docker", which Drone does by default, is actually running a docker daemon inside your container. That new daemon is where we're having trouble getting layers to be cached. This is an issue for us because, without layer caching, many docker build flows are not tenable. (As to why layer caching doesn't work with the load/save trick documented elsewhere, I don't know...)

@vh

vh commented Feb 17, 2016

@chuyskywalker Sorry, that might have sounded ambiguous; I meant the "Docker in Docker" image, not actually running Docker in Docker, of course. My example runs a little faster because it doesn't need to download Docker on every build, and the image contains the latest version of Docker (the Alpine repo contains the outdated 1.6).

@leshik

leshik commented Apr 18, 2016

@janeczku, layer caching doesn't work for me either. With the example from the docs, it loads image.tar into Docker, then immediately starts building the Dockerfile from step 1.
I downloaded image.tar to my local machine, loaded it into Docker, and tried to build; it worked flawlessly. I tried turning on debug, but it didn't help much — there was no helpful message between loading and building.

@erikgrinaker

erikgrinaker commented Apr 20, 2016

Docker 1.10 introduced content-addressable storage, which apparently breaks layer caching after save/load. I'm assuming this is related. See moby/moby#20380
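
A quick way to reproduce this outside Drone (hypothetical image name; the behavior is what the linked issue describes):

docker build -t demo .          # first build populates the layer cache
docker save -o demo.tar demo    # export the image and its layers
docker rmi demo                 # drop the image and its cache entries
docker load -i demo.tar         # re-import the layers
docker build -t demo .          # on 1.10+, rebuilds with no cache hits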

@leshik

leshik commented Apr 20, 2016

@erikgrinaker But I thought the docker plugin uses Docker 1.9.1, doesn't it?

@bradrydzewski
Member

I recommend testing out the docker:develop branch, which launches Docker with /drone/docker as the base directory. This lets you cache /drone/docker, including all image layers.

See this pull request https://github.com/drone-plugins/drone-docker/pull/36/files
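
A hedged sketch of the corresponding .drone.yml (the develop image tag and exact cache path are assumptions based on the PR description):

publish:
  docker:
    image: plugins/drone-docker:develop   # assumed tag for the develop branch
    registry: XX
    repo: XX

cache:
  mount:
    - /drone/docker   # the daemon's base directory, so layers persist across builds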

@leshik

leshik commented Apr 21, 2016

@bradrydzewski Thank you, now it works great!

@lewistaylor

lewistaylor commented May 20, 2016

@bradrydzewski @leshik

I might be missing something, but the reason loading a cached tar as an image isn't working is that pull=true is always part of the build command. If I knew any Go I would create a pull request that accepted no_pull or something like that in the drone-docker YAML args; unfortunately, I don't.

/usr/bin/docker build --pull=true --rm=true -f Dockerfile.build -t org/project:dev .

Help? :-D
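
For context: docker build itself defaults to --pull=false, so the plugin hard-coding --pull=true forces a registry pull of the base image and bypasses the locally loaded layers. A configurable version would effectively run (illustrative):

/usr/bin/docker build --pull=false --rm=true -f Dockerfile.build -t org/project:dev .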

@JeffDownie

@lewistaylor I came across the exact same issue, and I've made a PR that now allows you to set it as an option, if you want to try it out!

@emilebosch
Author

Closing this for now since it's merged in #110.

@vwxyzjn

vwxyzjn commented Dec 15, 2018

In the newest syntax, something like the following worked for me:

# New syntax
kind: pipeline
name: production

steps:
- name: docker  
  image: docker:18.06.1-ce-dind
  privileged: true
  environment:
    Salt:
      from_secret: Salt
    DOCKER_LAUNCH_DEBUG: true
    DOCKER_HOST: unix:///var/run/host.sock
    docker_username:
      from_secret: docker_username
    docker_password:
      from_secret: docker_password
  commands:
    - echo $Salt
    # - /usr/local/bin/dockerd-entrypoint.sh
    - docker login -u $docker_username -p $docker_password
    - apk add git
    - sh ./docker-build.sh
  volumes:
  - name: cache
    path: /var/run/host.sock

volumes:
- name: cache
  host:
    path: /var/run/docker.sock
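
(Note: host-path volumes like this one still require the repository to be marked trusted in Drone.)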
