
[Enhancement] Using "local" images from the host docker #19

Closed
ashb opened this issue Apr 30, 2019 · 60 comments

@ashb

ashb commented Apr 30, 2019

Since this appears to be some form of "DinD", Docker images built locally on the host are not visible to run in the k3s cluster.

Is there any workaround for this?

@zeerorg
Collaborator

zeerorg commented May 1, 2019

Currently there is no way. But the latest k3s supports a custom containerd config, so we could possibly create a local Docker registry; pushing to it should then be trivial. cc @iwilltry42

@iwilltry42 iwilltry42 added enhancement New feature or request help wanted Extra attention is needed labels May 2, 2019
@iwilltry42
Member

A big YES to this feature request!
That's also the most important feature for local development that I'm missing right now.

@Megzo

Megzo commented May 3, 2019

As a quick and temporary solution I would suggest adding a -v mount option to k3d, which is passed directly to the Docker daemon. k3s could then read local images from the /var/lib/rancher/k3s/agent/images directory, so you could do something like this:

docker save myimage -o $HOME/images/myimage.tar
k3d create -v $HOME/images:/var/lib/rancher/k3s/agent/images

@zeerorg
Collaborator

zeerorg commented May 3, 2019

There is already a volume mount feature in the latest update. Check out k3d create --help.

@iwilltry42 iwilltry42 changed the title Using "local" images from the host docker [Enhancement] Using "local" images from the host docker May 6, 2019
@iwilltry42
Member

Just to give an update on this: while you can certainly preload the images like @Megzo mentioned, I'd like to have the images shared directly without the need for docker save and containerd import.
I'm digging into image saving formats and paths now to get a grasp on how to do this 👍
Any suggestions are welcome :)

@iwilltry42 iwilltry42 self-assigned this May 7, 2019
@runningman84

runningman84 commented May 14, 2019

What about running a registry within k3s? Microk8s offers a similar feature out of the box:
https://itnext.io/working-with-image-registries-and-containerd-in-kubernetes-63c311b86368

@zeerorg
Collaborator

zeerorg commented May 19, 2019

@runningman84, I have the exact same idea. If @iwilltry42 has some time he can look into it. I think the best experience would be creating a local Docker registry and forwarding its port (5000) so that the user can push their images to this registry, and the k3s container should be able to pull images with the same prefix.

For example:
I push an image: localhost:5000/newfeature:latest from my command line.
The yaml file refers to: localhost:5000/newfeature:latest and the k3s container is able to pull the same image.
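The flow proposed above can be sketched as a small helper. This is a minimal sketch, assuming a registry is already reachable on localhost:5000; the registry address, the helper name, and the image name are all illustrative assumptions, not k3d defaults.

```shell
# Hypothetical helper: retag a locally built image for the local registry
# and push it there so the cluster can pull it under the same name.
tag_and_push() {
  local image="$1" registry="${2:-localhost:5000}"
  docker tag "$image" "$registry/$image"   # retag the locally built image
  docker push "$registry/$image"           # push it where the cluster can pull
}
# Usage: tag_and_push newfeature:latest
# A manifest would then reference: image: localhost:5000/newfeature:latest
```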

@goffinf

goffinf commented May 20, 2019

+1, being able to configure a local registry is a MUST HAVE

@ashb
Author

ashb commented May 28, 2019

A local registry would be a workaround for me; ideally I'd be sharing the same images as the Docker host (i.e. by volume-mounting something under /var/lib/docker, but that may not be possible).

@kajanth

kajanth commented Jun 1, 2019

Could kaniko be used to solve this issue, since it can build an image without depending on the Docker daemon?

@silasb

silasb commented Jun 5, 2019

kaniko is an interesting approach, but I don't see it being much better than docker build ... && docker save ....

Both of these solutions require you to store your code in multiple places. With kaniko you'd store it in:

  • raw source
  • kaniko container
  • kaniko export artifact (tarball)
  • k3s containerd

I like @iwilltry42's solution of trying to somehow make the Docker image format work with the containerd image format, or at least that's what I think he's trying to get to.

Ultimately, I think moby/moby#38043 needs to get merged in before we can easily achieve having docker images shared.

@iwilltry42
Member

iwilltry42 commented Jun 11, 2019

So I just started working on this.
Unfortunately it's not that easy and a registry might be the best option for now.
Problems that I faced: we don't have ctr in k3s, and the available crictl doesn't have functionality to import images. Also, we cannot easily connect to the containerd.sock inside k3d and use the containerd client, since (you guessed it) it's hidden inside the container (or locked by access rights in the local overlayfs of Docker).
There's the preloadImages function in k3s that we might ask to expose in a newer version though...

UPDATE 1: My original idea to simply share files won't work, since the image storage formats differ too much, and people might also use different storage drivers in Docker, which would create the need to support all of them.

I see those two options for now:

  1. have a registry running inside k3d or connect it to a running one
  2. somehow leverage the preloadImages functionality of k3s from the outside

@iwilltry42 iwilltry42 added this to the v2.0 milestone Jun 11, 2019
@kajanth

kajanth commented Jun 15, 2019

So we do this for our GitLab pipeline with k3s by mounting the Docker volume:

script:
  stage: script
  image: registry.com/image-dind:v1.0.3
  script:
    - /usr/local/bin/dind -- dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 &>docker.log &
    - k3s server --disable-agent --no-deploy traefik &>master.log &
    - sleep 30
    - k3s agent --server https://127.0.0.1:6443 --token-file /var/lib/rancher/k3s/server/node-token --docker &>minion.log &
    - cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

@iwilltry42
Member

Thanks for the input @kajanth.
Unfortunately the --docker option doesn't work here, since docker is not included in the k3s image that we use here (containerd only). That's why we have the image storage mismatch between host (docker) and k3d (containerd).
But certainly, creating a new image for k3d with e.g. dind and then mounting the dirs could work as well 👍

@silasb

silasb commented Jun 22, 2019

Came across another tool that might be interesting: https://github.com/containers/skopeo which supports OCI export. I haven't tried anything yet, but we might be able to mount the k3s OCI directory on the host and export Docker images via skopeo --insecure-policy copy docker://redis:2.8 oci:k3s-oci-images/redis:2.8

@iwilltry42
Member

On first glance this looks pretty cool @silasb!
Also, the guys maintaining k3s are thinking about integrating ctr in k3s, so we could leverage its power to import images 👍

@iwilltry42
Member

Just to give you an update on this: I created a working version of an import-image command based on docker save and ctr image import that will work as soon as ctr is included in upstream k3s (k3s-io/k3s#590)
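For reference, the steps this command automates can be sketched manually. This is a rough sketch under stated assumptions: the node name k3d-mycluster-server-0 follows k3d's default naming but is not guaranteed (check docker ps), and it presumes a k3s image that ships ctr, which at the time of this comment was still pending upstream.

```shell
# Hypothetical manual equivalent of the import-image command:
# export from the host's Docker image store, copy the tarball into the
# node container, then load it into containerd there.
import_image() {
  local image="$1" node="$2" tar
  tar=$(mktemp --suffix=.tar)
  docker save "$image" -o "$tar"                        # export from host Docker
  docker cp "$tar" "$node:/tmp/import.tar"              # copy into the node
  docker exec "$node" ctr image import /tmp/import.tar  # load into containerd
  rm -f "$tar"
}
# Usage: import_image myimage:latest k3d-mycluster-server-0
```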

@iwilltry42
Member

You can check out this release: https://github.com/rancher/k3d/releases/tag/v1.3.0-dev.0
Please read the Release Notes to see how it works.

@silasb

silasb commented Jul 1, 2019

@iwilltry42 this is awesome. I see that you are importing the images from the host (by saving to a tar) and then moving them to each worker via ctr image import tarball. It'd be nice to also support piping a tarball directly to import-image. I don't love programming for hypotheticals, but I could see a case where you might be exporting an image from a remote Docker instance (maybe on another computer you are building an image using https://github.com/genuinetools/img).

@iwilltry42
Member

@silasb good idea, I can certainly see this as a valid use-case 👍
I think it'd be good to add this via a --tar flag (or we could automatically check for the .tar file extension, but that might introduce the complexity of mixed statements).
I will work on this in a follow-up PR after #83 got merged.

@Roming22

Roming22 commented Mar 10, 2021

@iwilltry42 Thanks. By adding --registry-config="registry.yml" to the k3d cluster create, I can see the images being cached in the registry as expected.

registry.yml:

mirrors:
  docker.io:
    endpoint:
      - http://k3d-registry.localhost:5000

If I understand correctly, the Docker image I've created knows to pull missing images from docker.io, the --registry-use flag makes the existing registry available to the new cluster, and --registry-config routes docker.io traffic to my registry instead.

@bademux

bademux commented Apr 10, 2021

Hi,
Sorry for my question, but I'm not sure how this is supposed to work:
https://k3d.io/usage/guides/registries/#using-a-local-registry
Check the k3d command output ... to find the exposed port
It looks like port randomisation will be a problem here that affects the usability of a dev cluster.

docker push k3d-mycluster-registry:12345/testimage:local
There is no k3d-mycluster-registry domain available by default, only localhost:12345.

Upd:
How about some usable defaults that would allow painless k3d usage in a dev env?
This blocks the human-friendly solution moby/moby#38043 (as per #19 (comment))

@iwilltry42
Member

iwilltry42 commented Apr 13, 2021

Hi @bademux , thanks for your input 👍

Looks like port randomisation will be problem here that affects usability of dev cluster

You can choose a port via k3d cluster create --port 1234 or k3d registry create myregistry.localhost --port 5432.

There is no k3d-mycluster-registry domain available by default, only localhost:12345.

Registry names have to match, so you need some way to resolve the registry name to localhost, e.g. via /etc/hosts, dnsmasq, or similar tools.
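A minimal sketch of the /etc/hosts approach, assuming the registry name k3d-mycluster-registry (use the actual name from k3d registry list). Since editing /etc/hosts requires root, the sketch prepares a copy to review before applying:

```shell
# Map the registry name to the loopback address so the host's `docker push`
# can resolve it. The registry name here is an example.
hosts_entry='127.0.0.1 k3d-mycluster-registry'
cp /etc/hosts /tmp/hosts.new
# Append the entry only if it is not already present.
grep -qF "$hosts_entry" /tmp/hosts.new || echo "$hosts_entry" >> /tmp/hosts.new
# After reviewing: sudo cp /tmp/hosts.new /etc/hosts
```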

@bademux

bademux commented Apr 17, 2021

Here is a straightforward solution for sharing a Docker image pushed to the localhost:5000 registry with a k3d cluster:

Create registry
k3d registry create registry.localhost --port 5000

Create file registries.yaml with content:

mirrors:
  "localhost:5000":
    endpoint:
      - http://k3d-registry.localhost:5000

Create Cluster with registry and expose port
k3d cluster create mycluster -p "8081:80@loadbalancer" --registry-use k3d-registry.localhost:5000 --registry-config registries.yaml

Now you can push with docker push localhost:5000/myimage:latest and then use the image: kubectl run myimage --image localhost:5000/myimage:latest

@CarlosChiarelli

CarlosChiarelli commented Jul 18, 2021

It works! Thanks

My pod file:

apiVersion: v1
kind: Pod
metadata:
  name: api-flask-pod
spec:
  containers:
    - name: flask-api
      image: k3d-registry.localhost:5002/api_flask:v0.1
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 5001

@atrakic

atrakic commented Jul 19, 2021

Also works for me. It took me a while to figure out the k3d- prefix:

$ k3d registry list
NAME                     ROLE       CLUSTER   STATUS
k3d-registry.localhost   registry             running  
docker pull nginx:latest
docker tag nginx:latest localhost:5000/nginx:latest
docker push localhost:5000/nginx:latest

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: k3d-registry.localhost:5000/nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF

kubectl get pods -l "app=nginx-test-registry"

@kskdermolab

Here is a straightforward solution for sharing a Docker image pushed to the localhost:5000 registry with a k3d cluster

Create registry k3d registry create registry.localhost --port 5000

Create file registries.yaml with content:

mirrors:
  "localhost:5000":
    endpoint:
      - http://k3d-registry.localhost:5000

Create Cluster with registry and expose port k3d cluster create mycluster -p "8081:80@loadbalancer" --registry-use k3d-registry.localhost:5000 --registry-config registries.yaml

Now you can push with docker push localhost:5000/myimage:latest and then use the image: kubectl run myimage --image localhost:5000/myimage:latest

Is there any way to do this without creating a new cluster?
I also tried k3d images import <local-image> -c <mycluster> but it took too long, even when the image was previously imported.

@ciekawy

ciekawy commented Feb 12, 2022

How is k8s in Docker for Mac able to access local images? Couldn't it be done in the same way?

@iwilltry42
Member

@MGReyes , the registries.yaml is translated to the containerd TOML config, which cannot be hot-reloaded, so a cluster restart would definitely be necessary. We could look into adding a flag to cluster edit to update the registries config (with a cluster restart), if that's good enough (at least you wouldn't need to create a new cluster). WDYT? If that's fine, please create a feature request.

@iwilltry42
Member

How is k8s in Docker for Mac able to access local images? Couldn't it be done in the same way?

@ciekawy DfD (Docker for Desktop) runs Kubernetes with Docker as the container runtime. K3s (the Kubernetes distro in k3d) uses plain containerd, so we cannot share the image repository.

@ciekawy

ciekawy commented Feb 17, 2022 via email

@iwilltry42
Member

Not sure what you mean by proxy in this context, but you can use a local registry.
Unfortunately, Docker continues to use a different image storage setup than containerd (even though it uses containerd under the hood, see the thread above), so there's not much we can do.

@mgoltzsche

mgoltzsche commented Feb 28, 2023

FWIW, k3s supports the --docker flag, allowing it to use the host's Docker installation (via cri-dockerd nowadays) instead of containerd. Using this flag allows for faster turnarounds when working locally, since you don't need to push your locally built images into a registry (and you don't need to couple your local development setup too tightly to k3s/k3d).

@B-Galati

B-Galati commented Jun 9, 2023

@mgoltzsche How would you set that up to work with k3d? Would it be possible to mount a volume inside the k3d cluster to share local Docker images with the cluster?

@mgoltzsche

mgoltzsche commented Jun 14, 2023

To make the Docker integration work, the --docker CLI option must be provided to k3s' server command (making it use cri-dockerd instead of containerd), the docker.sock must be mounted at /var/run/docker.sock, and the /var/lib/cni directory must be mounted, as well as the following directories, which require bidirectional mount propagation:

  • /var/lib/docker
  • /var/lib/kubelet

Here is how to run a k3s server (single node cluster) using docker directly:

mkdir -p /var/lib/rancher/k3s /var/lib/kubelet /var/lib/cni
docker run --rm --privileged --network=host --pid=host \
	--tmpfs=/run --tmpfs=/var/run \
	--mount type=bind,src=/etc/machine-id,dst=/etc/machine-id \
	--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
	--mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,bind-propagation=rshared \
	--mount type=bind,src=/var/lib/kubelet,dst=/var/lib/kubelet,bind-propagation=rshared \
	--mount type=bind,src=/var/lib/cni,dst=/var/lib/cni \
	--mount type=bind,src=/sys,dst=/sys \
	--mount type=bind,src=/var/lib/rancher/k3s,dst=/var/lib/rancher/k3s,bind-propagation=rshared \
	--mount type=bind,src="`pwd`",dst=/output \
	-e K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml \
	-e K3S_KUBECONFIG_MODE=666 \
	rancher/k3s:v1.27.2-k3s1 server --docker

However, I cannot make this work using k3d, since it doesn't let me specify the mount propagation (it exposes only docker's -v option, not the --mount option), and since it would work only for a single-node cluster anyway. In the absence of those issues, the command could look as follows (not working!):

k3d cluster create mycluster --servers=1 --agents=0 --k3s-arg='--docker@server:0' -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker -v /var/lib/kubelet:/var/lib/kubelet -v /var/lib/cni:/var/lib/cni

@B-Galati

Thanks @mgoltzsche!

@iwilltry42
Member

@mgoltzsche -v /var/lib/docker:/var/lib/docker:rshared@server:0 for propagation opts 👍

In general, there is experimental support for using the containerd-snapshotter in Docker v24, so we could give that a try at some point.

@vaggeliskls

vaggeliskls commented Feb 14, 2024

@mgoltzsche Have you come up with any solution for using the k3s --docker argument with k3d?
Version:

k3d version v5.6.0
k3s version v1.27.4-k3s1 (default)

I have made two attempts:

  1. k3d cluster create evoml --servers=1 --agents=0 --k3s-arg "--disable=traefik@server:0" --k3s-arg='--docker@server:0' -v /var/run/docker.sock:/var/run/docker.sock@server:0 -v /var/lib/docker:/var/lib/docker:rshared@server:0 -v /var/lib/kubelet:/var/lib/kubelet:rshared@server:0 -v /var/lib/cni:/var/lib/cni@server:0
    It gets stuck on INFO[0011] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap..
  2. k3d cluster create evoml --network host --k3s-arg "--disable=traefik@server:0" --k3s-arg='--docker@server:0' -v /var/run/docker.sock:/var/run/docker.sock@server:0 -v /var/lib/docker:/var/lib/docker:rshared@server:0 -v /var/lib/kubelet:/var/lib/kubelet:rshared@server:0 -v /var/lib/cni:/var/lib/cni@server:0
    The single node is created successfully, but the load balancer is not created:
INFO[0000] [SimpleConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default
INFO[0000] [ClusterConfig] Hostnetwork selected - disabling injection of docker host into the cluster, server load balancer and setting the api port to the k3s default

So I cannot connect to the node or see the pods:
E0214 11:42:13.539408 688265 memcache.go:265] couldn't get current server API group list: Get "https://0.0.0.0:6443/api?timeout=32s": dial tcp 0.0.0.0:6443: connect: connection refused

@mgoltzsche

mgoltzsche commented Feb 15, 2024

On my machine (Ubuntu), cluster creation using your commands succeeds, but no pod/container starts afterwards due to a CNI error:

$ kubectl describe pod -l k8s-app=kube-dns -n kube-system
...
  Warning  FailedCreatePodSandBox  63s (x4 over 66s)   kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "752a6925128f6a1438216c6fb72d9845613917efba30538959d1637548087d1b" network for pod "coredns-77ccd57875-x4nb2": networkPlugin cni failed to set up pod "coredns-77ccd57875-x4nb2_kube-system" network: plugin type="loopback" failed (add): failed to Statfs "/proc/123862/ns/net": no such file or directory

Adding the --host-pid-mode option (corresponding to docker's --pid=host) to the cluster creation command solves the problem:

k3d cluster create mycluster --servers=1 --agents=0 --network host --host-pid-mode --k3s-arg='--disable=traefik@server:0' --k3s-arg='--docker@server:0' -v /var/run/docker.sock:/var/run/docker.sock@server:0 -v /var/lib/docker:/var/lib/docker:rshared@server:0 -v /var/lib/kubelet:/var/lib/kubelet:rshared@server:0 -v /var/lib/cni:/var/lib/cni@server:0

The load balancer is not created since the option --k3s-arg='--disable=traefik@server:0' disables it.

@ligfx
Contributor

ligfx commented Feb 27, 2024

I ran into this issue as well (it came up because I kept hitting docker.io rate limits), so I created a custom registry that proxies all image requests to the host Docker instance: https://github.com/ligfx/k3d-registry-dockerd

I use it like so:

configfile=$(mktemp)
cat << HERE > "$configfile"
apiVersion: k3d.io/v1alpha5
kind: Simple
registries:
  create:
    image: ligfx/k3d-registry-dockerd:v0.1
    proxy:
      remoteURL: "*"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
HERE
k3d cluster create mytest --config "$configfile"

It's also had the side effect of significantly speeding up cluster creation and pod rollout, which is nice!

@cowwoc

cowwoc commented Jan 26, 2025

@ligfx Dude, I could kiss you. Your solution has saved me tons of time and reduced the pain of building a new cluster. Thanks a lot!!

@iwilltry42
Member

@ligfx this looks like a pretty smooth solution, thank you! - I wonder if we could tightly integrate this into k3d.
Maybe this effort can even be combined (if necessary) with the K3s embedded Spegel registry: https://docs.k3s.io/installation/registry-mirror

@cowwoc

cowwoc commented Jan 28, 2025

@iwilltry42 That would be great. The only complaint I have about https://github.com/ligfx/k3d-registry-dockerd is that it makes it harder for me to find my own images when I run docker image list. But it's hard to argue with the ease of use; it is way ahead of all other solutions.

Do you plan to open a separate issue to track this feature request?

@ligfx
Contributor

ligfx commented Feb 14, 2025

@iwilltry42 I’m open to that. How do you envision that working?

@iwilltry42
Member

Tracking over here: #1555
