
Custom CRI-O RuntimeClass (runsc) - How to load it properly? #10242

Closed
h4n0sh1 opened this issue Jan 23, 2021 · 7 comments
Labels
  • addon/gvisor
  • co/runtime/crio (CRIO related issues)
  • kind/support (Categorizes issue or PR as a support question.)
  • lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
  • long-term-support (Long-term support issues that can't be fixed in code)

Comments


h4n0sh1 commented Jan 23, 2021

Hi,

I have a working minikube cluster inside my Kali VM on VMware (Kali Rolling 2020, based on Debian 10 "buster"), using VirtualBox as the minikube driver (nested virtualisation).

I'm trying to run a pod with a custom CRI-O-compatible runtime compiled from source: an old version of runsc pulled from gVisor's repo as of 01 August 2018.

There was a similar attempt that actually worked here.

Steps to reproduce the issue:

  1. minikube start --vm-driver=virtualbox --container-runtime=crio --cri-socket=/var/run/crio/crio.sock

  2. pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: gvisor-crio
spec:
  runtimeClassName: runsc
  containers:
  - name: gvisor-crio
    image: ubuntu
    command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]

  3. /etc/crio/crio.conf (on my Kali host):

[crio.runtime.runtimes.runsc]
runtime_path = "/usr/local/bin/runsc"
runtime_type = "oci"

  4. systemctl restart crio

  5. $(pwd)/runsc-runtimeclass.yaml:

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: runsc
handler: runsc

  6. kubectl apply -f $(pwd)/runsc-runtimeclass.yaml

  7. kubectl apply -f pod.yaml

Full output of failed command:

Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Normal   Scheduled               4m40s                default-scheduler  Successfully assigned default/gvisor-crio to minikube
  Warning  FailedCreatePodSandBox  3s (x22 over 4m40s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to find runtime handler runsc from runtime list map[runc:0xc0000d5800]
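
The "failed to find runtime handler runsc" error shows that CRI-O inside the minikube VM only knows about runc. The VM keeps its own configuration, so the thing to check is the config it is actually running with, not the copy on the host, e.g.:

# Inspect the CRI-O config inside the minikube VM rather than on the host
minikube ssh "sudo cat /etc/crio/crio.conf"
minikube ssh "sudo ls /etc/crio/crio.conf.d/"          # drop-in configs, if that directory exists
minikube ssh "sudo systemctl status crio --no-pager"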

From there I realised that the CRI-O config wasn't being picked up by the minikube VM, so I SSHed into it and transferred both my crio.conf and the precompiled runtime. That's obviously a hack, and it didn't work as expected anyway: it complained about some missing parameters for a systemd-cgroup flag.
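
For completeness, this is roughly what I did (a sketch from memory rather than the exact commands, using minikube's default "docker" SSH user):

# Copy the prebuilt runsc binary into the minikube VM
scp -i $(minikube ssh-key) ./runsc docker@$(minikube ip):/tmp/runsc

# Inside the VM: install the binary and register the runtime with the VM's own CRI-O
minikube ssh
sudo cp /tmp/runsc /usr/local/bin/runsc && sudo chmod +x /usr/local/bin/runsc
sudo tee -a /etc/crio/crio.conf <<'EOF'
[crio.runtime.runtimes.runsc]
runtime_path = "/usr/local/bin/runsc"
runtime_type = "oci"
EOF
sudo systemctl restart crio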

There must be a better way to achieve what I'm trying to do. The bottom line: I just want to use my old custom gVisor runtime in a minikube cluster that uses CRI-O rather than containerd.

Full output of minikube start command used, if not already included:

😄 minikube v1.15.1 on Debian kali-rolling
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🎁 Preparing Kubernetes v1.19.4 on CRI-O 1.18.3 ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

priyawadhwa added the kind/support label on Jan 25, 2021
sharifelgamal added the co/runtime/crio label on Feb 3, 2021
@priyawadhwa

Hey @h4n0sh1 thanks for opening this issue. I'm not totally sure why this isn't working but I was wondering if you've tried our gvisor addon? It may not be exactly what you're looking for if you're trying to use a custom version of runsc, but perhaps looking into the implementation could provide some clues about how to integrate your custom runtime.

The addon can be enabled via

minikube addons enable gvisor

and the code for the gvisor addon lives here:

  • https://github.com/kubernetes/minikube/blob/master/pkg/gvisor/enable.go

Please let me know if that helps at all!
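
For example, something like this (just a sketch: the addon is documented against the containerd runtime, and it registers its own RuntimeClass, which I believe is named gvisor rather than runsc; check kubectl get runtimeclass for the exact name):

minikube addons enable gvisor
kubectl get runtimeclass

apiVersion: v1
kind: Pod
metadata:
  name: gvisor-addon-test    # example name
spec:
  runtimeClassName: gvisor   # use whatever name the addon registered
  containers:
  - name: gvisor-addon-test
    image: ubuntu
    command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]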

spowelljr added the long-term-support label and removed the triage/long-term-support label on May 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Aug 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 16, 2021
@sharifelgamal (Collaborator)

Our CRI-O runtime support is still a work in progress, but I believe we have made some big improvements in the past few months. Could you upgrade to the newest version of minikube and try again?
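
For example, with a current release (flag names per recent minikube; note that newer Kubernetes has graduated RuntimeClass, so the v1beta1 manifest above would need updating):

minikube delete
minikube start --driver=virtualbox --container-runtime=cri-o

# RuntimeClass on the v1 API (available since Kubernetes 1.20):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runsc
handler: runsc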

sharifelgamal removed the lifecycle/rotten label on Sep 22, 2021

k8s-ci-robot added the lifecycle/stale label on Dec 21, 2021

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 20, 2022
@sharifelgamal (Collaborator)

I'm going to go ahead and close this issue for now. Please reopen it if you retry this with a newer version of minikube and still have issues.
