This repository has been archived by the owner on Mar 30, 2023. It is now read-only.

wip k3s support #18

Closed
wants to merge 2 commits into from

Conversation

xavierzwirtz

@xavierzwirtz xavierzwirtz commented Jan 28, 2020

A very, very rough draft of k3s support. This adds a distro setting to the test, allowing you to swap between NixOS and k3s. k3s is mostly working; however, there are a few issues:

  • k3s allows you to pass in `--docker` to use Docker as the container runtime. I was unable to get this working at first: when Docker went to run an image, it would print an error about too many symbolic link layers. This is now working.
  • I did not know how the local DNS should work, so I was unable to get the curl request to nginx to actually succeed. This is now working.

I think other than that, the bones of the k3s support are in place. Happy to work on this more to get it up to snuff.

@xavierzwirtz
Author

The only issue that keeps the k8s-deployment test from passing with k3s now is DNS. Not sure what the issue is there yet.

@xavierzwirtz
Author

The DNS issue is now resolved. Single-node tests can now be run on k3s by adding distro = "k3s"; to the test's config. I duplicated the k8s-deployment test as k8s-deployment-k3s. @offlinehacker I would be interested to hear your thoughts on a backend-independent testing abstraction. The way this is implemented now works, but feels kludgy.
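As a sketch, the test configuration described here might look something like the following. Only `distro = "k3s";` comes from this PR; the surrounding attribute names are illustrative, not the actual test file:

```nix
# Hypothetical kubenix test definition using the distro setting from this PR.
# Only the `distro` option is taken from the PR; the rest is a placeholder sketch.
{
  k8s-deployment-k3s = {
    # Select the k3s backend instead of the default NixOS-based cluster.
    distro = "k3s";
    # ... rest of the test definition, identical to the k8s-deployment test ...
  };
}
```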

@offlinehacker
Contributor

offlinehacker commented Jan 31, 2020

Thank you for the initial implementation of the k3s integration. While the k3s/NixOS integration itself is generally good, I think the testing framework needs to be replaced. I was thinking of doing the following:

Use kubetest, a Python framework for testing:

Using a NixOS-specific framework has various limitations, since it is coupled to NixOS; also, the Perl-based NixOS tests are being replaced with Python tests.
Using kubetest would not only allow tests to run without booting NixOS inside a VM, but would also be completely independent of the k8s distro. All the different Python libraries could be used as well (imagine connecting to a database or queue within a test).

Integrate telepresence

Telepresence makes a tunnel to Kubernetes so that a local service behaves like it is running inside the cluster. We would have to set a SOCKS proxy inside the Python process and use LD_PRELOAD for spawned processes. Integrating kubetest with Telepresence would also be nice to have.
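For illustration, with the Telepresence v1 CLI this could look roughly like the following (flags from Telepresence 1.x; `inject-tcp` is the method that rewrites socket calls via LD_PRELOAD, matching the approach described above):

```shell
# Run the local test suite with its traffic proxied into the cluster.
# --method inject-tcp uses an LD_PRELOAD shim for spawned processes.
telepresence --method inject-tcp --run pytest tests/
```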

Docker?

One question: why do we need Docker as an additional layer? Docker already uses containerd underneath and just adds complexity. I am completely OK with running k3s using containerd; or did you have some other issues?

I am currently on vacation for roughly one more week; when I get back I will be able to work on the test refactoring and integrate your work.

@xavierzwirtz
Author

Containerd worked great; I just didn't want to break compatibility with any of your existing tests that use docker commands. Swapping back to containerd is easy: remove `--docker` and switch over to the containerd commands for loading images.
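For reference, the swap amounts to replacing `docker load` with the ctr client bundled in k3s (both command forms are standard; the airgap tarball name is the one k3s releases ship, used here for illustration):

```shell
# With --docker: images are loaded into the Docker daemon.
docker load -i k3s-airgap-images-amd64.tar

# With the default containerd runtime: use the bundled ctr client instead.
k3s ctr images import k3s-airgap-images-amd64.tar
```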

Kubetest and Telepresence look interesting; I don't think I have used kubenix enough yet to give an educated opinion, though. What I really love about how things currently work is not having to worry about any state carrying over between test runs. It has made learning Kubernetes so much easier.

@xavierzwirtz
Author

The more I use the current testing framework, the more frustrated I become with it. The current issue I am having is that k3s does not include all the images needed to use local-path storage classes in its airgap image set. This took a while to discover, though, because the NixOS testing library disables the airgap firewall rules when you use .driver to run the tests with a debug REPL. For a pleasant user experience, first-class support for running tests on a real Kubernetes cluster feels critical. Otherwise there will be constant friction between the Kubernetes distro you use for testing and the one you are deploying to. This is probably just reiterating thoughts you have already had yourself.

@blaggacao

I'd love to see this move forward, as k3s is (not only) my go-to choice for k8s deployments.

@offlinehacker
Contributor

I am not working on this actively, as I currently use other tooling for Kubernetes deployments (Pulumi), but I am happy to review and merge if someone has interest in this project.

@blaggacao

blaggacao commented Feb 21, 2021

I am not working on this actively, as I currently use other tooling for Kubernetes deployments (Pulumi), but I am happy to review and merge if someone has interest in this project.

I see. Would you be open to ceding maintainership/ownership of kubenix to somebody willing to evolve it further?

@offlinehacker
Contributor

Yes, I would be very open to that, and I would be very happy if someone continued the effort 🙂
I would also be open to discussing ideas, as I have some experience building and refactoring this project and with the different issues I've had.

@adrian-gierakowski
Contributor

@offlinehacker did you mean https://www.pulumi.com? May I ask what the reason was for choosing it over kubenix? Thanks!

@adrian-gierakowski
Contributor

Btw I’d be happy to work on developing kubenix as I use it at work and should be able to commit a significant amount of time to it. Happy to hop on a call to discuss this further.

@colemickens

@offlinehacker I'm also curious if you're using Pulumi + Nix in any novel way, or just regular Pulumi+TS to manage resources?

(Re: kubenix: I've always been on the edge of adopting kubenix but wasn't sure whether it had other (non-@offlinehacker) users or a future. Knowing others are interested, even just this much, helps alleviate some of that fear.)

@blaggacao

blaggacao commented Feb 21, 2021

I'm going to go all in on kubenix. It is "brilliant" (quote zimbatm). In my opinion, a generalized Nix (and later Nickel) DSL is strategically superior to a (special-purpose) Pulumi TS-based "DSL" (especially in the context of, for example, divnix/digga#130, where people would want to manage not only k8s but the whole environment).

@blaggacao

Btw I’d be happy to work on developing kubenix as I use it at work and should be able to commit a significant amount of time to it. Happy to hop on a call to discuss this further.

@offlinehacker You could just transfer the repo to @adrian-gierakowski (Adrian would need to rename his current fork first). Would that be an option? I'm currently very engaged in the offline world, but you can definitely expect input similar to what I'm currently doing on divnix/devos.

@offlinehacker
Contributor

offlinehacker commented Feb 21, 2021

@colemickens

I am just using Pulumi, mostly with Kubernetes operators. I gave up on trying to maintain my own Nix-based ecosystem, as I don't see that many benefits. I had ideas to not only build static resources using kubenix but also to have dynamic Kubernetes operators running Nix expressions, but I lost some motivation. I hope someone else can continue these efforts and make it more usable.

Here is a quite advanced Pulumi example I use: https://github.com/xtruder/pulumi-extra/blob/master/resources/k8s/postgres-operator.ts This is something that is not possible with only the static generation kubenix does, as it performs quite a lot of dynamic orchestration.

@blaggacao

blaggacao commented Feb 21, 2021

@offlinehacker Does this operator run within the cluster? I believe anything that goes in the direction of an operator is out of reach for a (declarative) configuration language, while deploying that operator would probably be in scope.

@adrian-gierakowski
Contributor

@blaggacao I've renamed my fork, but maybe it would be better to create a kubenix org? Unfortunately the name seems to be taken. @offlinehacker have you created that org? Btw, I'd be able to start working on this next week. Shall we arrange a call to discuss the direction in which we'd like to take this project?

@blaggacao

Shall we arrange a call to discuss the direction in which we'd like to take this project?

Please set a time that is OK for you; I can adapt completely.

Click the following link to join the meeting:
https://meet.jit.si/kubenix

=====

Click this link to see the dial in phone numbers for this meeting:
https://meet.jit.si/static/dialInInfo.html?room=kubenix

@blaggacao

Maybe it could go under nix-community?

@adrian-gierakowski
Contributor

Maybe it could go under nix-community?

Sounds OK. What do you think, @offlinehacker?

Shall we arrange a call to discuss the direction in which we'd like to take this project?

Please set a time that is OK for you; I can adapt completely.

I'm very flexible as well. @offlinehacker, do you think you'd be able to find some time for this this week or next? If so, I'd defer to you on picking the time for the meeting.

@adrian-gierakowski
Copy link
Contributor

@offlinehacker I’m planning to dedicate 2-3 days next week to working on kubenix and I think it would be really helpful to get some input from you before kicking off. Do you think we could have a chat sometime next Tuesday/Wednesday? Thanks!

@adrian-gierakowski
Contributor

@offlinehacker I understand that you might be busy, so it would be great if you could at least give your blessing for me to post a message on NixOS Discourse announcing that I’m going to work on developing kubenix and asking for feedback from the community regarding the roadmap.

I also really don’t mind in which repo the development continues. The nix-community or kubenix GitHub orgs seem preferable, but I’d be just as happy for the project to stay where it is, as long as I’m added as a maintainer.

Thanks!

@blaggacao blaggacao mentioned this pull request Apr 29, 2021
Closed
@blaggacao

I combined this with #27: #29

@blaggacao

@xavierzwirtz What was the original reason not to use nixpkgs' services.k3s? Did it not exist at the time of writing?
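For context, the nixpkgs module referenced here boils down to something like the following sketch (the `enable` and `role` option names are from the nixpkgs `services.k3s` module; the values shown are illustrative):

```nix
{
  # Minimal single-node k3s server via the nixpkgs module.
  services.k3s = {
    enable = true;
    role = "server"; # or "agent" for worker nodes joining a server
  };
}
```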

@xavierzwirtz
Author

xavierzwirtz commented May 6, 2021 via email

@offlinehacker
Contributor

This repo has been deprecated, since I stopped maintaining it some time ago. There is a fork maintained by @hall available at https://github.com/hall/kubenix that has better documentation and looks like the way forward.
