Add drenv integration tests using podman driver #679
Conversation
Force-pushed from 900b8f7 to b2a0e50
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
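# Workaround for a minikube bug (kubernetes/minikube#15593): pre-create the
# profiles directory before the first start.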
mkdir "$HOME/.minikube/profiles" |
This is a workaround for a minikube bug; I will report it later.
Minikube bug: kubernetes/minikube#15593
Force-pushed from dda4592 to 88c987d
Coverage report before this change:
Coverage report with this change:
Instead of hard-coding the "kvm2" driver, this will allow creating clusters using different drivers. One interesting case is the example deployment, which has no reason to use virtual machines. Another interesting case is the hub clusters, which should work with the podman driver. Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
If not set, let minikube pick a container runtime instead of defaulting to `containerd`. This is more consistent with other options. Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
The example deployment does not need virtual machines. With this change we may be able to start it in GitHub Actions. Use the `cri-o` runtime, recommended by the minikube docs when using the `podman` driver [1]. [1] https://minikube.sigs.k8s.io/docs/drivers/podman/ Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
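As a rough illustration (a sketch under assumptions, not the drenv implementation; the function name and defaults below are made up), creating a cluster with the podman driver and cri-o runtime boils down to a minikube invocation like this:

# Hypothetical sketch: how a configurable driver maps to a minikube command.
# --profile, --driver and --container-runtime are standard minikube flags.
import subprocess

def start_cluster(profile, driver="podman", container_runtime="cri-o"):
    # With the podman driver, the minikube docs recommend the cri-o runtime.
    subprocess.run(
        [
            "minikube", "start",
            "--profile", profile,
            "--driver", driver,
            "--container-runtime", container_runtime,
        ],
        check=True,
    )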
Add a minimal test environment for running system tests on GitHub Actions. The environment uses one tiny cluster to make the tests less likely to fail in a constrained CI setup. Add minikube and kubectl to the drenv test job in GitHub to allow running the new environment during the tests. Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
Add a reusable `tmpenv` fixture that runs the test environment for the duration of a test session. Using the live test cluster, we run integration tests for commands like kubectl. Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
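A minimal sketch of such a session-scoped fixture, assuming the drenv start/stop subcommands and a hypothetical environment file path (the real fixture lives in the drenv test suite):

# Sketch of a session-scoped pytest fixture; names and paths are assumptions.
import subprocess
import pytest

ENV_FILE = "envs/test.yaml"  # hypothetical path to the tiny test environment

@pytest.fixture(scope="session")
def tmpenv():
    # Start the test environment once, before the first test that needs it.
    subprocess.run(["drenv", "start", ENV_FILE], check=True)
    yield ENV_FILE
    # Stop the environment when the test session ends.
    subprocess.run(["drenv", "stop", ENV_FILE], check=True)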
Add integration tests for the kubectl module, using the tmpenv fixture. Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
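Such a test roughly has the following shape (illustrative only; the real tests exercise the drenv kubectl wrapper module rather than calling the kubectl CLI directly):

# Illustrative integration test using the tmpenv fixture; not the actual tests.
import subprocess

def test_cluster_is_reachable(tmpenv):
    # The fixture guarantees a running cluster, so kubectl can talk to it.
    result = subprocess.run(
        ["kubectl", "version", "--output=json"],
        check=True,
        capture_output=True,
    )
    assert result.stdout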
First enable coverage for child processes, so we get coverage reports from the drenv tool and from scripts run by the drenv tool. This is a bit tricky: it requires enabling coverage for child processes using a .pth file and using coverage parallel mode, combining output from multiple processes. With this change we now report all code running during the tests:

$ make coverage
python3 -m coverage report
Name                  Stmts   Miss  Cover
-----------------------------------------
drenv/__init__.py        77     48    38%
drenv/__main__.py       109     23    79%
drenv/clusteradm.py      41     41     0%
drenv/commands.py        93      2    98%
drenv/envfile.py         80      0   100%
drenv/kubectl.py         31      0   100%
-----------------------------------------
TOTAL                   431    114    74%

Part-of: RamenDR#677 Signed-off-by: Nir Soffer <[email protected]>
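This uses coverage.py's documented subprocess-measurement mechanism: a .pth file in site-packages starts coverage in every Python process, and the parallel data files are merged afterwards with `coverage combine`. A sketch of the .pth content (the file name and exact setup are assumptions):

# coverage.pth, installed into site-packages. When COVERAGE_PROCESS_START
# points to a coverage config with "parallel = true", every Python process
# writes its own data file, and "coverage combine" merges them for the report.
import coverage; coverage.process_startup()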
container_runtime to minikube default

The test fixture uses the drenv tool to start and stop the environment, covering most of the code in the tool.
Fixes #677