Introduce a structured integration tests approach #4745
Comments
I agree we could benefit from having a well-defined structure that every test has to follow 👍 This issue made me wonder whether we could completely replace our integration tests with envtests, which are cheaper time- and resource-wise. That could let us run integration tests without an actual Kind cluster. Food for thought:
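For illustration, a minimal sketch of what an envtest-based test could look like, assuming controller-runtime's envtest package; the CRD directory path here is hypothetical:

```go
package integration_test

import (
	"testing"

	"k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// TestWithEnvtest spins up a local kube-apiserver and etcd via envtest so that
// controller code can be exercised without a full Kind cluster.
func TestWithEnvtest(t *testing.T) {
	testEnv := &envtest.Environment{
		// CRDDirectoryPaths would point at the CRDs the controllers need,
		// e.g. Gateway API CRDs (the path below is hypothetical).
		CRDDirectoryPaths: []string{"../../config/crd"},
	}

	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("failed to start envtest environment: %v", err)
	}
	t.Cleanup(func() {
		if err := testEnv.Stop(); err != nil {
			t.Logf("failed to stop envtest environment: %v", err)
		}
	})

	// A client talking to the in-memory API server; controllers under test
	// would be wired against the same rest.Config.
	k8sClient, err := client.New(cfg, client.Options{Scheme: scheme.Scheme})
	if err != nil {
		t.Fatalf("failed to create client: %v", err)
	}
	_ = k8sClient // actual assertions against the API server would go here
}
```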
I like this idea 👍 One thing to think about is whether we could use this approach for all tests (I'm not sure yet, but I doubt it) or just a part of them. For instance, in places where we'd test resource status conditions etc., controllers require particular things to happen, like
There might be more and/or more complex conditions for controllers to continue reconciliation of resources into Kong Gateway configuration. I'm not saying this is a blocker, but it is something to keep in mind: we'd be responsible for mocking this (or rather doing it manually) in code, similarly to how we do it in unit tests (like we do in the operator's
That's true and a valid point. As an implementation detail, I'd prefer us to pursue (at some point) using something like https://github.com/kubernetes-sigs/e2e-framework to
Just as I expected, capturing the whole idea in a GitHub issue might not be enough, and a design doc of sorts would be helpful 😅
Problem statement
With time, KIC has accrued a considerable amount of tests, of which the integration tests take up most of the CI time required for a PR to pass its checks. These tests consistently take over 30 minutes to run.
Moreover, the tests are not well structured or well defined: it's hard to know where a particular feature is tested and what each failure means.
Manifests used in test code are not easily translatable into manifests that users could apply, hence it's hard to track what's tested versus what users actually use.
Proposed solution
This issue proposes an alternative approach to structuring the tests. It might also be applied to other types of tests (e.g. e2e tests), but for now it focuses solely on integration tests to maximize the impact on test runtime and developer experience.
The approach proposed here might be broken down into smaller issues and should be carried out incrementally, as rewriting the whole testing harness at once would be an overwhelming endeavour.
The approach is as follows:
Use a framework similar to the one used by gateway-api's conformance tests.
Meaning, define a structure similar to gateway-api's suite.ConformanceTestSuite which will hold fields like the ones sketched below. This way tests should have easy-to-follow descriptions of what they are testing and manifests attached to them (listed in the "suite struct") which could be applied directly to a cluster outside of tests.
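For illustration only, a minimal sketch of what such a suite could look like, loosely modeled on gateway-api's suite.ConformanceTestSuite; all type and field names here are hypothetical, not an existing KIC API:

```go
package integrationsuite

import (
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// IntegrationTestSuite is a hypothetical analogue of gateway-api's
// suite.ConformanceTestSuite: it carries everything a test needs so that
// individual tests stay declarative.
type IntegrationTestSuite struct {
	Client         client.Client // client for the cluster under test
	ControllerName string        // ingress class / controller under test
	BaseManifests  string        // manifests applied once for the whole suite
	Cleanup        bool          // whether to delete test resources afterwards
	SkipTests      []string      // tests excluded from this run
}

// IntegrationTest mirrors gateway-api's ConformanceTest: a short name,
// a human-readable description, the manifests it applies (usable outside
// of tests too), and the test body itself.
type IntegrationTest struct {
	ShortName   string
	Description string
	Manifests   []string // paths to YAML that users could also kubectl apply
	Parallel    bool
	Test        func(*testing.T, *IntegrationTestSuite)
}

// Run executes each test as a subtest of the shared suite.
func (s *IntegrationTestSuite) Run(t *testing.T, tests []IntegrationTest) {
	for _, tc := range tests {
		t.Run(tc.ShortName, func(t *testing.T) {
			if tc.Parallel {
				t.Parallel()
			}
			// Manifest application is elided here; gateway-api uses an
			// Applier helper for this purpose.
			tc.Test(t, s)
		})
	}
}
```

Because each test declares its manifests as plain files, the same YAML can be applied to a cluster by hand, which addresses the traceability concern from the problem statement.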
Split tests that we currently have into 2 buckets (maybe more if need be):
Those that can share a controller manager instance and Kong Gateway. These tests will each run in their own namespace (as is currently done).
NOTE: This might not be the desired isolation level, as it requires developers to judge whether a test can break other tests running in parallel. If that turns out to be the case, we might consider introducing (as a replacement for this bucket or as a separate one) a third bucket for tests relying on e.g. the --watch-namespace flag, running separate managers while sharing a Kong Gateway instance by utilizing Kong Workspaces (note: this is an EE feature).
Those that cannot share a controller manager instance and Kong Gateway. These tests will require their own manager being started and their own Kong Gateway deployed (via helm/ktf). They are going to be predominantly tests that exercise broken configuration or that require locking a global resource like the UDP proxy port.
The former will be placed in one test directory, as is done in the gateway-api conformance tests, while the latter will have their own separate directory - possibly shared - but with separate setup (and teardown) logic that runs a dedicated manager instance and deploys a dedicated Kong instance; a sketch of such an isolated test follows below.
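A rough sketch of what a test in the second bucket could look like; the helper functions here are hypothetical placeholders for whatever shared setup code (helm/ktf based) that bucket would use, not existing KIC test helpers:

```go
package isolated_test

import (
	"context"
	"testing"
	"time"
)

// deployKongGateway is a hypothetical helper standing in for a helm/ktf based
// deployment of a dedicated Kong Gateway for this test only.
func deployKongGateway(ctx context.Context, t *testing.T) (adminURL string, cleanup func()) {
	t.Helper()
	// helm install / ktf deployment would happen here
	return "http://localhost:8001", func() {}
}

// startManager is a hypothetical helper that runs the controller manager
// in-process, pointed at the dedicated Kong admin API.
func startManager(ctx context.Context, t *testing.T, adminURL string) (stop func()) {
	t.Helper()
	// manager startup with test-specific flags would happen here
	return func() {}
}

// TestUDPProxyPort needs exclusive access to a global resource (the UDP proxy
// port), so it gets its own Kong Gateway and manager rather than sharing the
// suite-wide ones.
func TestUDPProxyPort(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	adminURL, stopKong := deployKongGateway(ctx, t)
	defer stopKong()

	stopManager := startManager(ctx, t, adminURL)
	defer stopManager()

	// assertions against the dedicated Kong instance would go here
}
```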
Migrate tests that are currently defined in test/integration one by one to run as part of the new framework.
The above process should be done iteratively to avoid conflicts arising from the long development time expected for this effort and to prevent massive review headaches.
Prior art, issues
Related: #4099. Hasn't picked up much steam though.