fix: move crds to templates #235
Conversation
I updated the commit and moved the CRDs to their own directory, as stated in the docs.
Hey @m1pl, thanks for the contribution! With Helm 3 this may in fact no longer be needed :)
I cannot add this as a suggestion, since you only moved the CRD files around, but could you update the apiVersion in the files as well?
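For context, a minimal sketch of the kind of change being requested here: the CRD manifests move from the deprecated `apiextensions.k8s.io/v1beta1` API to `apiextensions.k8s.io/v1`. The CRD name below is an assumption based on the Oathkeeper Rule resource mentioned later in the thread, not a quote from the chart's actual files.

```yaml
# Sketch only: illustrates the requested apiVersion bump; the metadata.name
# is assumed from the Oathkeeper Maester context in this thread.
apiVersion: apiextensions.k8s.io/v1   # previously apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: rules.oathkeeper.ory.sh
```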
@Demonsthere done
LGTM @aeneasr
Do not merge yet, as the CRDs are broken. Just changing the version as suggested doesn't work. I converted it to a draft.
Could you elaborate? I was able to install and operate on them without issues.
@Demonsthere I get a bunch of errors like these:
I see, I got it too after creating a 1.19 or newer k8s cluster. Some API fields in kind: CustomResourceDefinition were updated, and this is the result. Changing the versions to the following should be enough:

```yaml
versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            x-kubernetes-preserve-unknown-fields: true
          status:
            x-kubernetes-preserve-unknown-fields: true
```
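For readers following along, here is a minimal sketch of how that `versions` block sits inside a complete `apiextensions.k8s.io/v1` manifest. The group, names, and scope are assumptions based on the Oathkeeper Rule CRD discussed in this thread, not the chart's actual file contents.

```yaml
# Minimal sketch only: group, names, and scope are assumed for illustration.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rules.oathkeeper.ory.sh
spec:
  group: oathkeeper.ory.sh
  names:
    kind: Rule
    listKind: RuleList
    plural: rules
    singular: rule
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              x-kubernetes-preserve-unknown-fields: true
            status:
              x-kubernetes-preserve-unknown-fields: true
```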
The schema has changed from v1beta1 to v1; I updated the CRDs to match it. I can now install both in my cluster. Delete the old ones first and try it out. I didn't change the schemas themselves, just the structure, to match the new version.
I think we confused versions here. We want to update
I see that the version update may not be as easy as I assumed; maybe it will be better to cut the scope here and stick to moving the CRDs? I will create a new issue regarding updating the version, so we can focus on that there.
@Demonsthere done
Is there any reason why the CRDs are not installed directly, but through a Job? The problem with the current setup is that it breaks cluster automation (e.g. Flux). If someone adds, for example, Oathkeeper with Maester and some Rules to the Flux repository, the deployment fails, because the Rules are applied before the Jobs have run and the CRDs are available. If the CRDs are available directly in the Helm chart's templates, the correct order can be ensured.
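To illustrate the ordering problem, here is a minimal sketch of a Rule that a GitOps tool like Flux might apply alongside the chart. The field names below follow the Oathkeeper Maester Rule CRD as an assumption and are not taken from this repository; the point is only that applying such a resource before its CRD exists is rejected by the API server.

```yaml
# Sketch only: field names are assumed for illustration. If the CRD is created
# later by a Job rather than by the chart itself, applying this resource first
# typically fails with an error like:
#   no matches for kind "Rule" in version "oathkeeper.ory.sh/v1alpha1"
apiVersion: oathkeeper.ory.sh/v1alpha1
kind: Rule
metadata:
  name: example-rule
spec:
  upstream:
    url: http://example-service.default.svc.cluster.local
  match:
    url: http://<[^/]+>/example/<.*>
    methods:
      - GET
  authenticators:
    - handler: noop
  authorizer:
    handler: allow
  mutators:
    - handler: noop
```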