Ingress v1 #854
Conversation
@jkremser, nice change. Does it make sense to consolidate the import identifiers netv1 and v1 into one?
v1 "k8s.io/api/networking/v1"
netv1 "k8s.io/api/networking/v1"
Signed-off-by: Jirka Kremser <[email protected]>
done, everything should be consistent now
@k0da another part of this update would also be bumping the version of external-dns. We don't use Ingresses or Services for communicating the DNS records to ext-dns (we use CRs), but I can imagine there might be some expectations baked into the external-dns binary; however, the tests are green with the current image. Would it be hard to rebase our changes on the latest external-dns? What was the repo for this image?

Side note: we might use a similar compatibility matrix as ext-dns has in their readme: https://github.com/kubernetes-sigs/external-dns/pkgs/container/external-dns%2Fexternal-dns#kubernetes-version-compatibility
@jkremser We can simply switch to the latest released upstream, as the change was finally merged a while ago.
Overall looks good. Please check my comments / questions.
README.md (Outdated)
@@ -37,9 +37,12 @@ spec:
      http:
        paths:
        - backend:
Is - backend: supposed to be empty?
I wouldn't care, but in README.md, for easier orientation, we can leave path as the first item: https://github.com/k8gb-io/k8gb/pull/854/files#diff-524f9815d92e58f96cf2ca5d5cb3b2457e7dbf58b7b16d05b10bb2551eb1d041R30-R35
good catch, should be fixed now
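For reference, a networking.k8s.io/v1 rule with path listed first could look roughly like this (a sketch with placeholder service name and port, not the README's exact content):

```yaml
http:
  paths:
    - path: /
      pathType: Prefix        # pathType is required in networking.k8s.io/v1
      backend:
        service:
          name: my-service    # placeholder service name
          port:
            number: 80        # placeholder port
```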
@@ -5,6 +5,7 @@ icon: https://www.k8gb.io/assets/images/icon-192x192.png
 type: application
 version: v0.8.8
 appVersion: v0.8.8
+kubeVersion: ">= 1.19.0-0"
Maybe I didn't understand properly. In the PR description you mentioned "Since [email protected] the Ingress with version extensions/v1beta1 nor networking.k8s.io/v1beta1 are no longer supported". Is >= 1.19.0-0 expected in the Helm chart? Shouldn't it be >= 1.22.0-0?
Ingress@v1 was introduced in 1.19 and v1beta1 was deprecated in that same version; since 1.22, v1beta1 is gone, so there was an overlap. Check this compatibility matrix, it pretty much holds for us as well.
In our case, it's gonna be:
| k8gb | <= 0.8.x | >= 0.9.0 (or 1.0.0) |
|---|---|---|
| Kubernetes <= 1.18 | ✅ | ❌ |
| Kubernetes >= 1.19 and <= 1.21 | ✅ | ✅ |
| Kubernetes >= 1.22 | ❌ | ✅ |
thx for the clarification, @jkremser!
@@ -62,7 +62,7 @@ func (r *GslbReconciler) getServiceHealthStatus(gslb *k8gbv1beta1.Gslb) (map[str
   service := &corev1.Service{}
   finder := client.ObjectKey{
     Namespace: gslb.Namespace,
-    Name: path.Backend.ServiceName,
+    Name: path.Backend.Service.Name,
Can it happen that path.Backend.Service is nil? It might be missing in the ingress spec, for instance...
Right, the backend itself can't be nil because it doesn't have the "omitempty" tag, but path.Backend.Service can be nil (it's path.Service XOR path.Resource, and we support only the Service ref).
If the service is there, then the Name has to be specified (again, no "omitempty" tag). But to stay on the safe side, I can be paranoid and check everything :)
I've added some checks and we log a warning msg if the spec is malformed, also marking the svc health as k8gbv1beta1.NotFound ... in the sense of: "the info the user gave us was not sufficient to find the service".
Hopefully, it won't be too spammy in the logs. It's going to show up in each reconciliation, but on the other hand, if we "swallowed" the error, it would be even worse.
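A minimal sketch of that kind of defensive check, written as a standalone helper (illustrative only; the package, function name, and error handling are assumptions, not the code merged in this PR):

```go
package gslb // hypothetical package name

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
)

// backendServiceName returns the Service name referenced by an Ingress@v1 path
// backend, or an error when the backend is not a usable Service reference
// (e.g. it points to a Resource instead, or the name is empty). The caller can
// then log a warning and mark the service health as NotFound.
func backendServiceName(path netv1.HTTPIngressPath) (string, error) {
	if path.Backend.Service == nil || path.Backend.Service.Name == "" {
		return "", fmt.Errorf("ingress path %q has no usable service backend", path.Path)
	}
	return path.Backend.Service.Name, nil
}
```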
OK, let's keep it as it is and we will see afterwards. A possible candidate for the validation would be depresolver_spec.
Signed-off-by: Jirka Kremser <[email protected]>
lgtm
Fixes #847 (we can't deploy on [email protected] and newer)
All the details are described in the issue ^
This is an API breaking change, because the ingress@v1 resource has a slightly different structure and we use that sub-tree in our own gslb CRD.
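For illustration, the backend sub-tree changed roughly like this between the two API versions (a sketch of the upstream Ingress API with a placeholder service; not the exact sub-tree stored in the Gslb CRD):

```yaml
# networking.k8s.io/v1beta1 (old shape)
backend:
  serviceName: my-service   # placeholder name
  servicePort: 80
---
# networking.k8s.io/v1 (new shape)
backend:
  service:
    name: my-service        # placeholder name
    port:
      number: 80
```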
Nginx ingress-controller was updated because version 3 only works with the old ingress (v1beta1).
The idea is that for Ingress@v1beta1 (Kubernetes version 1.21 and lower) you use k8gb in version 0.8.8 or lower, and for Ingress@v1 (since [email protected]) you use k8gb > 0.8.8.