Add recommended k8s labels #1327
Conversation
/cc @vdemeester
My plan was to test that the About page still works OK for picking up versions on Docker Desktop and OpenShift 4.3, with Tekton installed through the release asset and the OpenShift Pipelines operator.
Versions (for Pipelines) to be picked up: 0.10.2, 0.11.3, 0.12.0.
Docker Desktop: installed Pipelines 0.10.2, failed to get the version. Labels present on the Tekton controller:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/name: tekton-pipelines
tekton.dev/release: v0.10.2
I noticed we have
// NOTE: *.tekton.dev/release and version labels are deprecated in favor of app.kubernetes.io/version
// They are kept here for backward compatibility and should be removed at some point
labelsToCheck := []string{versionLabel, application + ".tekton.dev/release", "version"}
🤔
@eddycharly please check that the versions of Pipelines are still being picked up OK outside of OpenShift. If you don't have OpenShift access I'm happy to check nothing breaks there for you, thanks.
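For illustration only, here is a minimal, self-contained Go sketch of how a lookup limited to that labelsToCheck list behaves against the 0.10.2 controller labels above; the helper name lookupVersion and the "pipeline" application value are assumptions, not the dashboard's actual code.

package main

import "fmt"

// lookupVersion is a hypothetical helper mirroring the labelsToCheck idea above.
func lookupVersion(resourceLabels map[string]string, application string) string {
	versionLabel := "app.kubernetes.io/version"
	// Deprecated labels kept for backward compatibility, as in the snippet above.
	labelsToCheck := []string{versionLabel, application + ".tekton.dev/release", "version"}
	for _, key := range labelsToCheck {
		if v, ok := resourceLabels[key]; ok && v != "" {
			return v
		}
	}
	return ""
}

func main() {
	// Labels reported on the Pipelines 0.10.2 controller (see above).
	controllerLabels := map[string]string{
		"app.kubernetes.io/component": "controller",
		"app.kubernetes.io/name":      "tekton-pipelines",
		"tekton.dev/release":          "v0.10.2",
	}
	// The bare "tekton.dev/release" key is not in labelsToCheck, so nothing matches.
	fmt.Printf("version found: %q\n", lookupVersion(controllerLabels, "pipeline"))
}

Run as written, this prints an empty version, which matches the "failed to get version" result reported for 0.10.2 on Docker Desktop.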
Thanks for testing it! It looks like the tekton.dev/release label is missing from the list. I would be happy if you can test this on OpenShift as I don't know much about it.
Possibly, it's worth giving it a try, yeah. Once you've got something we know works OK for the non-OpenShift environments, I'm happy to go through the matrix 😄
@a-roberts I added support for the tekton.dev/release label.
Thanks, I won't have time to test this in depth today, but if you can verify what you can (e.g. on Minishift) that'd really help. The rest of the PR's looking good; we'll want to make sure nothing breaks for Triggers (0.3.1 and later) either 👍
I tested it successfully with:
I will try to test with Minishift over the weekend if I have some free time.
Thank you! No worries if you can't, I'll give it a good go on OpenShift on Monday anyway. That's great 😄
It worked with Minishift for:
I fixed an issue with Pipelines 0.10.0 (strangely the issue didn't show up on k8s). I couldn't test other versions as they require a k8s 1.15 cluster.
Had a go with the latest operator for installing Tekton on OpenShift (openshift-pipelines-operator) today. devel appears as the version for both the dev channel and canary releases; here's the pipeline controller yaml so you can see the label used.
The same thing happens with dev-channel or canary, so we could augment this PR to check for pipeline.tekton.dev/release as well, or just labels.version. @eddycharly you're welcome to split this into separate PRs if you'd prefer (labelling our Dashboard resources accordingly, versus the lookup changes). For Triggers, it's installing 0.2.1 currently, which has no awesome labels for us to look up, but I'll assume it'll follow the same pattern (@vdemeester ^^)
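Purely as an illustration of the augmentation mentioned above (and not what the PR ended up doing once the lookup changes were split out), the list could be extended along these lines, reusing the versionLabel and application variables from the earlier snippet:

labelsToCheck := []string{
	versionLabel,                        // app.kubernetes.io/version (preferred)
	application + ".tekton.dev/release", // existing deprecated pattern
	"pipeline.tekton.dev/release",       // label seen with the openshift-pipelines-operator install
	"tekton.dev/release",                // bare label on the 0.10.2 controller above
	"version",                           // deprecated
}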
Boouh, we need to clean this up, but keeping the backward compatibility is a mess 😢 Alright, I will split the PRs, it should help moving forward.
Agreed, thanks 😸
@a-roberts I removed the lookup-related changes, the PR should be all about labels now.
One question
template:
  metadata:
    name: tekton-dashboard
    labels:
      app.kubernetes.io/name: dashboard
      app.kubernetes.io/component: dashboard
      app.kubernetes.io/instance: default
Why default for instance and not dashboard? All other changes look good.
The app.kubernetes.io/instance label shouldn't express the application itself but the instance of the application; I feel like these manifests express the default deployment of the dashboard application.
Also it's more consistent: I did the same thing for pipelines and triggers (setting app.kubernetes.io/instance to default). I guess it will be easier to remember when trying to look up a resource.
For example, when deploying with Helm I set the value to the release name: https://github.com/tektoncd/experimental/pull/528/files#diff-718e9764773438f64515998a5c46e5a1R50
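As a rough sketch of that lookup convenience (assumed values, not code from this repository): with the instance label fixed to default in these manifests, or set to the Helm release name for a chart install, a label selector for a given instance is easy to build.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// "default" for the plain manifests; a Helm install would substitute the release name.
	selector := labels.Set{
		"app.kubernetes.io/name":     "dashboard",
		"app.kubernetes.io/instance": "default",
	}
	// Prints a selector string usable with kubectl -l or client-go ListOptions,
	// e.g. app.kubernetes.io/instance=default,app.kubernetes.io/name=dashboard
	fmt.Println(selector.String())
}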
Got it, thanks for the clarifications
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: a-roberts. The full list of commands accepted by this bot can be found here. The pull request process is described here.
This PR adds the recommended k8s labels to resources.
Implementation of proposal tektoncd/pipeline#2497
Related PR in pipelines: tektoncd/pipeline#2501
Base: added labels and updated selectors for all resources in /base
Go: changed Go code in /pkg/endpoints/dashboard.go
NOTES: /pkg/endpoints/cluster.go is not covered by this PR. The code there is bound to ingresses, which is not part of this repository; I will address this code in another PR.
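As a rough, hypothetical illustration of what "added labels and updated selectors" means for a /base resource (values taken from the review snippet above, not the exact manifest), the recommended labels appear on the object, on the pod template, and in the selector so they all stay in agreement:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	recommended := map[string]string{
		"app.kubernetes.io/name":      "dashboard",
		"app.kubernetes.io/component": "dashboard",
		"app.kubernetes.io/instance":  "default",
	}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "tekton-dashboard", Labels: recommended},
		Spec: appsv1.DeploymentSpec{
			// The selector has to match the pod template labels, which is why the
			// selectors were updated together with the new labels.
			Selector: &metav1.LabelSelector{MatchLabels: recommended},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: recommended},
			},
		},
	}
	fmt.Println(d.Name, d.Spec.Selector.MatchLabels)
}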