Update selector label for pod and service #751
Conversation
```diff
   namespace: system
   labels:
-    control-plane: controller-manager
+    control-plane: opendatahub-operator
```
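For context, a minimal sketch of why this matters (the name, namespace, and port here are illustrative assumptions, not the PR's actual manifests): a Service selects pods purely by label, so a generic value like `control-plane: controller-manager`, which many kubebuilder-scaffolded operators share, can accidentally match another operator's pods in the same namespace.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: opendatahub-operator-metrics   # hypothetical name
  namespace: opendatahub               # hypothetical namespace
spec:
  selector:
    # The old value, control-plane: controller-manager, is the kubebuilder
    # default and would also match the pods of any other operator generated
    # from the same scaffold in this namespace. The new value is unique.
    control-plane: opendatahub-operator
  ports:
  - name: metrics                      # hypothetical port name
    port: 8443
    targetPort: 8443
```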
Would we want to take this opportunity to use https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ ?
@astefanutti Are you expecting to delete the existing label `control-plane` and add the common labels, i.e. `app.kubernetes.io/name`, `app.kubernetes.io/part-of`?
@VaishnaviHire @zdtsw what are your thoughts on it?
@zdtsw I'll create a separate issue for adding common labels. Let's merge this PR if the changes look OK.
@ajaypratap003 yes, ideally we would take the opportunity to add the common labels.
As these labels are immutable, the less often they are changed the better; otherwise the deployment name will have to be changed yet again. It's not technically impossible, but it's better to keep the iterations to a minimum.
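For reference, a sketch of what the recommended common labels from the linked Kubernetes page could look like on this operator's Deployment; the values are illustrative assumptions, not part of this PR:

```yaml
metadata:
  labels:
    app.kubernetes.io/name: opendatahub-operator      # assumed value
    app.kubernetes.io/part-of: opendatahub            # assumed value
    app.kubernetes.io/component: controller-manager   # assumed value
    app.kubernetes.io/managed-by: olm                 # assumed value
```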
That's exactly my concern: changing the label can cause a deployment failure.
TBH, I am unsure whether the change in this PR will work.
I'll need to see the test results first.
But how should we handle downstream? Are we going to rename it as well?
QE will need to get involved if any test relies on a check against the deployment, and we also need to ensure monitoring still works after the rename.
In general, observability is not tied to the deployment name. But you're right, it's possible QE has some automation that depends on that name, so any reference to it will have to be updated accordingly. That's another reason why I'd recommend the common labels be added as part of this fix, if that's what we want, to reduce the overhead.
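As an illustration of that point (a sketch assuming the downstream monitoring uses the Prometheus Operator; the names here are hypothetical): a ServiceMonitor scrapes whatever its label selector matches, so renaming the Deployment alone does not break scraping, while changing the labels would.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opendatahub-operator-metrics   # hypothetical name
  namespace: opendatahub               # hypothetical namespace
spec:
  selector:
    matchLabels:
      # Scraping keys off labels, not the Deployment name.
      control-plane: opendatahub-operator
  endpoints:
  - port: metrics                      # hypothetical port name
```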
My personal opinion: if we want to do it, then do it in one PR with all the changes. Normally, splitting into several PRs reduces risk, but if this PR is already changing the name of the deployment, it's preferable to get all the pieces in. Otherwise, when we deal with the follow-up issue, we will need to change the deployment name again.
Absolutely, that'd be my opinion as well.
@asanzgom Can you please confirm whether these label changes will impact the downstream tests?
@ajaypratap003 we do have test cases that make use of the label selector:
```
ods-ci/ods_ci/tasks/Resources/RHODS_OLM/install/oc_install.robot:184: Run Keyword And Continue On Failure FAIL Timeout- ${output} pods found with the label selector ${label_selector} in ${namespace} namespace
ods-ci/ods_ci/tasks/Resources/RHODS_OLM/uninstall/codeflare_uninstall.resource:43: Log Checking pod with label selector: ${selector}
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:43: [Documentation] Deletes an openshift pod by label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:48: ... No PODS present with Label '${label_selector}' in '${namespace}' namespace, Check the label selector and namespace provide is correct and try again
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:55: [Documentation] Check existence of an openshift pod by label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:74: [Documentation] Get the POD name based on namespace and label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:81: [Documentation] Get the Pod status based on namespace and label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:89: ... namespace and label selector and return the
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:90: ... name of all the pod with matching label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:100: FAIL No POD found with the provided label selector in a given namespace '${namespace}'
```
I don't see code to support automatic migration for customers moving from previous versions. This is necessary for, among other cases, the OpenShift AI managed service.
@etirelli OLM is responsible for migrating over to the new deployment during the operator upgrade.
```diff
@@ -3,7 +3,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: controller-manager
+  name: manager
```
Why are we changing the name of the deployment?
@VaishnaviHire you can find more context in operator-framework/operator-lifecycle-manager#952 (comment). Changing the Deployment name is a way to avoid the OLM upgrade failing, because the selector labels are immutable.
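To spell out the constraint (a minimal sketch using the values from this PR; the image is a hypothetical placeholder): `spec.selector` of a Deployment is immutable after creation, so an upgrade that changes `matchLabels` in place is rejected by the API server with a "field is immutable" error. Shipping the Deployment under a new name sidesteps this, since OLM creates the new Deployment and removes the old one during the upgrade.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager                              # new name, so OLM creates it fresh
spec:
  selector:
    matchLabels:
      control-plane: opendatahub-operator    # immutable once the Deployment exists
  template:
    metadata:
      labels:
        control-plane: opendatahub-operator  # must match the selector above
    spec:
      containers:
      - name: manager
        image: quay.io/example/opendatahub-operator:latest  # hypothetical image
```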
Thanks @astefanutti! Since we are changing the deployment name, we need to make sure we update the downstream tests as well.
@ajaypratap003: The following test failed.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: asanzgom.
Closing this PR; this will be handled in epic https://issues.redhat.com/browse/RHOAIENG-3892, as the operator upgrade case needs to be handled.
Description
Resolves: https://issues.redhat.com/browse/RHOAIENG-405
Update the selector label for the pod and service to avoid label overlap with other operators' pods.
How Has This Been Tested?
Follow GitHub issue #230 for testing.
Note: The default selector label has changed in the CodeFlare code, so you will not be able to reproduce the overlap issue right now. But we still need to fix it, as it could cause the same issue with other components in the future.
Merge criteria: