Update selector label for pod and service #751

Closed

@@ -4,6 +4,9 @@ metadata:
   annotations:
     controller-gen.kubebuilder.io/version: v0.9.2
   creationTimestamp: null
+  labels:
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
   name: datascienceclusters.datasciencecluster.opendatahub.io
 spec:
   group: datasciencecluster.opendatahub.io

@@ -4,6 +4,9 @@ metadata:
   annotations:
     controller-gen.kubebuilder.io/version: v0.9.2
   creationTimestamp: null
+  labels:
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
   name: dscinitializations.dscinitialization.opendatahub.io
 spec:
   group: dscinitialization.opendatahub.io

bundle/manifests/features.opendatahub.io_featuretrackers.yaml (3 additions, 0 deletions)

@@ -4,6 +4,9 @@ metadata:
   annotations:
     controller-gen.kubebuilder.io/version: v0.9.2
   creationTimestamp: null
+  labels:
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
   name: featuretrackers.features.opendatahub.io
 spec:
   group: features.opendatahub.io

@@ -3,7 +3,9 @@ kind: Service
 metadata:
   creationTimestamp: null
   labels:
-    control-plane: controller-manager
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
+    control-plane: opendatahub-operator
   name: opendatahub-operator-controller-manager-metrics-service
 spec:
   ports:
@@ -12,6 +14,8 @@ spec:
     protocol: TCP
     targetPort: 8080
   selector:
-    control-plane: controller-manager
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
+    control-plane: opendatahub-operator
 status:
   loadBalancer: {}

@@ -11,4 +11,7 @@ data:
       port: 9443
 kind: ConfigMap
 metadata:
+  labels:
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
   name: opendatahub-operator-manager-config

@@ -2,6 +2,9 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   creationTimestamp: null
+  labels:
+    app.kubernetes.io/name: opendatahub-operator
+    app.kubernetes.io/part-of: opendatahub
   name: opendatahub-operator-metrics-reader
 rules:
 - nonResourceURLs:

bundle/manifests/opendatahub-operator.clusterserviceversion.yaml (10 additions, 5 deletions)

@@ -1654,21 +1654,26 @@ spec:
       serviceAccountName: opendatahub-operator-controller-manager
       deployments:
       - label:
-          control-plane: controller-manager
-        name: opendatahub-operator-controller-manager
+          app.kubernetes.io/name: opendatahub-operator
+          app.kubernetes.io/part-of: opendatahub
+          control-plane: opendatahub-operator
+        name: opendatahub-operator-manager
         spec:
           replicas: 1
           selector:
             matchLabels:
-              control-plane: controller-manager
+              app.kubernetes.io/name: opendatahub-operator
+              app.kubernetes.io/part-of: opendatahub
+              control-plane: opendatahub-operator
           strategy: {}
           template:
             metadata:
               annotations:
                 kubectl.kubernetes.io/default-container: manager
               labels:
-                control-plane: controller-manager
-                name: opendatahub-operator
+                app.kubernetes.io/name: opendatahub-operator
+                app.kubernetes.io/part-of: opendatahub
+                control-plane: opendatahub-operator
             spec:
               containers:
               - args:

config/default/kustomization.yaml (3 additions, 2 deletions)

@@ -12,8 +12,9 @@ namespace: opendatahub-operator-system
 namePrefix: opendatahub-operator-
 
 # Labels to add to all resources and selectors.
-#labels:
-#  someName: someValue
+commonLabels:
+  app.kubernetes.io/name: opendatahub-operator
+  app.kubernetes.io/part-of: opendatahub
 
 resources:
- ../crd
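
As context for the selector changes in the bundle manifests above, here is a minimal, hypothetical kustomize sketch (the resource file name is a placeholder, not from this PR) showing that commonLabels is applied to both metadata.labels and selectors of the rendered resources, as the comment in the file itself notes:

```yaml
# kustomization.yaml (illustrative only)
commonLabels:
  app.kubernetes.io/name: opendatahub-operator
  app.kubernetes.io/part-of: opendatahub
resources:
- deployment.yaml   # hypothetical Deployment manifest
# `kustomize build` adds the two labels above to the Deployment's
# metadata.labels, spec.selector.matchLabels, and
# spec.template.metadata.labels, which is why the selectors change too.
```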

config/default/manager_auth_proxy_patch.yaml (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: controller-manager
+  name: manager

Member
Why are we changing the name of the deployment?

@VaishnaviHire you can find more context in operator-framework/operator-lifecycle-manager#952 (comment). Changing the Deployment name is a way to avoid OLM upgrade failures, because these selector labels are immutable.
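
To make the failure mode concrete, here is a minimal, hypothetical sketch (not from this PR; the image reference is a placeholder) of what happens when only the selector labels change while the Deployment name stays the same:

```yaml
# Existing Deployment created by the previous bundle:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opendatahub-operator-controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager   # spec.selector is immutable once created
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - name: manager
        image: quay.io/opendatahub/opendatahub-operator:latest   # placeholder image
---
# Upgraded Deployment with the same name but a different selector:
# applying this over the existing object is rejected by the API server
# (roughly: spec.selector: Invalid value: ...: field is immutable),
# whereas a new Deployment name results in a fresh object and no in-place update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opendatahub-operator-controller-manager
spec:
  selector:
    matchLabels:
      control-plane: opendatahub-operator
  template:
    metadata:
      labels:
        control-plane: opendatahub-operator
    spec:
      containers:
      - name: manager
        image: quay.io/opendatahub/opendatahub-operator:latest   # placeholder image
```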

Member
Thanks @astefanutti! Since we are changing the deployment name, we need to make sure we update the downstream tests as well.

   namespace: system
 spec:
   template:

config/manager/manager.yaml (8 additions, 5 deletions)

@@ -8,22 +8,25 @@ metadata:
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: controller-manager
+  name: manager
   namespace: system
   labels:
-    control-plane: controller-manager
+    control-plane: opendatahub-operator

Contributor Author
@astefanutti Are you expecting to delete the existing label control-plane and add the common labels, i.e. app.kubernetes.io/name and app.kubernetes.io/part-of?

@VaishnaviHire @zdtsw what are your thoughts on this?

Contributor Author
@zdtsw I'll create a separate issue for adding the common labels. Let's merge this PR if the changes look OK.

@ajaypratap003 yes, ideally we would take the opportunity to add common labels.

As these labels are immutable, the less they are changed the better; otherwise the deployment name will have to be changed again. It's not technically impossible, but it might be better to keep the iterations to a minimum.

Member
That's exactly my concern: changing the label can cause a deployment failure.
TBH, I am unsure the change in this PR will work.
I will need to see the test results first.

But how should we handle downstream? Are we going to rename it as well?
QE will need to get involved if any test checks the deployment, and we also need to ensure monitoring still works after the rename.

In general, observability is not tied to the deployment name. But you're right, it's possible QE has some automation that depends on that name, so any reference to it will have to be updated accordingly. That's another reason why I'd recommend the common labels be added as part of this fix, if that's what we want, to reduce the overhead.

Member
My personal opinion: if we want to do it, then do it in one PR with all the changes. Normally splitting into several PRs is about reducing risk, but since this is already changing the name of the deployment, it is preferable to get all the pieces in; otherwise, when we deal with the follow-up issue, we will need to change the deployment name again.

Absolutely, that'd be my opinion as well.

Contributor Author
@asanzgom Can you please confirm whether these label changes will impact the downstream tests?

Contributor
@ajaypratap003 we do have test cases that make use of a label selector:

ods-ci/ods_ci/tasks/Resources/RHODS_OLM/install/oc_install.robot:184:        Run Keyword And Continue On Failure    FAIL    Timeout- ${output} pods found with the label selector ${label_selector} in ${namespace} namespace
ods-ci/ods_ci/tasks/Resources/RHODS_OLM/uninstall/codeflare_uninstall.resource:43:    Log  Checking pod with label selector: ${selector}
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:43:    [Documentation]    Deletes an openshift pod by label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:48:    ...    No PODS present with Label '${label_selector}' in '${namespace}' namespace, Check the label selector and namespace provide is correct and try again
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:55:    [Documentation]    Check existence of an openshift pod by label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:74:    [Documentation]    Get the POD name based on namespace and label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:81:    [Documentation]    Get the Pod status based on namespace and label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:89:    ...    namespace and label selector and return the
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:90:    ...    name of all the pod with matching label selector
ods-ci/ods_ci/tests/Resources/Page/OCPDashboard/Pods/Pods.robot:100:         FAIL    No POD found with the provided label selector in a given namespace '${namespace}'
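
The test cases above look pods up by label selector, so after this change they would need to target the new labels. A minimal sketch of the selector they could switch to, assuming the labels introduced in this PR (illustrative only, not an actual ods-ci change):

```yaml
# Matches the operator pod after this PR; the old selector
# control-plane=controller-manager no longer matches anything.
# Equivalent CLI-style selector: control-plane=opendatahub-operator
# (or app.kubernetes.io/name=opendatahub-operator).
selector:
  matchLabels:
    app.kubernetes.io/name: opendatahub-operator
    app.kubernetes.io/part-of: opendatahub
    control-plane: opendatahub-operator
```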

 spec:
   selector:
     matchLabels:
-      control-plane: controller-manager
+      app.kubernetes.io/name: opendatahub-operator
+      app.kubernetes.io/part-of: opendatahub
+      control-plane: opendatahub-operator
   replicas: 1
   template:
     metadata:
       annotations:
         kubectl.kubernetes.io/default-container: manager
       labels:
-        control-plane: controller-manager
-        name: opendatahub-operator
+        app.kubernetes.io/name: opendatahub-operator
+        app.kubernetes.io/part-of: opendatahub
+        control-plane: opendatahub-operator
     spec:
       securityContext:
         runAsNonRoot: true

config/rbac/auth_proxy_service.yaml (2 additions, 2 deletions)

@@ -2,7 +2,7 @@ apiVersion: v1
 kind: Service
 metadata:
   labels:
-    control-plane: controller-manager
+    control-plane: opendatahub-operator
   name: controller-manager-metrics-service
   namespace: system
 spec:
@@ -12,4 +12,4 @@ spec:
     protocol: TCP
     targetPort: 8080
   selector:
-    control-plane: controller-manager
+    control-plane: opendatahub-operator

tests/e2e/odh_manager_test.go (1 addition, 1 deletion)

@@ -16,7 +16,7 @@ func testODHOperatorValidation(t *testing.T) {
 
 func (tc *testContext) testODHDeployment(t *testing.T) {
 	// Verify if the operator deployment is created
-	require.NoErrorf(t, tc.waitForControllerDeployment("opendatahub-operator-controller-manager", 1),
+	require.NoErrorf(t, tc.waitForControllerDeployment("opendatahub-operator-manager", 1),
 		"error in validating odh operator deployment")
 }