[BUG] CRs Created Outside Operator Namespace #214

Closed
M4C4R opened this issue Aug 1, 2024 · 4 comments · Fixed by #218
Labels
Enhancement New feature or request

Comments

@M4C4R

M4C4R commented Aug 1, 2024

Describe the bug
CRs (custom resources) can be created outside the operator namespace, but the operator lacks permissions to fulfill the requests because it attempts to create the jobs/services within the namespace of the CR.

Exception occurred during Job creation: Failure executing: PATCH at: ... Message: jobs.batch "demo-test-master" is forbidden: User "system:serviceaccount:locust:locust-locust-k8s-operator" cannot patch resource "jobs" in API group "batch" in the namespace "...".

Exception occurred during Service deletion: Failure executing: DELETE at: ... Message: services "demo-test-master" is forbidden: User "system:serviceaccount:locust:locust-locust-k8s-operator" cannot delete resource "services" in API group "" in the namespace "...".

To Reproduce
Steps to reproduce the behaviour:

  1. Deploy the operator in namespace X
  2. Create a locust test in namespace Y (see the sketch after these steps)
  3. Observe logs of operator
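
For concreteness, here is a minimal sketch of such a CR deployed outside the operator's namespace. The apiVersion, kind, and spec fields below are assumptions and may not match the operator's actual CRD schema; the namespace names are placeholders:

# Hypothetical LocustTest CR in namespace Y ("team-apps"), while the operator
# runs in namespace X ("locust"). Field names are illustrative only.
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: demo-test
  namespace: team-apps        # not the operator's namespace
spec:
  image: locustio/locust:latest
  workerReplicas: 3
  configMap: demo-test-map    # ConfigMap holding the locustfile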

Expected behaviour
I expected the jobs/services to be created within the namespace of the operator, not of the CR.

This is because granting the following at the cluster level is excessive:

  - apiGroups: [ '*' ]
    resources:
      - services
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: [ '*' ]
    resources:
      - jobs
    verbs:
      - get
      - list
      - create
      - update
      - delete
      - patch
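
By contrast, a namespace-scoped Role confined to the operator's own namespace would cover the same verbs without cluster-wide reach. The following is only a sketch for comparison; the namespace and service-account names are taken from the error messages above and may differ in other installations:

# Sketch of an equivalent namespace-scoped Role and RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: locust-k8s-operator
  namespace: locust
rules:
  - apiGroups: [ '' ]        # core API group (Services)
    resources: [ services ]
    verbs: [ get, list, create, update, delete ]
  - apiGroups: [ batch ]     # Jobs live in the batch API group
    resources: [ jobs ]
    verbs: [ get, list, create, update, delete, patch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: locust-k8s-operator
  namespace: locust
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: locust-k8s-operator
subjects:
  - kind: ServiceAccount
    name: locust-locust-k8s-operator
    namespace: locust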

Additional context:
We plan on bundling our locust tests with our application charts, so they will be deployed into the same namespace as the application - not the operator.

AbdelrhmanHamouda self-assigned this Aug 28, 2024
@AbdelrhmanHamouda
Owner

Hello @M4C4R,
Thank you for reporting this issue. There are a few things to unpack here:

  • The operator "attempting" to deploy the resources in the same namespace the CR was deployed in is actually a feature, not a bug. There are several reasons for this, but one of them is to make writing a test easier, since the user won't need to worry about k8s namespace networking when trying to figure out which API to call.
  • If the desired effect is to have the resources deployed in the same namespace as the operator, this can simply be done by deploying the CR itself to that namespace. I understand that if you are bundling everything in a Helm chart, this won't be easy to do, which takes me to my next point.
  • Originally I was avoiding a cluster role binding for security reasons (a new project with little history trying to deploy a cluster role). However, as time has passed and the project now has more history and backing, I'm comfortable adding a Helm-controlled feature where the user or org gets to enable/disable that cluster role (a sketch of what such a toggle could look like follows below).

Does that address all your points?
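
To illustrate that last point, such a toggle could be exposed in the chart's values.yaml along these lines. The keys below are purely hypothetical and do not reflect the chart's actual schema; they only show the shape of such a switch:

# Hypothetical values.yaml excerpt; key names are invented for illustration
# and are not the locust-k8s-operator chart's real schema.
k8s:
  clusterRole:
    enabled: true   # true: render a ClusterRole/ClusterRoleBinding
                    # false: keep a namespace-scoped Role/RoleBinding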

@M4C4R
Author

M4C4R commented Aug 28, 2024

Hi

> The operator "attempting" to deploy the resources in the same namespace the CR was deployed in is actually a feature, not a bug. There are several reasons for this, but one of them is to make writing a test easier, since the user won't need to worry about k8s namespace networking when trying to figure out which API to call.

Understood, but since it lacks cluster-wide access by default, it may as well only watch its own namespace. Unless, of course, it can be configured to deploy the resources into its own namespace, where it does have access (I still think this is a nice option, as we then don't scatter Locust pods across namespaces).

Adding a cluster role sounds like an easy win and will unblock our use case, but I do hope you consider adding the ability to control where the Locust resources are deployed.

Thanks @AbdelrhmanHamouda

@AbdelrhmanHamouda
Owner

@M4C4R, I'm going to push a change for this within this week or the next.

AbdelrhmanHamouda added the Enhancement (New feature or request) label Sep 10, 2024
AbdelrhmanHamouda linked a pull request (#218) Sep 16, 2024 that will close this issue
@AbdelrhmanHamouda
Owner

This is now in 0.10.0.
