
feat: whitelist namespaces #70

Closed

ivanmartos opened this issue Jul 16, 2024 · 7 comments
Labels
enhancement New feature or request

Comments


ivanmartos commented Jul 16, 2024

Issue

This is a feature request.
Currently the solution requires cluster-wide access, which can be problematic in enterprise organizations. The motivation is to get rid of the cluster-wide access requirement and instead provide a list of namespaces that should be monitored.

Problem to solve

Add the possibility to reduce the required permissions from cluster scope to namespace scope.

Further details

Proposal

Currently there is a --namespace option. This could be replaced by a list of namespaces that should be tracked. If this option is not specified, the current solution would still work (cluster-wide).
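
For illustration only, a minimal sketch of how the flag could accept a comma-separated list; the parser wiring and value handling below are assumptions, not the project's actual cmd.py code:

```python
import argparse

# Hypothetical sketch: let --namespace take a comma-separated list;
# an empty value keeps today's cluster-wide behavior.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--namespace",
    default="",
    help="Comma-separated list of namespaces to restrict the downscaler to",
)
args = parser.parse_args(["--namespace", "team-a,team-b"])

namespaces = frozenset(ns.strip() for ns in args.namespace.split(",") if ns.strip())
# namespaces == frozenset({"team-a", "team-b"}); an empty set means cluster-wide
```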

Who can address the issue

Other links/references

@ivanmartos changed the title from "Feature - allow list of namespaces" to "Feature - whitelist namespaces" Jul 16, 2024
@ivanmartos changed the title from "Feature - whitelist namespaces" to "feat: whitelist namespaces" Jul 16, 2024
@Fovty Fovty added the enhancement New feature or request label Jul 16, 2024
samuel-esp (Collaborator) commented Jul 16, 2024

@ivanmartos thinking out loud, in order to monitor a subset of namespaces, the refactoring could look something like this:

Case 1: Infrastructural KubeDownscaler (AS-IS)

  • Deployed in the same way it is deployed right now, with cluster-wide RBAC
  • The user will still be able to choose which namespaces the tool should monitor

Case 2: Constrained KubeDownscaler

  • Deployed with the same ClusterRole, but with many RoleBindings
  • The user will only be able to monitor the namespaces allowed by the RoleBindings

For the code:

  1. We should refactor the --namespace parameter to accept a list of namespaces (cmd.py and scaler.py)
  2. Create a boolean variable in the Helm chart to be able to handle the new use case (constrained-downscaler: true || false)
  3. Create the list of namespaces to monitor inside the Helm chart (constrained-namespaces: [])
  4. If constrained-downscaler is true, the Helm chart will create a RoleBinding for each namespace inside the constrained-namespaces list (see the sketch below)

This will require a bit of error handling inside the business logic, but it should be doable.
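
For step 4, a rough sketch of what the Helm template could look like; the value names (constrainedDownscaler, constrainedNamespaces) and resource names are hypothetical placeholders, not the chart's actual values:

```yaml
# templates/rolebinding.yaml -- illustrative sketch only. Binding a
# ClusterRole through namespaced RoleBindings grants its rules only
# inside those namespaces, which is the "Case 2" idea above.
{{- if .Values.constrainedDownscaler }}
{{- range .Values.constrainedNamespaces }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: py-kube-downscaler
  namespace: {{ . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: py-kube-downscaler
subjects:
  - kind: ServiceAccount
    name: py-kube-downscaler
    namespace: {{ $.Release.Namespace }}
{{- end }}
{{- end }}
```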

samuel-esp (Collaborator) commented Jul 16, 2024

It could be even easier using the already present --exclude-namespaces with a regex in reverse logic. In this case, the steps to implement would only be these:

  1. Create a boolean variable in the Helm chart to be able to handle the new use case (constrained-downscaler: true || false)
  2. Create the list of namespaces to monitor inside the Helm chart (constrained-namespaces: [])
  3. If constrained-downscaler is true, the Helm chart will create a RoleBinding for each namespace inside the constrained-namespaces list
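
As an illustration of the reverse logic, the exclusion regex could be a negative lookahead built from an allow-list; this is a sketch under assumed names, not a pattern from the project:

```python
import re

# Hypothetical allow-list turned into an exclusion pattern suitable for
# --exclude-namespaces: it matches (i.e. excludes) everything NOT listed.
allowed = ["team-a", "team-b"]
pattern = re.compile(rf"^(?!(?:{'|'.join(map(re.escape, allowed))})$).+$")

assert pattern.match("kube-system")   # excluded: not in the allow-list
assert pattern.match("default")       # excluded
assert not pattern.match("team-a")    # kept: allow-listed, so not excluded
```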

ivanmartos (Author) commented

Hey @samuel-esp
Thanks for the comments and inputs. Thinking out loud here:
I guess the constrained-downscaler variable is not needed on the Helm chart, because if a list of `--namespaces` is specified, you already know that py-kube-downscaler should work in _constrained_ mode. The same in reverse: if the `--namespaces` variable is not present, then the cluster-wide (default) mode should apply.
However, having a boolean var `constrained-downscaler: true || false` would be beneficial at runtime, because it would make logic choices much easier to understand and the code easier to read 👍

Also, I am not sure whether replicating the logic of --exclude-namespaces would work for this issue, because without cluster-wide access py-kube-downscaler will fail on the first invocation of https://github.com/caas-team/py-kube-downscaler/blob/main/kube_downscaler/scaler.py#L125
In that invocation the namespace argument is not supplied, so namespaces are listed with an argument of pykube.all; that requires cluster-wide access, and the call will fail as unauthorized, so you won't even get to filter out namespaces by the regex that could be added via a new --included-namespaces attribute. Nevertheless, I think filtering namespaces by regex in a whitelisting manner (as opposed to the blacklisting manner of --exclude-namespaces) would still be a good feature to add; however, it would not solve the problem of this issue: not having cluster-wide access :)
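
A minimal sketch of the failure mode described above, assuming a namespace-scoped ServiceAccount token; the pykube calls are illustrative, not the project's actual code:

```python
import pykube

api = pykube.HTTPClient(pykube.KubeConfig.from_env())

# Roughly what the linked scaler.py line does: list all namespaces.
# This needs cluster-scoped "list" on namespaces, so under purely
# namespace-scoped RBAC it fails as unauthorized (403 Forbidden).
namespaces = list(pykube.Namespace.objects(api))

# A namespace-scoped query, by contrast, only needs a Role/RoleBinding
# in that namespace and works without any cluster-wide permission.
deployments = list(pykube.Deployment.objects(api, namespace="team-a"))
```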

samuel-esp (Collaborator) commented Jul 17, 2024

Hi @ivanmartos,

I studied the situation and the code a bit more in depth, and I think it is possible to restrict access to specific namespaces. I will share my findings below.

First of all:

--namespace: Restrict the downscaler to work only in a single namespace (default: all namespaces). This is mainly useful for deployment scenarios where the deployer of py-kube-downscaler only has access to a given namespace (instead of cluster access). If used simultaneously with --exclude-namespaces, none is applied.

--exclude-namespaces: Exclude namespaces from downscaling (list of regex patterns, default: kube-system), can also be configured via environment variable EXCLUDE_NAMESPACES. If used simultaneously with --namespace, none is applied.

Unfortunately these statements in the documentation are not true (probably the code was refactored in the past but the documentation was not). Basically, you can use these arguments at the same time; they will only conflict if the --namespace name is also included in the --exclude-namespaces scope.

In order to implement the new feature, the following is needed:

  • constrained-downscaler is needed inside the chart because, if it is true, the chart should create Roles and RoleBindings instead of a single ClusterRole and ClusterRoleBinding.
  • The "Downscale Jobs" feature needs to be refactored, because right now cluster-wide access is needed to deploy CRDs.
  • --constrained-downscaler will be needed inside the code as a runtime argument as well, because we need to separate the API calls to the API server. Right now the API calls assume you have cluster-wide access (think of something like kubectl get resource -A or kubectl get ns), so we should implement parallel API calls that target only the resources inside the targeted namespaces (see the sketch below).

I'll try to see if I can design the update.
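
As a sketch of the last point, the single cluster-wide call could be split into one call per targeted namespace when the constrained mode is active; the function and parameter names here are assumptions, not the final implementation:

```python
import pykube

def list_deployments(api, namespaces):
    """Illustrative only: per-namespace queries in constrained mode,
    a single cluster-wide query (pykube.all) otherwise."""
    if namespaces:
        resources = []
        for ns in namespaces:
            # One namespaced call per target; each only requires a
            # Role/RoleBinding in that namespace.
            resources.extend(pykube.Deployment.objects(api, namespace=ns))
        return resources
    # Unconstrained mode: cluster-wide call, as the code does today.
    return list(pykube.Deployment.objects(api, namespace=pykube.all))
```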

samuel-esp (Collaborator) commented

Pull request #71 aims to help the introduction of this feature.

samuel-esp (Collaborator) commented

With #73, the feature should be ready!

samuel-esp (Collaborator) commented

The pull request was merged; the issue can be closed.

@Fovty Fovty closed this as completed Jul 22, 2024