
Added ScaledObject Support For Downscaler/Downtime-Replicas Annotation #100

Merged
9 commits merged into caas-team:main on Oct 16, 2024

Conversation

@samuel-esp (Collaborator) commented Oct 12, 2024

Motivation

Currently, ScaledObject does not support the "downscaler/downtime-replicas" annotation. See issue #96

Changes

  • Refactored the Keda ScaledObject resource class to support the "downscaler/downtime-replicas" annotation
  • Refactored scaler.py to support the "downscaler/downtime-replicas" annotation for Keda
  • Made the Keda ScaledObject handling clearer
  • Added and refactored Keda unit tests
  • Refactored the docs
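
For illustration, here is a minimal Python sketch of the behaviour these changes aim for; it is not the PR's actual code, and it assumes the downscaler pauses a ScaledObject through KEDA's autoscaling.keda.sh/paused-replicas annotation: with the new annotation, the pause value comes from "downscaler/downtime-replicas" instead of defaulting to zero.

```python
# Minimal sketch (assumption: not the PR's actual implementation) of the idea:
# when a ScaledObject carries downscaler/downtime-replicas, downscaling pauses
# it at that replica count via KEDA's paused-replicas annotation instead of
# pausing it at zero.

DOWNTIME_REPLICAS_ANNOTATION = "downscaler/downtime-replicas"
KEDA_PAUSED_REPLICAS_ANNOTATION = "autoscaling.keda.sh/paused-replicas"


def annotations_for_downscale(annotations: dict) -> dict:
    """Return the annotations to apply to a ScaledObject when scaling it down."""
    downtime_replicas = annotations.get(DOWNTIME_REPLICAS_ANNOTATION, "0")
    return {**annotations, KEDA_PAUSED_REPLICAS_ANNOTATION: downtime_replicas}


if __name__ == "__main__":
    print(annotations_for_downscale({"downscaler/downtime-replicas": "1"}))
    # {'downscaler/downtime-replicas': '1', 'autoscaling.keda.sh/paused-replicas': '1'}
```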

Tests done

  • Unit Tests

TODO

  • I've assigned myself to this PR

@samuel-esp changed the title from "feat: added scaledobject support for downscaler/downtime-replicas" to "Added ScaledObject Support For Downscaler/Downtime-Replicas" on Oct 12, 2024
@samuel-esp self-assigned this on Oct 12, 2024
@samuel-esp added the documentation, enhancement, and feature labels on Oct 12, 2024
@samuel-esp changed the title from "Added ScaledObject Support For Downscaler/Downtime-Replicas" to "Added ScaledObject Support For Downscaler/Downtime-Replicas Annotation" on Oct 12, 2024
@samuel-esp marked this pull request as ready for review on October 13, 2024 at 19:45
@jonathan-mayer (Member) left a comment

Other than these two comments, and finding it a bit weird to need to specify the downtime replicas on the ScaledObject's Deployments as well, this looks good to me.

Review threads: kube_downscaler/resources/keda.py (resolved), kube_downscaler/scaler.py (resolved, outdated)
@jonathan-mayer linked an issue on Oct 13, 2024 that may be closed by this pull request
Co-authored-by: Jonathan Mayer <[email protected]>
@samuel-esp (Collaborator, Author) commented Oct 14, 2024

I see this concern and I strongly agree with you on a theoretical level: a ScaledObject is basically a custom HPA and doesn't have a concept of "replicas". However, since the ScaledObject controls the Deployment/StatefulSet through the pause annotation, it will invalidate "downscaler/downtime-replicas" if that is set at the Deployment/StatefulSet level. It must be clear to the user, as stated in the documentation, that the "downscaler/downtime-replicas" value should be the same on both objects, otherwise it won't make sense.
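
To illustrate the point, here is a hypothetical helper (not part of this PR): because the ScaledObject's pause annotation is what the workload ends up following during downtime, a mismatching value on the Deployment/StatefulSet is effectively ignored.

```python
# Hypothetical helper (not part of this PR) illustrating why the docs ask for
# the same downscaler/downtime-replicas value on the ScaledObject and on the
# workload it targets: the ScaledObject drives the workload, so a mismatching
# workload-level value is effectively ignored.

ANNOTATION = "downscaler/downtime-replicas"


def effective_downtime_replicas(workload_annotations: dict,
                                scaled_object_annotations: dict) -> str | None:
    workload_value = workload_annotations.get(ANNOTATION)
    so_value = scaled_object_annotations.get(ANNOTATION)
    if workload_value is not None and workload_value != so_value:
        print(f"warning: workload requests {workload_value} downtime replicas, "
              f"but its ScaledObject pauses it at {so_value}")
    # The ScaledObject controls the workload, so its value is the one applied.
    return so_value


if __name__ == "__main__":
    effective_downtime_replicas({ANNOTATION: "2"}, {ANNOTATION: "1"})
```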

@jonathan-mayer (Member)

I am not too familiar with ScaledObjects, but I think it would make more sense if we "excluded" the workloads that are being managed by the ScaledObject and let the ScaledObject scale them according to the pause annotation we set on it when scaling down.

@samuel-esp (Collaborator, Author)

That's a good approach as well; I included this piece of advice in the usage notes of the docs, in the ScaledObject section.

@jonathan-mayer (Member) left a comment

If possible, we should consider automatically excluding the workloads managed by the ScaledObject.

Review thread: kube_downscaler/scaler.py (resolved, outdated)
@samuel-esp (Collaborator, Author)

I'll try to address this in one of the next releases.
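
For reference, a rough sketch of that follow-up idea (not implemented in this PR): a workload could be treated as excluded when some ScaledObject's scaleTargetRef points at it. The dictionary shape below is a simplified stand-in for the objects returned by the Kubernetes API.

```python
# Sketch of the proposed follow-up (not implemented in this PR): automatically
# exclude workloads referenced by a ScaledObject's scaleTargetRef and let the
# ScaledObject's pause annotation drive their scaling during downtime.

def is_managed_by_scaled_object(workload_name: str, workload_kind: str,
                                scaled_objects: list[dict]) -> bool:
    for so in scaled_objects:
        target = so.get("spec", {}).get("scaleTargetRef", {})
        # KEDA defaults the target kind to Deployment when it is omitted.
        kind = target.get("kind", "Deployment")
        if target.get("name") == workload_name and kind == workload_kind:
            return True
    return False


if __name__ == "__main__":
    sos = [{"spec": {"scaleTargetRef": {"name": "my-app"}}}]
    print(is_managed_by_scaled_object("my-app", "Deployment", sos))  # True
    print(is_managed_by_scaled_object("other-app", "Deployment", sos))  # False
```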

@samuel-esp merged commit 7c52986 into caas-team:main on Oct 16, 2024
1 check passed
Labels: documentation, enhancement, feature
Linked issue that may be closed by merging this pull request: Downscaling with Keda ScaledObjects not working
2 participants