ScaledObject downscales deployment to 0 replicas outside the timeframe specified in the cron trigger, even when the cpu trigger should keep it running #6057
Comments
This is similar to, or a duplicate of, #5620, which was closed due to inactivity.
This issue is very difficult to solve with the current implementation of the CPU scaler. The biggest problem is that resource-metrics based scalers (like CPU and memory) do not have …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
not stale
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
not stale
Report
At XX:33 the cron trigger starts the deployment; while CPU is being used it scales up, but at XX:55, even if more than 1 replica is running, everything is scaled down to 0 replicas.
The ScaledObject:
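The manifest itself is not included in the report; a minimal sketch of a ScaledObject combining the cron and cpu triggers described above might look like the following (the deployment name, timezone, target utilization, and replica counts are placeholders, not values from the report; only the :33/:55 schedule comes from the description):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject          # placeholder name
spec:
  scaleTargetRef:
    name: example-deployment          # placeholder deployment name
  minReplicaCount: 0                  # allows scale to zero outside the cron window
  maxReplicaCount: 10
  triggers:
    # cron trigger: keeps at least 1 replica between minute 33 and minute 55
    # of every hour, matching the XX:33 / XX:55 times mentioned above
    - type: cron
      metadata:
        timezone: Etc/UTC
        start: 33 * * * *
        end: 55 * * * *
        desiredReplicas: "1"
    # cpu trigger: scales on average CPU utilization of the pods
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"                   # placeholder target utilization (%)
```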
Expected Behavior
After reading this: https://keda.sh/docs/2.13/reference/faq/#using-multiple-triggers-for-the-same-scale-target
It seems like the deployment shouldn't scale down as long as the CPU trigger is keeping it up.
It should only downscale to 0 when CPU usage is low and we are not inside the timeframe defined for the cron trigger.
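For reference, the FAQ linked above describes that with multiple triggers KEDA registers one metric per trigger on a single HPA, and the HPA acts on whichever metric yields the largest desired replica count. A hedged sketch of that calculation, using the standard HPA per-metric formula:

$$\text{desiredReplicas} = \max_i \left\lceil \text{currentReplicas} \cdot \frac{\text{currentMetricValue}_i}{\text{targetMetricValue}_i} \right\rceil$$

With illustrative numbers (not taken from the report): if the cron trigger alone would yield 1 replica while the CPU metric alone would yield 4, the expectation from the FAQ is max(1, 4) = 4 replicas, not 0.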
Actual Behavior
After the timeframe of the cron trigger is over, the deployment instantly scales down to 0.
Steps to Reproduce the Problem
Logs from KEDA operator
No response
KEDA Version
2.13.1
Kubernetes Version
1.29
Platform
Amazon Web Services
Scaler Details
cpu, cron
Anything else?
No response