autoscaling.keda.sh/paused-replicas: automatic scaling was not restored after deleting the annotation in v2.8.2 #6062
Comments
KEDA v2.8 is no longer supported; can you please try with KEDA 2.15?
Unfortunately, our internal cluster runs Kubernetes v1.21.1. Following KEDA's official compatibility recommendation (https://keda.sh/docs/2.15/operate/cluster/#kubernetes-compatibility), we use KEDA v2.8.2. I also tested KEDA 2.15: it panics because Kubernetes 1.21.1 has no autoscaling/v2 HPA API, so it is effectively incompatible with 1.21.1.
@tomkerkhove According to https://keda.sh/docs/2.8/concepts/scaling-deployments/#pause-autoscaling, isn't paused-replicas officially supported in v2.8.2? So is this a bug?
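For context, the pause feature in those docs is driven by a single annotation on the ScaledObject. A minimal sketch, assuming a ScaledObject named my-scaledobject (the name is illustrative):

# Pause autoscaling and pin the workload at 11 replicas
kubectl annotate scaledobject my-scaledobject autoscaling.keda.sh/paused-replicas="11"

Removing the annotation is expected to return control of the replica count to the HPA.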
KEDA 2.8.2 is from January 2023; I doubt that we will issue a patch for it: https://github.com/kedacore/keda/releases/tag/v2.8.2
@tomkerkhove So in the 2.8.2 version I am currently testing, adding paused-replicas behaves as expected, but deleting paused-replicas does not. Is this really a bug?
@tomkerkhove The reason I am trying to determine whether this is a bug is to clarify whether the problem is caused by my incorrect usage or by a defect in 2.8.2, since the official documentation clearly presents this feature to users.
I remember some issues related to the pause feature in the beginning, due to errors in (internal) selectors. To test whether this is your case, restart the KEDA operator pod; if that fixes the HPA, you are affected by those problems.
@JorTurFer Restart the keda-metrics-apiserver or the keda-operator pod?
The keda-operator pod.
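A minimal sketch of that restart, assuming KEDA was installed into the default keda namespace by the release YAML:

# Recreate the keda-operator pod; the Deployment controller starts a fresh one
kubectl rollout restart deployment/keda-operator -n keda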
After deleting autoscaling.keda.sh/paused-replicas, deleting the keda-operator pod so it restarts does make the HPA resume automatic scaling. But this shouldn't be a routine procedure; is there any other solution? I also saw the design of autoscaling.keda.sh/paused; could it solve my problem?
This bug is already solved in the latest versions. I fully understand that you are tied to v2.8 because you're using Kubernetes 1.21, but we won't cut an extra release of v2.8 to backport the fix, as that version is 1.5 years old and Kubernetes 1.21 is out of support too. The only solution to this bug is to use an updated version of KEDA.
Do you mean modifying the current number of pods manually? Yeah, it probably can. I use the pause feature when I want to disable or modify the number of pods of a workload during manual operations.
@tomkerkhove
In your case, you have to use the annotation. Manual changes to the replica count made directly on the workload will always be overridden. If you want to change the number of replicas, you have to update the annotation with the required value.
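A sketch of both operations against the annotation, again assuming a ScaledObject named my-scaledobject (illustrative):

# Change the pinned replica count by overwriting the annotation value
kubectl annotate scaledobject my-scaledobject autoscaling.keda.sh/paused-replicas="5" --overwrite

# Remove the annotation entirely to resume autoscaling (note the trailing dash)
kubectl annotate scaledobject my-scaledobject autoscaling.keda.sh/paused-replicas-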
This issue was moved to a discussion. You can continue the conversation there.
Report
autoscaling.keda.sh/paused-replicas: automatic scaling was not restored after deleting the annotation in v2.8.2
Expected Behavior
After deleting the autoscaling.keda.sh/paused-replicas annotation, automatic scaling is restored in v2.8.2.
Actual Behavior
After deleting the autoscaling.keda.sh/paused-replicas annotation, automatic scaling is not restored in v2.8.2.
Steps to Reproduce the Problem
1. kubectl apply --server-side -f https://github.com/kedacore/keda/releases/download/v2.8.2/keda-2.8.2.yaml
2. Add the annotation autoscaling.keda.sh/paused-replicas: '11' to the ScaledObject.
3. Delete the annotation autoscaling.keda.sh/paused-replicas: '11'.
4. The HPA's minReplicas and maxReplicas do not change back (see the verification sketch below).
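One way to verify step 4, assuming the ScaledObject is named my-scaledobject (illustrative) and that KEDA names the generated HPA keda-hpa-<scaledobject-name>:

# Print the HPA's current min/max replica bounds
kubectl get hpa keda-hpa-my-scaledobject -o jsonpath='{.spec.minReplicas} {.spec.maxReplicas}'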
Logs from KEDA operator
KEDA Version
< 2.11.0
Kubernetes Version
Other (v1.21.1, per the discussion above)
Platform
None
Scaler Details
cron
Anything else?
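Since the report names the cron scaler, a minimal ScaledObject using it might look like the sketch below; every name, the schedule, and the replica counts are illustrative, not taken from the report:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
spec:
  scaleTargetRef:
    name: my-deployment         # target Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cron
      metadata:
        timezone: Asia/Shanghai # IANA timezone for the schedule
        start: 0 8 * * *        # scale up at 08:00
        end: 0 20 * * *         # scale back down at 20:00
        desiredReplicas: "5"    # replicas held between start and end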