Originally posted by rishabhgargg September 12, 2024
I am using KEDA 2.15.1 on EKS (v1.24.17).
I installed KEDA from the Helm chart (kedacore-charts, version 2.15.1).
I already had the Prometheus stack installed, so I deleted `apiserver.yaml` from the chart's `templates/metrics-server` directory. (Please let me know if there is a better approach than this to make the two of them work together in the same Kubernetes cluster.)
I also disabled certificate generation by setting certificates.autoGenerated: false in values.yaml.
Now, the services are up and running perfectly fine without any errors (shown below).
I am deploying a ScaledObject that uses the Prometheus scaler to scale my StatefulSet. The YAML for that is:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: set1
  namespace: set1
  labels:
    deploymentName: set1-cloud
  annotations:
    scaledobject.keda.sh/transfer-hpa-ownership: "true" # Transfer HPA ownership to KEDA
spec:
  maxReplicaCount: 64
  pollingInterval: 600
  advanced:
    horizontalPodAutoscalerConfig:
      name: set1 # Name of the HPA resource
  scaleTargetRef:
    kind: StatefulSet
    name: set1-cloud
  triggers:
    - type: prometheus # Use Prometheus trigger
      metadata:
        serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090 # URL for Prometheus service in cluster
        metricName: pod_replicas # Use the pod_replicas metric
        threshold: "1" # Set threshold to 1 (we will rely on the query to dictate the scale)
        query: pod_replicas # Directly query the pod_replicas metric value
```
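For reference, a possible variant of the trigger block (a sketch only, assuming KEDA v2.7+ where triggers accept an explicit `metricType`) pins the metric to `Value` semantics, so the HPA compares the raw query result against the threshold instead of dividing it across the current pods:

```yaml
triggers:
  - type: prometheus
    metricType: Value # compare the raw query result to the threshold (assumes KEDA >= 2.7)
    metadata:
      serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090
      threshold: "1"
      query: pod_replicas
```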
Here, I was trying to achieve the following: whatever value the metric returns, that many pods should be running for that StatefulSet.
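With the default external-metric semantics (target average value, as the HPA output below also shows) and a threshold of 1, the replica math works out to roughly the following (a minimal sketch of the HPA formula for external metrics, ignoring tolerance and stabilization windows; the function name is illustrative):

```python
import math

def desired_replicas(metric_value: float, threshold: float,
                     max_replicas: int, min_replicas: int = 1) -> int:
    """Simplified HPA math for an external metric with AverageValue semantics:
    desired = ceil(metricValue / targetAverageValue), clamped to [min, max]."""
    desired = math.ceil(metric_value / threshold)
    return max(min_replicas, min(max_replicas, desired))

# If pod_replicas returns 16 and the threshold is 1, 16 replicas are desired:
print(desired_replicas(16, 1.0, 64))  # -> 16
```

So with `threshold: "1"` the replica count tracks the query result one-for-one, as long as the metric can actually be fetched.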
But after deploying this, `kubectl describe hpa` on the HPA that was created gives this:
```
PS C:\Users\rishabh.garg> kubectl describe hpa set1 -n set1
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name:               set1
Namespace:          set1
Labels:             app.kubernetes.io/managed-by=keda-operator
                    app.kubernetes.io/name=set1
                    app.kubernetes.io/part-of=set1
                    app.kubernetes.io/version=2.15.1
                    deploymentName=set1-cloud
                    scaledobject.keda.sh/name=set1
Annotations:        <none>
CreationTimestamp:  Wed, 11 Sep 2024 22:27:35 +0530
Reference:          StatefulSet/set1-cloud
Metrics:            ( current / target )
  "s0-prometheus" (target average value):  34m / 1
Min replicas:       1
Max replicas:       64
StatefulSet pods:   16 current / 16 desired
Conditions:
  Type            Status  Reason                   Message
  ----            ------  ------                   -------
  AbleToScale     True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetExternalMetric  the HPA was unable to compute the replica count: unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
  ScalingLimited  True    TooFewReplicas           the desired replica count is less than the minimum replica count
Events:
  Type     Reason                        Age                    From                       Message
  ----     ------                        ----                   ----                       -------
  Warning  FailedComputeMetricsReplicas  42m (x12 over 44m)     horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get s0-prometheus external metric: unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
  Warning  FailedGetExternalMetric       4m35s (x161 over 44m)  horizontal-pod-autoscaler  unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
PS C:\Users\rishabh.garg>
```
I am not able to understand why it is trying to query the `s0-prometheus` metric instead of the `pod_replicas` metric, which actually returns a value (successfully tested in the Prometheus UI).
Right after applying the ScaledObject YAML, when I look at it via `kubectl edit scaledobject`, the status somehow lists `s0-prometheus` under `externalMetricNames`.
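The `s0-prometheus` name appears to come from KEDA's external-metric naming convention rather than from the `metricName` field: each trigger's metric seems to be named `s{triggerIndex}-{scaler type}` (a sketch of the observed convention; the helper name is mine):

```python
def keda_external_metric_name(trigger_index: int, scaler_type: str) -> str:
    # Observed KEDA naming convention: "s" + trigger index + "-" + scaler type,
    # e.g. the first (index 0) prometheus trigger becomes "s0-prometheus".
    return f"s{trigger_index}-{scaler_type}"

print(keda_external_metric_name(0, "prometheus"))  # -> s0-prometheus
```

If that convention holds, the name itself is expected; the real failure is that the external metrics API cannot serve the metric at all.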
Discussed in #6154