Unable to make HPA created by KEDA to work #6155

Closed
rishabhgargg opened this issue Sep 11, 2024 Discussed in #6154 · 1 comment
rishabhgargg commented Sep 11, 2024

Discussed in #6154

Originally posted by rishabhgargg September 12, 2024
I am using KEDA 2.15.1 on EKS (v1.24.17).
I installed KEDA from the Helm chart in kedacore/charts, chart version 2.15.1.

I already had the Prometheus stack installed, so I deleted 'apiserver.yaml' from the chart's templates/metrics-server directory. (Please let me know if there is a better approach than this to make the two work together in the same Kubernetes cluster.)
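
For context, here is a sanity check I intend to run to see which backend is registered for the external metrics API that the HPA queries (a sketch, assuming the standard APIService name used by external metrics adapters):

```shell
# Show which Service backs the external metrics API group.
# KEDA's metrics apiserver normally registers v1beta1.external.metrics.k8s.io,
# and only one adapter per cluster can serve this API group.
kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml
```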

I also disabled certificate generation by setting certificates.autoGenerated: false in values.yaml.
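
That is, the relevant excerpt of my values.yaml:

```yaml
# values.yaml (excerpt): disable the chart's self-generated certificates.
certificates:
  autoGenerated: false
```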

Now, the services are up and running perfectly fine without any errors (shown below).
[screenshot: services running without errors]

I am deploying a ScaledObject that uses the Prometheus scaler to scale my StatefulSet. The YAML for that is:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: set1
  namespace: set1
  labels:
    deploymentName: set1-cloud
  annotations:
    scaledobject.keda.sh/transfer-hpa-ownership: "true"  # Transfer HPA ownership to KEDA
spec:
  maxReplicaCount: 64
  pollingInterval: 600
  advanced:
    horizontalPodAutoscalerConfig:
      name: set1  # Name of the HPA resource
  scaleTargetRef:
    kind: StatefulSet
    name: set1-cloud
  triggers:
    - type: prometheus  # Use Prometheus trigger
      metadata:
        serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090  # URL for Prometheus service in cluster
        metricName: pod_replicas  # Use the pod_replicas metric
        threshold: "1"  # Set threshold to 1 (we will rely on the query to dictate the scale)
        query: pod_replicas  # Directly query the pod_replicas metric value
```

Here, I am trying to achieve the following: whatever value the metric returns, that many pods should be running for that StatefulSet.
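
My reasoning, based on how the HPA handles AverageValue targets as I understand it: desiredReplicas = ceil(metricValue / threshold), so with threshold "1" a query result of 16 should hold the StatefulSet at 16 replicas, and a result of 32 should scale it to 32.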

But after deploying this, kubectl describe hpa on the HPA that was created gives this:

```
PS C:\Users\rishabh.garg> kubectl describe hpa set1 -n set1
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name:                                      set1
Namespace:                                 set1
Labels:                                    app.kubernetes.io/managed-by=keda-operator
                                           app.kubernetes.io/name=set1
                                           app.kubernetes.io/part-of=set1
                                           app.kubernetes.io/version=2.15.1
                                           deploymentName=set1-cloud
                                           scaledobject.keda.sh/name=set1
Annotations:                               <none>
CreationTimestamp:                         Wed, 11 Sep 2024 22:27:35 +0530
Reference:                                 StatefulSet/set1-cloud
Metrics:                                   ( current / target )
  "s0-prometheus" (target average value):  34m / 1
Min replicas:                              1
Max replicas:                              64
StatefulSet pods:                          16 current / 16 desired
Conditions:
  Type            Status  Reason                   Message
  ----            ------  ------                   -------
  AbleToScale     True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetExternalMetric  the HPA was unable to compute the replica count: unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
  ScalingLimited  True    TooFewReplicas           the desired replica count is less than the minimum replica count
Events:
  Type     Reason                        Age                    From                       Message
  ----     ------                        ----                   ----                       -------
  Warning  FailedComputeMetricsReplicas  42m (x12 over 44m)     horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get s0-prometheus external metric: unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
  Warning  FailedGetExternalMetric       4m35s (x161 over 44m)  horizontal-pod-autoscaler  unable to get external metric set1/s0-prometheus/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: set1,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the metric s0-prometheus for
PS C:\Users\rishabh.garg>
```

I am not able to understand why it is trying to query the s0-prometheus metric instead of the pod_replicas metric, which actually returns a value (successfully tested in the Prometheus UI).
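
If I read the KEDA docs correctly, the external metric name is generated as s<trigger-index>-prometheus regardless of the metricName hint, so perhaps the name itself is expected and the failure is in serving it. To reproduce the exact lookup the HPA performs, I plan to query the external metrics API directly (a sketch; the metric name and label selector are taken from the HPA output above):

```shell
# Ask the external metrics API for the same metric the HPA requests.
# %2F and %3D are the URL-encoded '/' and '=' in the label selector
# scaledobject.keda.sh/name=set1.
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/set1/s0-prometheus?labelSelector=scaledobject.keda.sh%2Fname%3Dset1"
```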

Right after applying the ScaledObject YAML, kubectl edit scaledobject shows the following, where externalMetricNames somehow lists s0-prometheus:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"keda.sh/v1alpha1","kind":"ScaledObject","metadata":{"annotations":{"scaledobject.keda.sh/transfer-hpa-ownership":"true"},"labels":{"deploymentName":"set1-cloud"},"name":"set1","namespace":"set1"},"spec":{"advanced":{"horizontalPodAutoscalerConfig":{"name":"set1"}},"maxReplicaCount":64,"pollingInterval":600,"scaleTargetRef":{"kind":"StatefulSet","name":"set1-cloud"},"triggers":[{"metadata":{"metricName":"pod_replicas","query":"pod_replicas","serverAddress":"http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090","threshold":"1"},"type":"prometheus"}]}}
    scaledobject.keda.sh/transfer-hpa-ownership: 'true'
  creationTimestamp: '2024-09-11T20:51:33Z'
  finalizers:
    - finalizer.keda.sh
  generation: 2
  labels:
    deploymentName: set1-cloud
    scaledobject.keda.sh/name: set1
  managedFields:
    - apiVersion: keda.sh/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"finalizer.keda.sh": {}
          f:labels:
            f:scaledobject.keda.sh/name: {}
        f:spec:
          f:advanced:
            f:scalingModifiers: {}
      manager: keda
      operation: Update
      time: '2024-09-11T20:51:33Z'
    - apiVersion: keda.sh/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
          f:externalMetricNames: {}
          f:hpaName: {}
          f:lastActiveTime: {}
          f:originalReplicaCount: {}
          f:scaleTargetGVKR:
            .: {}
            f:group: {}
            f:kind: {}
            f:resource: {}
            f:version: {}
          f:scaleTargetKind: {}
      manager: keda
      operation: Update
      subresource: status
      time: '2024-09-11T20:51:33Z'
    - apiVersion: keda.sh/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:scaledobject.keda.sh/transfer-hpa-ownership: {}
          f:labels:
            .: {}
            f:deploymentName: {}
        f:spec:
          .: {}
          f:advanced:
            .: {}
            f:horizontalPodAutoscalerConfig:
              .: {}
              f:name: {}
          f:maxReplicaCount: {}
          f:pollingInterval: {}
          f:scaleTargetRef:
            .: {}
            f:kind: {}
            f:name: {}
          f:triggers: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2024-09-11T20:51:33Z'
  name: set1
  namespace: set1
  resourceVersion: '410711161'
  uid: 4ddb0c99-666d-4d58-80a5-fc3018c82554
  selfLink: /apis/keda.sh/v1alpha1/namespaces/set1/scaledobjects/set1
status:
  conditions:
    - message: ScaledObject is defined correctly and is ready for scaling
      reason: ScaledObjectReady
      status: 'True'
      type: Ready
    - status: Unknown
      type: Active
    - status: Unknown
      type: Fallback
    - status: Unknown
      type: Paused
  externalMetricNames:
    - s0-prometheus
  hpaName: set1
  lastActiveTime: '2024-09-11T20:51:33Z'
  originalReplicaCount: 16
  scaleTargetGVKR:
    group: apps
    kind: StatefulSet
    resource: statefulsets
    version: v1
  scaleTargetKind: apps/v1.StatefulSet
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      name: set1
    scalingModifiers: {}
  maxReplicaCount: 64
  pollingInterval: 600
  scaleTargetRef:
    kind: StatefulSet
    name: set1-cloud
  triggers:
    - metadata:
        metricName: pod_replicas
        query: pod_replicas
        serverAddress: >-
          http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090
        threshold: '1'
      type: prometheus
```
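
In case it helps, the next thing I plan to check is the KEDA logs (a sketch, assuming the chart's default keda namespace and deployment names, and that the metrics-server deployment is still present despite my template change):

```shell
# Operator logs: ScaledObject reconciliation and scaler errors.
kubectl logs -n keda deployment/keda-operator
# Metrics adapter logs: the requests coming from the HPA controller.
kubectl logs -n keda deployment/keda-operator-metrics-apiserver
```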
@zroubalik (Member) commented:

Please don't open an issue and a discussion for the same problem 🙏

github-project-automation bot moved this from To Triage to Ready To Ship in Roadmap - KEDA Core on Sep 18, 2024.