Can you provide Prometheus alert rules? #297
Comments
We don't actively add them, given this is up to the end user; we don't want to enforce things on end users. However, if you have suggestions, we do welcome PRs where we can add them, but commented out.
@tomkerkhove
The overview is available at https://keda.sh/docs/2.7/operate/prometheus/
@tomkerkhove Thanks. Why does KEDA only list one metric here? By the way, I am using the latest version of KEDA (2.7.2). From the screenshot, you can see the metric name is keda_metrics_adapter_scaler_errors_total (in the documentation, the metric name is keda_metrics_adapter_scaler_error_totals). Querying my metric returns results, but querying keda_metrics_adapter_scaler_error_totals returns nothing. Besides this, I also tried to query the three other metrics below in Prometheus and did not get any results.
My KEDA settings for Prometheus:
Is there anything I need to modify? Please help check. Thanks.
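For context, a minimal sketch of the Prometheus-related section of the KEDA Helm chart's values.yaml, assuming a 2.7-era chart; the key names, ports, and defaults are assumptions that may differ per chart version, so treat this as illustrative rather than the exact settings referenced above.

```yaml
# Hedged sketch of the Prometheus settings in the KEDA Helm chart values.yaml.
# Key names and default ports are assumptions based on a 2.7-era chart.
prometheus:
  metricServer:
    enabled: true        # expose /metrics on the metrics adapter (port 9022 by default)
    podMonitor:
      enabled: true      # requires the Prometheus Operator CRDs
  operator:
    enabled: true        # expose /metrics on the operator (port 8080 by default)
    podMonitor:
      enabled: true
```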
@tomkerkhove Hi, do you have any thoughts on my comment above?
This might be a silly question, but does your cluster have any ScaledObject resources? If it does not, that might explain why they are missing. (Sorry for the slow response.)
@tomkerkhove Thanks, but the reply confused me. I do have ScaledObject resources in my environment.
That is odd. Can this be related to kedacore/keda#3554, @JorTurFer?
I don't think so; in that issue the metric is registered with 0 as its value, but it is registered. You are checking the metrics server (not the operator) on port 9022, right?
@JorTurFer @tomkerkhove
Then I ran port-forward to check the metrics on the metrics API server.
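For reference, a hedged sketch of what that check typically looks like; the namespace, deployment name, and port are assumptions based on this thread and a default Helm install, so adjust them to your cluster.

```shell
# Forward the KEDA metrics adapter's Prometheus port (9022 per this thread) to localhost.
# The deployment name (keda-operator-metrics-apiserver) and namespace (keda) are assumptions
# from a default Helm install and may differ in your environment.
kubectl -n keda port-forward deploy/keda-operator-metrics-apiserver 9022:9022 &

# Scrape the endpoint and look for the scaler metrics discussed above.
curl -s http://127.0.0.1:9022/metrics | grep keda_metrics_adapter
```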
hum
@JorTurFer I only have 1 metrics server pod, as you can see below.
Based on your suggestion, I restarted the metrics pod, then checked http://127.0.0.1:9022/metrics and got the same result.
really weird...
@JorTurFer At this moment, we only monitor CPU and memory. I don't have any other triggers for now; we are using the combination of CPU + memory as the trigger in our environment. Here is a question I hope you can answer:
That's why you can't see any other metrics: they haven't been generated yet because the KEDA metrics server hasn't received any queries; all the requests go to the Kubernetes metrics server instead. When you use the CPU/memory scalers, KEDA basically creates a "regular" HPA that targets the "regular" metrics server (that's why the Kubernetes metrics server is required).
KEDA creates the HPA and exposes the metrics (except CPU and memory), and it is the HPA controller that manages the autoscaling, so basically we don't change anything there. Why do you think the CPU usage was low? I mean, do you have all the usage monitored? Another important thing is that the threshold is not a boundary; it's the desired value. The HPA controller will try to stay as close as possible to that value, not scale out/in the moment the value crosses it. (A minimal ScaledObject example is sketched below.)
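To make that concrete, a minimal sketch of a ScaledObject using only the CPU and memory triggers, roughly as described in this thread; the resource names and values are hypothetical, and the exact trigger syntax may vary between KEDA versions.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-scaledobject        # hypothetical name
spec:
  scaleTargetRef:
    name: app                   # hypothetical Deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    # CPU/memory triggers are resolved by the regular Kubernetes metrics server,
    # so the KEDA metrics adapter is never queried and its per-scaler metrics
    # are not generated, as explained above.
    - type: cpu
      metadata:
        type: Utilization       # newer KEDA versions use a trigger-level metricType field instead
        value: "60"             # target average utilization, not an upper bound
    - type: memory
      metadata:
        type: Utilization
        value: "70"
```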
I found there are zero rules in the values.yaml. Can you provide some rules for Prometheus monitoring purposes?
Can you provide more rules? Is this one enough in values.yaml?
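For anyone landing here later, a hedged sketch of the kind of alerting rule that could be added (commented out, as suggested above); the metric name follows this thread, but the expression, labels, and thresholds are illustrative assumptions, not shipped defaults.

```yaml
# Illustrative only: a Prometheus rule group alerting on KEDA scaler errors.
# Thresholds, durations, and labels are assumptions to adapt to your setup.
groups:
  - name: keda
    rules:
      - alert: KedaScalerErrors
        expr: sum(rate(keda_metrics_adapter_scaler_errors_total[5m])) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "KEDA scalers are reporting errors"
          description: "keda_metrics_adapter_scaler_errors_total has been increasing for 10 minutes."
```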