[Question] Scaler Metric Extensibility #194
If I understand correctly, you have hundreds of SQS queues, but a deployment may contain pods that listen to a group of those queues? And what you are hoping for is a way to feed KEDA some custom metric, which could be a Prometheus query or just some API signature, that would be used to create the custom metric adapter and scale the pods? The simplest way that KEDA would "just work" would be if each SQS queue was associated with a single pod / deployment. We also plan to enable Prometheus support to drive more custom scaling (#156), which could also be an option. I'm interested to see what the thoughts are, though, as I wonder if there's some other extensible way we could let you define your specific event metric outside of Prometheus. You could write your own custom metrics adapter on Kubernetes, but ideally even KEDA could make that "easier".
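To make the Prometheus option concrete: once #156 lands, an aggregate "hottest queue" metric could drive a single deployment. This is a minimal sketch only, assuming a Prometheus server at `http://prometheus.monitoring.svc:9090` and a hypothetical `sqs_queue_messages_visible` metric exported by the application; the exact trigger schema may differ from what ships.

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-aggregate-scaledobject   # hypothetical name
spec:
  scaleTargetRef:
    deploymentName: sqs-worker       # hypothetical deployment
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090  # assumed server address
      metricName: sqs_backlog
      # Scale on the hottest queue, whichever one it currently is
      query: max(sqs_queue_messages_visible)
      threshold: "5"
```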
This somewhat feels like what we are changing on Promitor as well - tomkerkhove/promitor#513. We used to have a 1-on-1 mapping from resource to metric (in KEDA's case, to a scaling action), where it would be easier to manage if you could have an n-to-1 mapping instead. In the case of Azure Storage Queue we have the following as of today:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
  namespace: default
  labels:
    deploymentName: azurequeue-function
spec:
  scaleTargetRef:
    deploymentName: azurequeue-function
  triggers:
  - type: azure-queue
    metadata:
      # Required
      queueName: functionsqueue
      # Optional
      connection: STORAGE_CONNECTIONSTRING_ENV_NAME # default AzureWebJobsStorage
      queueLength: "5" # default 5 based on sample
```

What I think @Renader is looking for is something similar to:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
  namespace: default
  labels:
    deploymentName: azurequeue-function
spec:
  scaleTargetRef:
    deploymentName: azurequeue-function
  triggers:
  - type: azure-queue
    metadata:
      # Required
      queueNames:
      - functionsqueue-1
      - functionsqueue-2
      - functionsqueue-3
      - functionsqueue-4
      - functionsqueue-5
      # Optional
      connection: STORAGE_CONNECTIONSTRING_ENV_NAME # default AzureWebJobsStorage
      queueLength: "5" # default 5
```

The setup is the same, but it applies to more queues. We could go even further still. Another approach would be to define a trigger for every queue, but that's a lot of duplication. That said, you could argue that it has to be a custom metric indeed, but that would be less ideal if it's your own custom metric and you have to set up & manage Prometheus for it. That's just my 2 cents on the topic and what I've learned on the Promitor side.
@tomkerkhove yes you are right - a queueNames would fulfill my need. @jeffhollan, the scenario is a little bit different: it is not one deployment per queue, but one deployment for all, which needs to scale for whatever the hottest/longest queue is. This is due to unique resource usage on the nodes; the pods get scheduled with a specific antiAffinity... But yeah, that is my implementation detail that I don't expect KEDA to solve. It would be great, though, if KEDA delivered building blocks with which I could map my scenario. This would probably also serve my need, as I could then implement a custom scaler that interacts directly with SQS or my deployment and does the use-case-specific logic.
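For the custom-scaler route mentioned here, KEDA's external scaler extension point fits this shape: the ScaledObject points at a gRPC service you run yourself, and that service owns the use-case-specific logic (for example, polling all SQS queues and reporting the hottest one). A minimal sketch, assuming a hypothetical in-cluster service `sqs-hot-queue-scaler` that implements KEDA's external scaler contract:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-scaledobject      # hypothetical name
spec:
  scaleTargetRef:
    deploymentName: sqs-worker       # hypothetical deployment
  triggers:
  - type: external
    metadata:
      # gRPC endpoint of the custom scaler (hypothetical service)
      scalerAddress: sqs-hot-queue-scaler.default:8080
      # Remaining key/value pairs are passed through to the custom scaler as-is
      queuePrefix: orders-
      threshold: "5"
```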
This helps. So, breaking this down:
Let me know if that's accurate and we can chat about it in our sync on Thursday.
Yes, exactly; for my use case that would be the best approach. Of course, I don't need KEDA to do all the work for me. The main thing is that I can do it somehow. Thank you very much so far!
Any update on this?
@UNOPARATOR @AmithGanesh is working on the implementation of this proposal. It will allow you to specify multiple triggers in one ScaledObject.
I believe this issue might be resolved with the current v2 release, but it is lacking the relevant documentation.
We most definitely welcome PRs for docs on https://github.com/kedacore/keda-docs, but if you need help, feel free to start a conversation.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
Has this issue been fixed? I'm currently facing the same problem :(
@eze-kiel you can define multiple triggers in one ScaledObject.
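For readers landing here: with multiple triggers in one ScaledObject, the n-queues case from this thread looks roughly like the sketch below. Queue URLs and names are hypothetical; note that KEDA v2 moved the API group to `keda.sh/v1alpha1` and uses `scaleTargetRef.name`, unlike the earlier `keda.k8s.io/v1alpha1` examples above.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: multi-queue-scaledobject     # hypothetical name
spec:
  scaleTargetRef:
    name: sqs-worker                 # hypothetical deployment
  triggers:                          # one trigger per queue; any of them can drive the scale-out
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/orders-1  # hypothetical
      queueLength: "5"
      awsRegion: eu-west-1
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/orders-2  # hypothetical
      queueLength: "5"
      awsRegion: eu-west-1
```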
Yes, you're right; however, in my case we have more than 200 queues for a single deployment.
What kind of scaler are we talking about? If it is RabbitMQ, there's regex support planned for the next release.
It's indeed the RabbitMQ scaler! That's awesome, I'm looking forward to the next release 🎉
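For completeness, a sketch of what the regex support discussed here looks like in later RabbitMQ scaler releases; the host variable and queue pattern are hypothetical, and the metadata keys (`useRegex`, `operation`, `mode`, `value`) should be double-checked against the current scaler docs:

```yaml
triggers:
- type: rabbitmq
  metadata:
    hostFromEnv: RABBITMQ_CONNECTIONSTRING  # hypothetical env var holding the amqp:// URI
    queueName: ^orders-.*                   # regex matching the 200+ queues
    useRegex: "true"
    operation: max                          # how matched queues are aggregated: sum, max, or avg
    mode: QueueLength
    value: "5"
```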
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
Hi,
I'm in a situation where I have to monitor around 100 queues and scale a specific pod type accordingly. The queues are set up with AWS SQS, but that is not too important. They are separated since SQS doesn't allow a filtered subscribe. Single queues can be considered hot queues for a time, but which queues are hot or cold changes frequently.
If one of the queues holds a specific amount of messages, I need to scale the environment, even if all other queues are e.g. at 0. So I have to come up with a generic solution that aggregates the "load situation". This should clearly be part of my application. But how can I report this data back to KEDA?
I could probably set up a metric in Prometheus, but I'm not using it yet, and this doesn't seem to be a straightforward solution. The simplest approach for me would be to provide an HTTP endpoint that exposes the current load information and that KEDA frequently polls. Thinking about it briefly, it looks like a simple solution that would make KEDA very universally applicable, but maybe I am missing a few aspects.
What could be a solution to feed my "custom data" back to KEDA?
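A footnote for readers with the same question: KEDA later added a Metrics API scaler that closely matches this "HTTP endpoint that KEDA polls" idea. A minimal sketch, with a hypothetical application endpoint and JSON response shape:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: custom-load-scaledobject     # hypothetical name
spec:
  scaleTargetRef:
    name: sqs-worker                 # hypothetical deployment
  triggers:
  - type: metrics-api
    metadata:
      # Hypothetical endpoint returning e.g. {"load": {"hottestQueue": 42}}
      url: "http://sqs-worker.default.svc.cluster.local:8080/api/load"
      valueLocation: "load.hottestQueue"
      targetValue: "5"
```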