[Question] Scaler Metric Extensibility #194

Closed
Renader opened this issue May 13, 2019 · 17 comments
Labels: needs-discussion, stale

Comments

@Renader

Renader commented May 13, 2019

Hi,

I'm in a situation where I have to monitor around 100 queues and scale a specific pod type accordingly. The queues are set up with AWS SQS, but that is not too important. They are separated since SQS doesn't allow a filtered subscribe. Single queues can be considered hot queues for a time, but which queues are hot or cold changes frequently.

If one of the queues holds more than a specific number of messages, I need to scale the environment, even if all the other queues are at 0, for example. So I have to come up with a generic solution that aggregates the "load situation". This should clearly be part of my application. But how can I report this data back to KEDA?

I could probably set up a metric in Prometheus, but I am not using it yet, and this doesn't seem like a straightforward solution. The simplest approach for me would be to provide an HTTP endpoint that exposes the current load information, which KEDA frequently polls. Thinking about it briefly, it looks like a simple solution that would make KEDA very universally applicable, but maybe I am missing a few aspects.

What could be a solution to feed my "custom data" back to KEDA?

@jeffhollan
Member

If I understand correctly, you have hundreds of SQS queues, but a deployment may contain pods that listen to a group of those queues? And what you are hoping for is a way to feed KEDA some custom metric, which could be a Prometheus query or just some API signature, that would be used to create the custom metric adapter and scale the pods?

The simplest way that KEDA would "just work" would be if each SQS queue were associated with a single pod / deployment. Then the ScaledObject could map to that queue (#138) and each pod could scale only if its correlated queues had work.

We also plan to enable Prometheus support to drive more custom scaling (#156), which could also be an option.

I'm interested to hear your thoughts, though, as I wonder if there's some other extensible way we could let you define your specific event metric outside of Prometheus. You could write your own custom metrics adapter on Kubernetes, but ideally even KEDA could make that "easier".
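
To make that 1:1 mapping concrete, here is a minimal sketch of a per-queue ScaledObject using an AWS SQS trigger. The trigger type aws-sqs-queue and its metadata keys (queueURL, awsRegion, queueLength) are assumptions based on KEDA's scaler conventions, and the queue URL and names are placeholders; check the scaler docs for the exact syntax.

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-queue-1-scaledobject
  namespace: default
  labels:
    deploymentName: sqs-worker-queue-1
spec:
  scaleTargetRef:
    deploymentName: sqs-worker-queue-1   # one deployment dedicated to this single queue
  triggers:
  - type: aws-sqs-queue                  # assumed trigger type
    metadata:
      # Placeholder queue URL for illustration
      queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/queue-1
      awsRegion: eu-west-1
      queueLength: "5"                   # scale out once the backlog exceeds ~5 messages

Each of the ~100 queues would need its own ScaledObject and deployment for this pattern to apply.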

@tomkerkhove
Member

This somewhat feels like what we are changing on Promitor as well - tomkerkhove/promitor#513

We used to have a 1-to-1 mapping from resource to metric (in KEDA's case, a scaling action), where it would be easier to manage if you could have an n-to-1 mapping instead.

In the case of Azure Storage Queue we have the following as of today:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
  namespace: default
  labels:
    deploymentName: azurequeue-function
spec:
  scaleTargetRef:
    deploymentName: azurequeue-function
  triggers:
  - type: azure-queue
    metadata:
      # Required
      queueName: functionsqueue
      # Optional
      connection: STORAGE_CONNECTIONSTRING_ENV_NAME # default AzureWebJobsStorage
      queueLength: "5" # default 5

(based on this sample)

What I think @Renader is looking for is something similar to:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
  namespace: default
  labels:
    deploymentName: azurequeue-function
spec:
  scaleTargetRef:
    deploymentName: azurequeue-function
  triggers:
  - type: azure-queue
    metadata:
      # Required
      queueNames:
      - functionsqueue-1
      - functionsqueue-2
      - functionsqueue-3
      - functionsqueue-4
      - functionsqueue-5
      # Optional
      connection: STORAGE_CONNECTIONSTRING_ENV_NAME # default AzureWebJobsStorage
      queueLength: "5" # default 5

The setup is the same, but applicable to more queues. We could go even further and use functionsqueue-*, but that's more complex.

Another approach would be to define a trigger for every queue, but that's a lot of duplication.

That said, you could argue that this should indeed be a custom metric, but that's less ideal if it is your own custom metric and you have to set up & manage Prometheus just for this.

That's just my 2 cents on the topic and what I've learned on the Promitor side.

@Renader
Author

Renader commented May 14, 2019

@tomkerkhove yes, you are right - a queueNames option would fulfill my need.

@jeffhollan, the scenario is a little bit different: it is not one deployment per queue, but one deployment for all queues, which needs to scale according to whatever the hottest/longest queue is. This is due to unique resource usage on the nodes; the pods get scheduled with a specific antiAffinity... But yeah, that is my implementation detail that I don't expect KEDA to solve. It would be great if KEDA delivered building blocks with which I could map my scenario.

This (#184) would probably also serve my need, as I could then implement a custom scaler that interacts directly with SQS or my deployment and does the use-case-specific logic.
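
For reference, a rough sketch of what such a custom-scaler hook-up could look like, assuming an external trigger type that delegates the scaling decision to a user-provided service; the scalerAddress and the metadata keys below are illustrative assumptions, not a confirmed API.

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-aggregate-scaledobject
  labels:
    deploymentName: sqs-worker
spec:
  scaleTargetRef:
    deploymentName: sqs-worker
  triggers:
  - type: external                       # assumed trigger type for a user-provided scaler
    metadata:
      # Hypothetical in-cluster service that aggregates load across all ~100 SQS queues
      scalerAddress: sqs-aggregator.default.svc.cluster.local:6000
      # Extra keys would be passed through to the custom scaler as-is (illustrative)
      queuePrefix: my-app-
      targetQueueLength: "5"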

@jeffhollan
Member

This helps. So, breaking this down:

  • Best case: we allow you to listen to multiple queues / event sources for a single ScaledObject, and KEDA will optimize for the event source that requires the most scaling.
  • Also OK, but not best: allow a dynamic custom scaler where you could implement the above logic yourself.

Let me know if that's accurate, and we can chat about it in our sync on Thursday.

@Renader
Author

Renader commented May 14, 2019

Yes, exactly; for my use case, that would be the best approach. Of course, I don't need KEDA to do all the work for me; the main thing is that I can do it somehow. Thank you very much so far!

@UNOPARATOR

Any update on this?
I would really love to see support for multiple queue names in a ScaledObject (in my case for RabbitMQ).

@zroubalik
Member

@UNOPARATOR @AmithGanesh is working on the implementation of this proposal. It will allow you to specify multiple triggers in one ScaledObject.

@UNOPARATOR

I believe this issue might be resolved with the current v2 release, but it is lacking the relevant documentation.
This comment from issue #476 provides a hopefully working sample.
But I would love to see official documentation on how to use multiple triggers properly before trying (although from the comment I found, it seems pretty straightforward). ;)

@tomkerkhove
Member

We most definitely welcome PRs for docs on https://github.com/kedacore/keda-docs, but if you need help, feel free to start a conversation.

@stale

stale bot commented Oct 13, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Oct 13, 2021
@eze-kiel

Has this issue been fixed? I'm currently facing the same problem :(

@stale stale bot removed the stale label Oct 18, 2021
@zroubalik
Member

@eze-kiel you can define multiple triggers in one ScaledObject.
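
For anyone looking for a concrete example before the docs land: a minimal sketch of a KEDA v2 ScaledObject with multiple RabbitMQ triggers, where KEDA scales on whichever trigger demands the most replicas. The RabbitMQ metadata keys (mode, value, hostFromEnv) are assumptions here; verify them against the scaler docs for your KEDA version.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer-scaledobject
spec:
  scaleTargetRef:
    name: rabbitmq-consumer              # v2 references the target workload by name
  triggers:
  - type: rabbitmq
    metadata:
      queueName: orders
      mode: QueueLength                  # assumed v2-style metadata
      value: "5"
      hostFromEnv: RABBITMQ_HOST
  - type: rabbitmq
    metadata:
      queueName: invoices
      mode: QueueLength
      value: "5"
      hostFromEnv: RABBITMQ_HOST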

@eze-kiel

Yes, you're right; however, in my case we have more than 200 queues for a single ScaledObject, so I'm looking for another solution (if there is one 😅).
Thanks for your answer!

@zroubalik
Member

What kind of scaler are we talking about? If it is RabbitMQ, there's regex support planned for the next release.
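
Once that lands, a single trigger could cover a whole family of queues. A speculative sketch of how it might look; the useRegex and operation keys are assumptions about the planned feature, so check the release notes and scaler docs once it ships.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-regex-scaledobject
spec:
  scaleTargetRef:
    name: rabbitmq-consumer
  triggers:
  - type: rabbitmq
    metadata:
      queueName: ^orders-.*              # regex matching all ~200 queues (assumed syntax)
      useRegex: "true"                   # assumed flag name
      operation: max                     # e.g. scale on the hottest matching queue (assumed)
      mode: QueueLength
      value: "5"
      hostFromEnv: RABBITMQ_HOST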

@eze-kiel

It's indeed the RabbitMQ scaler! That's awesome, I'm looking forward to the next release 🎉

preflightsiren pushed a commit to preflightsiren/keda that referenced this issue Nov 7, 2021
@stale

stale bot commented Dec 17, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Dec 17, 2021
@stale

stale bot commented Dec 25, 2021

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Dec 25, 2021