Possible memory leak in Keda-operator with Kafka scaler #814
Comments
@jeli8-cor thanks for submitting the issue. Are you willing to help a little bit with tracking down the bug? Could you please ping me on slack and we can sync?
Sure, I would love to help with that. How can I find you in slack?
@jeli8-cor great, you can find me on kubernetes slack, #keda channel
Following a few tests and debugging sessions with the dear friend @zroubalik, here is an update on the issue:
@jeli8-cor thanks a lot for the testing! We should speed up development (and release) of v2, in case we are not able to find the cause and provide a fix for this issue in v1.
Adding a note that this should be included in the changelog, so we don't forget.
Also experienced the same memory leak issues using keda v1.4.1 with the redis list scaler, but I upgraded to v1.5.0 and it looks like that resolved it.
@lallinger-arbeit would you mind sharing which scalers (and how many of them) you are using? Thanks
@zroubalik Yeah, we have 23 scaledobjects of which 17 use a kafka trigger and 6 a redis trigger
Having only redis scalers and running on 2.0, I can confirm it still exists for me. Will try 2.1, and if it is still a thing I will create a separate issue for it, as I don't see a direct reference to redis in this one.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
AFAIK this has not been resolved yet. But I will try to confirm this with the latest 2.4.0 version in the next few weeks, as I don't have the time at the moment.
Hi everyone,
I started to use keda with the kafka scaler, defined it pretty simply following the example, and after deploying it to production I noticed that every 3 days the pod reaches the Kubernetes memory limit and gets OOMKilled. The memory increases constantly and I'm not really sure why.
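For reference, a ScaledObject with a Kafka trigger along the lines of the documented example could look like the sketch below. The target name, broker address, topic, and consumer group are placeholders, not the reporter's actual configuration (KEDA v1 used the keda.k8s.io/v1alpha1 API with brokerList instead of bootstrapServers).

```yaml
# Illustrative only: a minimal v2-style ScaledObject with a Kafka trigger.
# All names and values below are placeholders, not taken from this issue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaledobject
spec:
  scaleTargetRef:
    name: my-consumer-deployment      # placeholder: the Deployment being scaled
  pollingInterval: 30
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.svc.cluster.local:9092   # placeholder broker list
        consumerGroup: my-consumer-group                 # placeholder consumer group
        topic: my-topic                                  # placeholder topic
        lagThreshold: "50"
```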
This is the deployment description (I added a few parameters such as limits, priorityClass and others):
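The actual manifest is not reproduced in this thread. Purely as an illustration, the kind of additions mentioned (limits, priorityClass) on the keda-operator Deployment might look like the hypothetical excerpt below; every value is an assumption rather than the reporter's configuration.

```yaml
# Hypothetical excerpt of a keda-operator Deployment; values are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keda-operator
  namespace: keda
spec:
  template:
    spec:
      priorityClassName: high-priority        # assumed custom priority class
      containers:
        - name: keda-operator
          image: kedacore/keda:1.4.1           # tag is a placeholder for the v1.x operator
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: "1"
              memory: 256Mi                    # assumed limit; consistent with the ~240M OOM described below
```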
The heap starts at around 30-40M and rises to almost 200M, then jumps to 240M and above, gets OOMKilled, and is restarted by Kubernetes.
Steps to Reproduce the Problem
Specifications