Possible Memory Leak #6
Comments
I'm using the docker container myself. I will check if there is a leak as well. 700 MB is definitely too much. Can you call the metrics endpoint and post the Go runtime stats from it, please?
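(Editorial note: a minimal sketch of how those Go runtime stats could be pulled from the exporter's `/metrics` endpoint. The address and port are assumptions here; adjust them to your own setup.)

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// NOTE: the exporter address/port below is an assumption, not a confirmed default.
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the Go runtime and process statistics (go_* and process_* series).
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "go_") || strings.HasPrefix(line, "process_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```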
Sure, but I don't think it's going to be all that useful since I restarted both running exporters earlier. I can leave it running for a couple of days and post it again.
Hi, thank you. I will compare it to my instances. Another thing: there was a new release on 26 June. Are you using this version? It fixed a memory leak. Perhaps just try docker pull kbudde/rabbitmq-exporter to check if there is a newer version. If it is the latest version, it should also log the version number and build date during startup. BR Kris
Hi, I checked my data without success: no leak in the Docker container image. Can you verify that you are using the latest release?
So I pulled the repo a couple of days ago (on the 7th) and rebuilt a Docker image from it (make promu, make docker with a slightly modified Dockerfile that sets various ENV variables, e.g. a different port and our company's test default credentials for the rabbits). It has been running over the weekend, and here's what that looks like after 3 days (it's around 900 MB).
I'm still trying to find the root cause of your problem. It's hard. I created some queues (2,000) and configured Prometheus to fetch the data every second. I noticed a high CPU load from the exporter (90%) and an increasing number of goroutines as the fetch requests were piling up. What I noticed in this case is that retrieving the metrics takes a long time.

What happens if you call the endpoint manually (/metrics)? It should return faster than your fetch interval. Can you please check the log of the exporter as well? It should be just "Metrics updated successfully" and the timestamp, and the time between two log entries should match the Prometheus fetch interval. How many queues/connections/exchanges do you have? How often is the data fetched?

Another thing I've noticed: I'm using the default HTTP client, which is not good as there is no timeout by default. I will prepare a new version with a fetch timeout.

The last thing: are you using an (HA)proxy between the exporter and RabbitMQ or Prometheus? If yes, can you post which one and its configuration?

BR
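(Editorial note: a minimal sketch of what such a fetch timeout could look like with Go's net/http client. The 10-second value and the helper name are illustrative assumptions, not the exporter's actual implementation.)

```go
package main

import (
	"net/http"
	"time"
)

// newHTTPClient returns an http.Client with an explicit timeout so that a
// hanging RabbitMQ management API call cannot keep goroutines piling up.
// The 10-second value is an arbitrary example, not the exporter's real setting.
func newHTTPClient() *http.Client {
	return &http.Client{Timeout: 10 * time.Second}
}
```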
Hi, BR
Hi @KekSfabrik, I will close the issue for now. If you still have issues, we'll have to check what we can do. BR Kris
I've really only been using a static build in a Docker container (built with cgo and a Dockerfile starting FROM scratch), and it seems to have a memory leak. I left it running for a week or so and it climbed to ~700 MB of RAM usage (10x as much as RabbitMQ itself; Prometheus is set to a 15 s scrape interval). Sadly I'm not familiar enough with Go to fix it myself and create a pull request.

I'd also like to suggest some better log output than just this (TTY):
time="2016-07-01T05:11:34Z" level=info msg="Metrics updated successfully."
Maybe log the time taken for the scrape, or periodically report how often the exporter was scraped in the last hour along with the min/max/avg scrape duration, or only report on errors.
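(Editorial note: a minimal sketch of what such timing output could look like with a structured logger like logrus, which matches the log format shown above. The scrape function, the field name, and the import path are illustrative assumptions, not the exporter's real API.)

```go
package main

import (
	"time"

	log "github.com/sirupsen/logrus"
)

// scrape is a stand-in for the exporter's metrics update; the name and the
// "duration" field are hypothetical, chosen only to illustrate the suggestion.
func scrape() {
	start := time.Now()

	// ... fetch data from the RabbitMQ management API and update metrics ...

	log.WithFields(log.Fields{
		"duration": time.Since(start),
	}).Info("Metrics updated successfully.")
}

func main() {
	scrape()
}
```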
Besides that: thank you for making this :)