Possible Memory Leak #6

Closed

KekSfabrik opened this issue Jul 1, 2016 · 8 comments

@KekSfabrik

I've really only been using a static build in a Docker container (built with cgo, Dockerfile FROM scratch), and it seems to have a memory leak: I left it running for a week or so and it climbed to ~700 MB of RAM usage (10x as much as RabbitMQ itself; Prometheus is set to a 15s scrape interval). Sadly I'm not familiar enough with Go to fix it myself and create a pull request.

I'd also like to suggest some better log output than just this (TTY):
time="2016-07-01T05:11:34Z" level=info msg="Metrics updated successfully."

Maybe the time taken for each scrape, or a periodic report of how often it was scraped in the last hour along with min/max/avg scrape duration, or only logging on errors.
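For illustration, here is a minimal sketch of what such logging could look like, assuming the exporter uses logrus (which its output format suggests); the function name updateMetrics and the other details below are placeholders, not the exporter's actual code.

package main

import (
    "time"

    log "github.com/sirupsen/logrus"
)

// updateMetrics stands in for the exporter's scrape/update routine.
func updateMetrics() error {
    time.Sleep(50 * time.Millisecond) // simulated work
    return nil
}

func main() {
    start := time.Now()
    err := updateMetrics()
    elapsed := time.Since(start)
    if err != nil {
        // Errors are always worth a log line.
        log.WithError(err).Error("Metrics update failed.")
        return
    }
    // Attaching the duration to the existing message keeps the output terse
    // while still surfacing slow scrapes.
    log.WithField("duration", elapsed).Info("Metrics updated successfully.")
}

Aggregated min/max/avg over the last hour would need some extra bookkeeping on top of this, but a per-scrape duration alone already answers most questions.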

Besides that: thank you for making this :)

@kbudde
Owner

kbudde commented Jul 1, 2016

I'm using the Docker container myself. I will check if there is a leak as well. 700 MB is definitely too much.

Can you call the metrics endpoint and post the Golang stats from it, please?

@KekSfabrik
Author

Sure, but I don't think it's going to be all that useful, since I restarted both running exporters earlier:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.000337196
go_gc_duration_seconds{quantile="0.25"} 0.00045423400000000003
go_gc_duration_seconds{quantile="0.5"} 0.000573033
go_gc_duration_seconds{quantile="0.75"} 0.0007699940000000001
go_gc_duration_seconds{quantile="1"} 0.003008751
go_gc_duration_seconds_sum 0.110275333
go_gc_duration_seconds_count 161
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 2240
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.2721e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 9.85093704e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.468694e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.318533e+06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 1.353728e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.2721e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.677056e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 2.3650304e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 57151
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 2.932736e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.467363573956452e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 15646
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.375684e+06
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 28800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 32768
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 269520
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 278528
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 2.9841558e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 4.114402e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.0518528e+07
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.0518528e+07
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 4.7094008e+07

I can leave it running for a couple of days and post it again.
This is from Prometheus, collected via cAdvisor (container_memory_usage_bytes{name=~".rabbitmq-exporter."}). It grows linearly from Tue Jun 28 2016 09:49:43 GMT+0200 (CEST) until Fri Jul 01 2016 08:22:58 GMT+0200 (CEST):

1467100183000  16478208   (~16 MB)
1467354178000  774664192  (~770 MB)

@kbudde
Owner

kbudde commented Jul 1, 2016

Hi, thank you.

I will compare it to my instances.

Another thing: there was a new release on 26 June. Are you using this version? It fixed a memory leak.

Perhaps just try docker pull kbudde/rabbitmq-exporter to check if there is a newer version. If it is the latest version, it should also log the version number and build date during start-up.

BR Kris

@kbudde
Owner

kbudde commented Jul 8, 2016

Hi,

I checked my data without success: no leak in the Docker container image.

Can you verify that you are using the latest release?

kbudde added the bug label Jul 8, 2016
kbudde self-assigned this Jul 8, 2016
@KekSfabrik
Author

So I pulled the repo a couple of days ago (on the 7th) and rebuilt a Docker image from it (make promu, make docker with a slightly modified Dockerfile that sets various ENV variables: a different port, our company's test default credentials for RabbitMQ, etc.). It has been running over the weekend, and here's what it looks like after 3 days (it's around 900 MB):

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.015512734
go_gc_duration_seconds{quantile="0.25"} 0.019409751000000003
go_gc_duration_seconds{quantile="0.5"} 0.021118249000000002
go_gc_duration_seconds{quantile="0.75"} 0.022702762
go_gc_duration_seconds{quantile="1"} 0.027683647000000002
go_gc_duration_seconds_sum 23.393296186
go_gc_duration_seconds_count 2330
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 63151
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 4.02893784e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 9.4129928792e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.617486e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 9.55495422e+08
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.39104e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.02893784e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.1074688e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.10894336e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 1.441806e+06
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 4.51969024e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.468220688342574e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 449394
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 9.56937228e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 28800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 32768
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 6.74172e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 6.897664e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 7.33026075e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.2348434e+07
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.6001408e+08
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.6001408e+08
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.56789856e+08

@kbudde
Owner

kbudde commented Jul 14, 2016

I'm still trying to find the root cause of your problem; it's hard.
I was able to reproduce a problem that looks like a memory leak.

I created some queues (2000), and Prometheus was configured to fetch the data every second. I noticed high CPU load from the exporter (90%) and an increasing number of goroutines as the fetch requests were piling up.

What I've noticed in this case is that retrieving the metrics takes a long time. What happens if you call the endpoint (/metrics) manually? It should return faster than your fetch interval.
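A quick way to measure that, as a minimal sketch (the exporter address and port here are assumptions; adjust them to your deployment):

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    start := time.Now()
    resp, err := http.Get("http://localhost:9090/metrics") // hypothetical exporter address
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    // Read the whole body so the timing covers generating all metrics.
    n, _ := io.Copy(io.Discard, resp.Body)
    fmt.Printf("read %d bytes in %s\n", n, time.Since(start))
}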

Can you please check the log of the exporter as well? It should be just "Metrics updated successfully." and the timestamp. The time between two log entries should be as long as the Prometheus fetch interval.

How many queues/connections/exchanges do you have? How often is the data fetched?

Another thing I've noticed: I'm using the default HTTP client, which is not good, as it has no timeout by default. I will prepare a new version with a fetch timeout (see the sketch below).
Can you please post the metrics again, but this time including both the go_* and process_* data? The process_* data includes some information about open file descriptors.
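As a rough illustration of that fix (not the exporter's actual code; the timeout value, URL, and omitted authentication are assumptions), giving the client a deadline makes a hung management-API call fail instead of letting goroutines from overlapping scrapes pile up:

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

// client replaces http.DefaultClient, which never times out on its own.
var client = &http.Client{Timeout: 30 * time.Second}

func main() {
    // Hypothetical RabbitMQ management-API URL; substitute your host (auth omitted for brevity).
    resp, err := client.Get("http://localhost:15672/api/overview")
    if err != nil {
        fmt.Println("fetch failed (or timed out):", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("got %d bytes from the management API\n", len(body))
}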

The last thing: are you using a (HA)Proxy between the exporter and RabbitMQ or Prometheus? If yes, can you post which one and its configuration?

BR

@kbudde
Owner

kbudde commented Jul 27, 2016

Hi,
@KekSfabrik
can you try the latest version to see if the leak is fixed?
Do you use a proxy in front of the exporter/RabbitMQ?

BR

@kbudde
Owner

kbudde commented Aug 18, 2016

Hi,

@KekSfabrik I will close the issue for now. If you still have issues, we will have to check what we can do.

BR

Kris
