substantial GC overhead with per message deflate compression #193

Closed

y3llowcake opened this issue Dec 15, 2016 · 4 comments

Comments

@y3llowcake
Contributor

I'm noticing large spikes in heap allocations and GC time when message write volume is high.
My profiles point to flate.NewWriter() in compressNoContextTakeover().

I believe this causes allocation of a new sliding window (~32 KB?) for each compressed write. This seems avoidable if the writers are recycled via flate.Writer.Reset().

We should probably allocate the flate writer statically per connection, and also provide an option to obtain flate writers from a sync.Pool. A minimal sketch of the pooling idea is below.
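
Here is a hedged sketch of the write-side pooling, assuming a pooled *flate.Writer that is Reset onto the destination before each message; the compression level and the compressPayload helper are placeholders for illustration, not the package's actual code:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"sync"
)

// Pool of flate writers so each compressed message write reuses an existing
// sliding window instead of allocating a fresh one via flate.NewWriter.
var flateWriterPool = sync.Pool{
	New: func() interface{} {
		// Compression level 3 is an arbitrary choice for this sketch.
		w, _ := flate.NewWriter(nil, 3)
		return w
	},
}

// compressPayload is a hypothetical helper: borrow a writer, point it at the
// destination with Reset, write and flush, then return it to the pool.
func compressPayload(dst *bytes.Buffer, payload []byte) error {
	fw := flateWriterPool.Get().(*flate.Writer)
	defer flateWriterPool.Put(fw)

	fw.Reset(dst)
	if _, err := fw.Write(payload); err != nil {
		return err
	}
	return fw.Flush()
}

func main() {
	var buf bytes.Buffer
	if err := compressPayload(&buf, []byte("hello, websocket")); err != nil {
		panic(err)
	}
	fmt.Printf("compressed to %d bytes\n", buf.Len())
}
```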

@garyburd
Contributor

Start by using a sync.Pool for flate writers and see how that works. I want to avoid adding new knobs in the API.

@y3llowcake
Contributor Author

#194

@y3llowcake
Contributor Author

readers: #195
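
Likewise, a hedged sketch of pooling on the read side, assuming the pooled decompressor is rewound onto each compressed message through the flate.Resetter interface; the decompressPayload helper is illustrative only:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
	"sync"
)

// Pool of decompressors; flate.NewReader otherwise allocates state for every
// compressed message read.
var flateReaderPool = sync.Pool{
	New: func() interface{} {
		return flate.NewReader(nil)
	},
}

// decompressPayload is a hypothetical helper: borrow a reader, rewind it onto
// the compressed input via flate.Resetter, and copy out the plaintext.
func decompressPayload(dst io.Writer, compressed []byte) error {
	fr := flateReaderPool.Get().(io.ReadCloser)
	defer flateReaderPool.Put(fr)

	if err := fr.(flate.Resetter).Reset(bytes.NewReader(compressed), nil); err != nil {
		return err
	}
	_, err := io.Copy(dst, fr)
	return err
}

func main() {
	// Round-trip: compress with a flate writer, then decompress via the pool.
	var compressed bytes.Buffer
	fw, _ := flate.NewWriter(&compressed, flate.DefaultCompression)
	fw.Write([]byte("hello, websocket"))
	fw.Close()

	var plain bytes.Buffer
	if err := decompressPayload(&plain, compressed.Bytes()); err != nil {
		panic(err)
	}
	fmt.Println(plain.String())
}
```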

@garyburd
Contributor

garyburd commented Jan 1, 2017

Fixed by 2db2f66 and 3ab3a8b.

@garyburd garyburd closed this as completed Jan 1, 2017
@gorilla gorilla locked and limited conversation to collaborators Feb 14, 2018
2 participants