I can't post the actual code, but what it does is basically:
```go
f, _ := os.Create(path)
bufF := bufio.NewWriter(f) // this may very well be unnecessary
gzF := gzip.NewWriter(bufF) // pgzip, imported as gzip

buf := make([]byte, 0, 512)
for _, record := range records {
	buf = record.writeTo(buf) // appends the record's encoding
	gzF.Write(buf)
	buf = buf[:0]
}
```
So: lots of small records, each written to a byte slice first (with strconv.AppendUint and the like), and the byte slice is then handed to pgzip. Nothing fancy.
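For illustration, a writeTo in this style would look roughly like the following; the record fields here are made up, only the append-and-return pattern matters:

```go
import "strconv"

// hypothetical record; the real fields aren't shown here
type record struct {
	id uint64
	ts int64
}

// writeTo appends a textual encoding of r to dst and returns the grown
// slice, strconv.Append*-style, so the caller can reuse the buffer
func (r record) writeTo(dst []byte) []byte {
	dst = strconv.AppendUint(dst, r.id, 10)
	dst = append(dst, ',')
	dst = strconv.AppendInt(dst, r.ts, 10)
	return append(dst, '\n')
}
```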
Here is a snippet of the alloc_space heap profile for a program using pgzip to write a ~770MB file:
During compression, the program's memory usage grows by ~5GB. With regular gzip, memory usage stays constant during compression.
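(One way to see this without a profiler is to sample runtime.MemStats while writing; a minimal sketch, independent of the actual code:)

```go
import (
	"log"
	"runtime"
	"time"
)

// logHeap prints the live heap size once per second until stop is closed;
// with pgzip this climbs steadily, with compress/gzip it stays flat
func logHeap(stop <-chan struct{}) {
	t := time.NewTicker(time.Second)
	defer t.Stop()
	var m runtime.MemStats
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			runtime.ReadMemStats(&m)
			log.Printf("HeapAlloc = %d MB", m.HeapAlloc>>20)
		}
	}
}
```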
Am I reading this right that `dictFlatePool` is not helping as it should? Shouldn't the number of compressors be limited by the number of blocks?
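For reference, I'd expect something like the following to bound the in-flight memory via pgzip's SetConcurrency; the 256KB/4 numbers are arbitrary picks for illustration, not the library defaults:

```go
package main

import (
	"bufio"
	"log"
	"os"

	"github.com/klauspost/pgzip"
)

func main() {
	f, err := os.Create("out.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	bw := bufio.NewWriter(f)
	gz := pgzip.NewWriter(bw)

	// blockSize*blocks caps how much uncompressed data pgzip holds in
	// flight; 256KB blocks x 4 concurrent blocks is an arbitrary choice
	if err := gz.SetConcurrency(256<<10, 4); err != nil {
		log.Fatal(err)
	}

	if _, err := gz.Write([]byte("record data\n")); err != nil {
		log.Fatal(err)
	}
	if err := gz.Close(); err != nil {
		log.Fatal(err)
	}
	if err := bw.Flush(); err != nil {
		log.Fatal(err)
	}
}
```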