adding compressor to archive/compress #191
Conversation
Please rework this good feature into a new sub-package,
and keep the existing test framework.
archive/compress/compressor.go
Outdated
This file must be split out into a complete sub-package with a New function,
and if the public struct is kept, prevent any segfault when an empty struct is used without the New function.
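A minimal sketch of what this comment asks for, assuming a `New` constructor and a guarded zero value (the names `Compressor`, `ErrNotInitialized`, and the buffer field are illustrative, not the PR's final API):

```go
package main

import (
	"bytes"
	"errors"
)

// ErrNotInitialized is returned when the zero value is used without New.
var ErrNotInitialized = errors.New("compress: compressor not initialized, use New")

// Compressor is the public type; its zero value is safe but unusable.
type Compressor struct {
	buf *bytes.Buffer
}

// New returns a ready-to-use Compressor.
func New() *Compressor {
	return &Compressor{buf: bytes.NewBuffer(nil)}
}

// Read guards against a zero-value struct instead of panicking on a nil buffer.
func (c *Compressor) Read(p []byte) (int, error) {
	if c == nil || c.buf == nil {
		return 0, ErrNotInitialized
	}
	return c.buf.Read(p)
}
```

The guard turns a would-be nil-pointer panic into a clear error for callers that skip the constructor.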
archive/compressor_test.go
Outdated
This test is not like the other tests in the same package.
Please use ginkgo / gomega and integrate it into the existing test suite.
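A sketch of the shape such a spec could take, assuming the ginkgo v2 import path and a bootstrap (`RunSpecs`) that already exists in the suite; the commented constructor call is a placeholder for the new sub-package's API:

```go
package compress_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Registered at package level so the existing RunSpecs bootstrap
// picks it up automatically alongside the other specs.
var _ = Describe("compressor", func() {
	It("returns io.EOF once the buffer is drained", func() {
		// c := compress.New() // hypothetical constructor of the new sub-package
		// n, err := c.Read(make([]byte, 8))
		Expect(true).To(BeTrue()) // replace with assertions on n and err
	})
})
```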
	return 0, io.EOF
}

if c.buffer.Len() == 0 {
A loop checking c.buffer.Len() < cap(outputBuffer) could be better:
a compression algorithm can work internally chunk by chunk and keep some data back until the compression process finishes and the writer is closed.
To prevent that, fill a buffer until the awaited length is in the buffer.
Be careful to clean the internal buffer after reading.
import "io"

// Read for compressor compresses the data and reads it from the buffer in chunks.
func (c *compressor) Read(outputBuffer []byte) (n int, err error) {
Try to keep the standard parameters of the io Read interface: Read(p []byte) (n int, err error).
If the io interface doesn't specify the parameter name, choose one; but when the standard interface does specify one, try to keep it.
// fill handles compressing data from the source and writing to the buffer.
func (c *compressor) fill() (n int, err error) {

var tempBuffer = make([]byte, ChunkSize)
Suggestion: use the capacity of the read buffer p instead of an arbitrary size.
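One way to apply this suggestion, as a sketch (the helper name and fallback size are illustrative): size the temporary buffer from the caller's `p` rather than a fixed `ChunkSize` constant.

```go
package main

// chunkSizeFor derives the temporary-buffer size from the caller's
// read buffer instead of an arbitrary ChunkSize constant.
func chunkSizeFor(p []byte) int {
	if n := cap(p); n > 0 {
		return n
	}
	return 512 // illustrative fallback when the caller passes an empty buffer
}
```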
import "io"

// Read for decompressor reads decompressed data from the buffer in chunks.
func (d *decompressor) Read(outputBuffer []byte) (n int, err error) {
Why not use the reader of the algorithm?
By default, all algorithms implement:
- read => decompress
- write => compress
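This is how the standard library's compression packages already behave; a round trip with `compress/gzip` shows the writer compressing and the reader decompressing, with no hand-rolled Read needed:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
)

// roundTrip compresses data with the algorithm's Writer and
// decompresses it with the algorithm's Reader.
func roundTrip(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf) // write => compress
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // flush the final compressed chunk
		return nil, err
	}
	zr, err := gzip.NewReader(&buf) // read => decompress
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```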
type Helper interface {
	Compress(io.Reader) error
	Decompress(io.Reader) error
	Read([]byte) (int, error)
Use io.ReadCloser instead of redeclaring the Read / Close functions.
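Concretely, the interface can embed `io.ReadCloser` so the Read and Close signatures come from the standard library instead of being redeclared (a sketch of the suggestion, not the PR's final interface):

```go
package main

import "io"

// Helper embeds io.ReadCloser rather than redeclaring Read and Close,
// so any io.ReadCloser-compatible type satisfies that part for free.
type Helper interface {
	io.ReadCloser
	Compress(io.Reader) error
	Decompress(io.Reader) error
}
```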
}

// bufferRWCloser wraps a bytes.Buffer to implement io.ReadWriteCloser.
type bufferRWCloser struct {
Move this into a standalone file at golib/archive/bufferCloser.
}

// newBufferCloser converts a bytes.Buffer to an io.ReadWriteCloser.
func newBufferRWCloser(buffer *bytes.Buffer) io.ReadWriteCloser {
Keep the standard naming convention: NopBufferCloser(buf *bytes.Buffer).
compressor *compressor
decompressor *decompressor
algo compress.Algorithm
}
compressor / decompressor must implement the same interface,
so that only the right instance and the algorithm are stored.
Please use shorter variable names where possible.
Make it safe for concurrent use.
Description
Added the compressor model to archive/compress, plus tests.