adding compressor to archive/compress #191

Merged (5 commits) on Oct 23, 2024

Conversation

sabouaram (Contributor)

Type of Change

Please select the type of change your PR introduces by checking the appropriate box:

  • Fixes an issue
  • Adds a new feature
  • Refactor
  • Documentation update
  • Other (please describe it in the Description Section)

Description

Added the compressor model to archive/compress, plus tests.

@nabbar (Owner) left a comment:

Please rework this good feature into a new sub-package, and keep the existing test framework.

@nabbar (Owner):

This file must be split into a complete sub-package with a New function, and if the public struct is kept, prevent any segfault when an empty struct is used without calling New.
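A minimal sketch of the zero-value guard the reviewer asks for, under the assumption that the package keeps a public struct alongside a New constructor (the names Compressor, New, Do, and ErrInvalidInstance are illustrative, not from the PR):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrInvalidInstance is returned when a zero value is used without New.
var ErrInvalidInstance = errors.New("invalid instance: use New")

// Compressor is a public struct; init records whether New built it.
type Compressor struct {
	init bool
}

// New is the only supported constructor.
func New() *Compressor { return &Compressor{init: true} }

// Do checks the receiver so a nil or zero-value Compressor returns an
// error instead of panicking with a nil dereference later on.
func (c *Compressor) Do() error {
	if c == nil || !c.init {
		return ErrInvalidInstance
	}
	return nil
}

func main() {
	var zero Compressor
	fmt.Println(zero.Do())  // invalid instance: use New
	fmt.Println(New().Do()) // <nil>
}
```

The guard costs one boolean per struct but turns every misuse of a bare `Compressor{}` into a recoverable error.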

@nabbar (Owner):

This test is not like the other tests in the same package. Please use Ginkgo/Gomega and integrate it into the existing test suite.

return 0, io.EOF
}

if c.buffer.Len() == 0 {
@nabbar (Owner):

A loop with a check of c.buffer.Len() < cap(outputBuffer) could be better: a compression algorithm can work internally by chunk and keep some data back until the compression process finishes and the writer is closed. To prevent that, fill a buffer until the awaited length is available. Be careful to clean the internal buffer after reading.

import "io"

// Read for compressor compresses the data and reads it from the buffer in chunks.
func (c *compressor) Read(outputBuffer []byte) (n int, err error) {
@nabbar (Owner):

Try to keep the standard parameter names of the io.Reader interface: Read(p []byte) (n int, err error). If the io interface doesn't specify a parameter name, choose one; but if the standard interface specifies one, keep it.

// fill handles compressing data from the source and writing to the buffer.
func (c *compressor) fill() (n int, err error) {

var tempBuffer = make([]byte, ChunkSize)
@nabbar (Owner):

Suggestion: use the capacity of the read buffer p instead of an arbitrary size.

import "io"

// Read for decompressor reads decompressed data from the buffer in chunks.
func (d *decompressor) Read(outputBuffer []byte) (n int, err error) {
@nabbar (Owner):

Why not use the algorithm's reader? By default, all algorithms implement:

  • read => decompress
  • write => compress

type Helper interface {
	Compress(io.Reader) error
	Decompress(io.Reader) error
	Read([]byte) (int, error)
@nabbar (Owner):

Use io.ReadCloser instead of redeclaring the Read/Close functions.

}

// bufferRWCloser wraps a bytes.Buffer to implement io.ReadWriteCloser.
type bufferRWCloser struct {
@nabbar (Owner):

Move this into a standalone file under golib/archive/bufferCloser.

}

// newBufferCloser converts a bytes.Buffer to an io.ReadWriteCloser.
func newBufferRWCloser(buffer *bytes.Buffer) io.ReadWriteCloser {
@nabbar (Owner):

Keep the standard naming convention: NopBufferCloser(buf *bytes.Buffer).

	compressor   *compressor
	decompressor *decompressor
	algo         compress.Algorithm
}
@nabbar (Owner):

The compressor/decompressor must implement the same interface, so that only the right instance and the algorithm are stored. Please use shorter variable names where possible, and make it safe for concurrent use.

@nabbar nabbar changed the base branch from master to archive-helper-wip October 23, 2024 11:51
@nabbar nabbar merged commit db96f9d into nabbar:archive-helper-wip Oct 23, 2024
4 checks passed
2 participants