Added support for LZ4_RAW compression. (#1604) #2943
Conversation
* This adds an implementation of the LZ4_RAW codec using the LZ4 block compression algorithm. (apache#1604)
* This commit uses the formula from https://stackoverflow.com/questions/25740471/lz4-library-decompressed-data-upper-bound-size-estimation to estimate the uncompressed size. As noted in that thread, the formula over-estimates the size, but it is probably the best we can get with the current decompress API, since the size of an arrow LZ4_RAW block is not prepended to the block.
* Another option would be to take the C++ approach and bypass the API (https://github.com/apache/arrow/blob/master/cpp/src/arrow/util/compression_lz4.cc#L343). That approach consists of relying on the output_buffer capacity to guess the uncompress_size. It works because `serialized_reader.rs` already knows the uncompressed size: it reads it from the page header and allocates the output_buffer with a capacity equal to the uncompress_size (https://github.com/marioloko/arrow-rs/blob/master/parquet/src/file/serialized_reader.rs#L417). I did not follow this approach because:
  1. It is too hacky.
  2. It would limit the use cases of the `decompress` API, as the caller would need to know to allocate the right uncompressed_size.
  3. It is not compatible with the current set of tests. However, new tests could be created.
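The over-estimation described above can be sketched as follows. This is an illustrative sketch, not the exact expression from the PR: the function name `max_uncompressed_size` is taken from the diff below, but the constants here simply encode the fact that an LZ4 block can expand by at most roughly a factor of 255, which is the idea behind the linked StackOverflow bound.

```rust
// Sketch of a worst-case decompressed-size bound for an LZ4 block.
// Assumption: this mirrors the idea in the linked StackOverflow thread
// (each compressed byte can expand to at most ~255 output bytes),
// not necessarily the exact formula used in the PR.
fn max_uncompressed_size(compressed_len: usize) -> usize {
    // 255x expansion bound plus a little headroom for block overhead.
    compressed_len * 255 + 16
}

fn main() {
    // A 4 KiB compressed page reserves roughly 1 MiB of output capacity,
    // which shows why plumbing the real size down would be preferable.
    let bound = max_uncompressed_size(4096);
    assert!(bound >= 4096); // the bound is never smaller than the input
    println!("{bound}");
}
```

The over-allocation is the price of not knowing the real size; the follow-up discussion below is about removing that cost.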
Thank you
    output_buf: &mut Vec<u8>,
) -> Result<usize> {
    let offset = output_buf.len();
    let required_len = max_uncompressed_size(input_buf.len());
Longer term it would be nice to plumb the decompressed size down, as we do actually know what it is from the page header
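One hypothetical shape for that improvement (all names here are my own sketch, not the actual arrow-rs API) is to thread the size from the page header through `decompress` as an optional hint:

```rust
// Hypothetical sketch (not the actual arrow-rs trait): pass the
// uncompressed size from the page header down to the codec so it can
// size the output buffer exactly instead of over-estimating.
trait Codec {
    fn decompress(
        &mut self,
        input_buf: &[u8],
        output_buf: &mut Vec<u8>,
        uncompress_size: Option<usize>, // hint from the page header
    ) -> Result<usize, String>;
}

// Toy codec that just copies bytes, to show how the hint is consumed.
struct Identity;

impl Codec for Identity {
    fn decompress(
        &mut self,
        input_buf: &[u8],
        output_buf: &mut Vec<u8>,
        uncompress_size: Option<usize>,
    ) -> Result<usize, String> {
        // Reserve exactly the known size when available; otherwise fall
        // back to a worst-case estimate (255x is an LZ4-style bound).
        let required = uncompress_size.unwrap_or(input_buf.len() * 255);
        output_buf.reserve(required);
        output_buf.extend_from_slice(input_buf);
        Ok(input_buf.len())
    }
}

fn main() {
    let mut out = Vec::new();
    let n = Identity.decompress(b"page data", &mut out, Some(9)).unwrap();
    assert_eq!(n, 9);
    assert_eq!(out, b"page data".to_vec());
}
```

Callers that already read the page header (like `serialized_reader.rs`) would pass `Some(size)`, while callers without that knowledge would pass `None` and keep the current over-estimating behavior.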
Created #2956
I can make those changes; I guess it is quite straightforward. Should I add the enhancement to this PR, or create a new one for that specific change?
Let's do it as a separate PR, as the other codecs may also benefit from it
Then I will wait for this PR to merge first, as the enhancement's code changes will probably conflict with this PR.
I took the liberty of fixing the clippy lints and adding a basic integration test
Benchmark runs are scheduled for baseline = 880c4d9 and contender = 4e1247e. 4e1247e is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Which issue does this PR close?
Closes #1604
Rationale for this change
This PR adds support for the LZ4_RAW compression codec (#1604), implemented with the LZ4 block compression algorithm. Because the uncompressed size of an arrow LZ4_RAW block is not prepended to the block, the codec over-estimates the decompressed size using a worst-case bound. An alternative would have been to rely on the output buffer's capacity to guess the uncompressed size, since `serialized_reader.rs` already knows it from the page header and allocates the output_buffer accordingly (https://github.com/marioloko/arrow-rs/blob/master/parquet/src/file/serialized_reader.rs#L417); I did not follow that approach because it is too hacky and would limit the use cases of the `decompress` API, as the caller would need to know to allocate the right uncompressed_size.
What changes are included in this PR?
The implementation of an LZ4_RAW codec using LZ4 block algorithm.
Are there any user-facing changes?
No changes in the API, but parquet files compressed using LZ4_RAW will be supported now.