
Added support for LZ4_RAW compression. (#1604) #2943

Merged — 3 commits merged into apache:master from the lz4_raw_compression branch on Oct 27, 2022

Conversation

marioloko (Contributor)
Which issue does this PR close?

Closes #1604

Rationale for this change

What changes are included in this PR?

The implementation of an LZ4_RAW codec using the LZ4 block compression algorithm.

Are there any user-facing changes?

No changes to the API, but Parquet files compressed using LZ4_RAW are now supported.
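For context, here is a minimal sketch of the block-format round trip the new codec builds on, using the `lz4` crate's `block` module (illustrative only, not the exact code in this PR). LZ4_RAW stores bare LZ4 blocks, with no frame header and no prepended content size:

```rust
use lz4::block::{compress, decompress, CompressionMode};

fn main() -> std::io::Result<()> {
    let data = b"some repetitive data some repetitive data";

    // `prepend_size = false`: LZ4_RAW blocks carry no embedded size.
    let compressed = compress(data, Some(CompressionMode::DEFAULT), false)?;

    // Because the block carries no size, the decompressor must be told
    // (or must over-estimate) the uncompressed size.
    let decompressed = decompress(&compressed, Some(data.len() as i32))?;
    assert_eq!(&decompressed[..], &data[..]);
    Ok(())
}
```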

* This adds the implementation of the LZ4_RAW codec using the LZ4 block compression algorithm. (apache#1604)
* This commit uses the formula from https://stackoverflow.com/questions/25740471/lz4-library-decompressed-data-upper-bound-size-estimation to estimate an upper bound on the uncompressed size. As noted in that thread, this formula over-estimates, but it is probably the best we can do with the current decompress API, since the uncompressed size of an LZ4_RAW block is not prepended to the block (see the sketch after this list).
* Another option would be to take the C++ approach and bypass the API (https://github.com/apache/arrow/blob/master/cpp/src/arrow/util/compression_lz4.cc#L343). That approach relies on the output buffer's capacity to infer the uncompressed size. It works because `serialized_reader.rs` already knows the uncompressed size, as it reads it from the page header and allocates the output buffer with a capacity equal to that size (https://github.com/marioloko/arrow-rs/blob/master/parquet/src/file/serialized_reader.rs#L417). I did not follow this approach because:
    1. It is too hacky.
    2. It would limit the use cases of the `decompress` API, as the caller would need to know to allocate the right uncompressed size.
    3. It is not compatible with the current set of tests. However, new tests could be created.
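Concretely, a sketch of that decompress path under the over-estimation approach. Note this is hedged: the bound below is the simple 255x worst case rather than the exact constants from the thread, and `decompress_lz4_raw` is an illustrative helper name, not the PR's function:

```rust
use lz4::block::decompress_to_buffer;

// Illustrative upper bound on the uncompressed size of an LZ4 block:
// LZ4's maximum compression ratio is roughly 255x. The linked Stack
// Overflow thread derives a tighter formula; this simpler bound
// over-estimates even more, but is safe.
fn max_uncompressed_size(compressed_len: usize) -> usize {
    compressed_len.saturating_mul(255)
}

// Decompress an LZ4_RAW block whose uncompressed size is unknown:
// over-allocate using the bound, then truncate to the byte count
// actually written by the decompressor.
fn decompress_lz4_raw(input_buf: &[u8], output_buf: &mut Vec<u8>) -> std::io::Result<usize> {
    let offset = output_buf.len();
    let required_len = max_uncompressed_size(input_buf.len());
    output_buf.resize(offset + required_len, 0);
    let n = decompress_to_buffer(input_buf, Some(required_len as i32), &mut output_buf[offset..])?;
    output_buf.truncate(offset + n);
    Ok(n)
}
```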
@github-actions bot added the parquet label (Changes to the parquet crate) on Oct 26, 2022
@tustvold (Contributor) left a comment:

Thank you

```rust
    output_buf: &mut Vec<u8>,
) -> Result<usize> {
    // Decompressed bytes are appended after any existing output.
    let offset = output_buf.len();
    // Over-estimate the uncompressed size, since an LZ4_RAW block
    // does not record it.
    let required_len = max_uncompressed_size(input_buf.len());
```

Contributor:

Longer term it would be nice to plumb the decompressed size down, as we do actually know what it is from the page header.
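One possible shape for that follow-up, sketched as a hypothetical signature (the actual work was tracked separately, as the next comment notes):

```rust
use std::io::Result;

// Hypothetical revision of the codec trait: the caller passes the
// uncompressed size it read from the Parquet page header, and the
// codec falls back to an over-estimate only when the size is unknown.
trait Codec {
    fn decompress(
        &mut self,
        input_buf: &[u8],
        output_buf: &mut Vec<u8>,
        uncompress_size: Option<usize>, // from the page header, if known
    ) -> Result<usize>;
}
```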

Contributor:

Created #2956

Contributor (Author):

I can make the changes for that; I guess it is quite straightforward. Should I add the enhancement to this PR, or create a new one for that specific change?

Contributor:
Let's do it as a separate PR, as the other codecs may also benefit from it

Contributor (Author):

Then I will wait for this PR to merge first, as the enhancement changes will probably conflict with this PR.

@tustvold (Contributor):

I took the liberty of fixing the clippy lints and adding a basic integration test

@tustvold merged commit 4e1247e into apache:master on Oct 27, 2022
@ursabot commented on Oct 27, 2022:

Benchmark runs are scheduled for baseline = 880c4d9 and contender = 4e1247e. 4e1247e is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Skipped ⚠️ Benchmarking of arrow-rs-commits is not supported on ec2-t3-xlarge-us-east-2] ec2-t3-xlarge-us-east-2
[Skipped ⚠️ Benchmarking of arrow-rs-commits is not supported on test-mac-arm] test-mac-arm
[Skipped ⚠️ Benchmarking of arrow-rs-commits is not supported on ursa-i9-9960x] ursa-i9-9960x
[Skipped ⚠️ Benchmarking of arrow-rs-commits is not supported on ursa-thinkcentre-m75q] ursa-thinkcentre-m75q
Buildkite builds:
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java

@marioloko deleted the lz4_raw_compression branch on October 27, 2022 at 21:28
Labels: parquet (Changes to the parquet crate)

Successfully merging this pull request may close these issues: Update parquet thrift to 2.9.0 to support LZ4_RAW compression (#1604)

3 participants