perf: Batch parquet primitive decoding #17462
Merged
Conversation
coastalwhite requested review from ritchie46, stinodego, orlp and c-peters as code owners on July 6, 2024 12:57
coastalwhite changed the title from "perf: batch parquet primitive decoding" to "perf: Batch parquet primitive decoding" on Jul 6, 2024
github-actions bot added the performance (Performance issues or improvements), python (Related to Python Polars), and rust (Related to Rust Polars) labels and removed the "title needs formatting" label on Jul 6, 2024
This is a rather large change that fundamentally alters how hybrid-RLE and Parquet decoding work. It introduces two concepts that speed up the Parquet reader while using less memory than before, at the cost of some added code complexity.

First, a benchmark using the NYC Yellow-Taxi dataset (decoding the whole dataset 100x) shows the following results.

No maximum threads:

```
Benchmark 1: After Optimization
  Time (mean ± σ):      4.918 s ±  0.076 s    [User: 28.748 s, System: 2.486 s]
  Range (min … max):    4.819 s …  5.064 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):      7.333 s ±  2.144 s    [User: 60.374 s, System: 3.054 s]
  Range (min … max):    5.416 s … 11.132 s    10 runs

Summary
  After Optimization ran
    1.49 ± 0.44 times faster than Before Optimization
```

Maximum threads = 1:

```
Benchmark 1: After Optimization
  Time (mean ± σ):     18.452 s ±  0.054 s    [User: 16.058 s, System: 2.325 s]
  Range (min … max):   18.332 s … 18.511 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):     27.027 s ±  0.062 s    [User: 24.668 s, System: 2.271 s]
  Range (min … max):   26.912 s … 27.105 s    10 runs

Summary
  After Optimization ran
    1.46 ± 0.01 times faster than Before Optimization
```

This PR introduces the concepts of a `Translator` and a `BatchedCollector`. The `Translator` trait maps hybrid-RLE-encoded values to an arbitrary set of output values. The `HybridRleDecoder` can then collect and call the translator with batches of values. This minimizes the number of iterator polls, and no heap allocation is needed beyond the output buffer. It does, however, mean that the whole `HybridRleDecoder` needs to be aware of the `Translator` trait.

Furthermore, the `HybridRleDecoder` can now buffer by itself, instead of relying on the `BufferedHybridRleDecoderIter` that was used before. Again, this keeps memory consumption minimal and avoids constant polling.
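To make the batching idea concrete, here is a minimal sketch of what a `Translator`-style trait could look like. All names and signatures here are illustrative assumptions for this sketch, not the actual polars-parquet API:

```rust
// Sketch: a Translator maps raw hybrid-RLE values (u32 indices) into output
// values. A decoder hands it whole batches instead of yielding one item per
// Iterator::next call, so an RLE run of N repeats costs one call, not N polls.
trait Translator<O> {
    fn translate(&self, value: u32) -> O;

    // Batched entry point: translate a slice of decoded values at once,
    // writing directly into the output buffer (the only heap allocation).
    fn translate_slice(&self, target: &mut Vec<O>, source: &[u32]) {
        target.extend(source.iter().map(|&v| self.translate(v)));
    }
}

// Example translator: map dictionary indices to values from a dictionary page.
struct DictionaryTranslator<'a, O> {
    dict: &'a [O],
}

impl<'a, O: Copy> Translator<O> for DictionaryTranslator<'a, O> {
    fn translate(&self, value: u32) -> O {
        self.dict[value as usize]
    }
}

fn main() {
    let dict = [10i64, 20, 30];
    let translator = DictionaryTranslator { dict: &dict };
    let mut out = Vec::new();
    // An RLE run "index 1 repeated 4 times" is handled in one batched call.
    translator.translate_slice(&mut out, &[1, 1, 1, 1]);
    assert_eq!(out, vec![20, 20, 20, 20]);
    println!("{:?}", out);
}
```

The key design point is the default `translate_slice` method: a specific translator can override it to exploit run structure (e.g. translate a repeated value once and fill), while the per-value `translate` remains the fallback.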
The `BatchedCollector` is essentially a wrapper around the `Pushable` trait that automatically optimizes sequential pushes of valid and invalid values. It also allows for efficient skipping of values.

Overall, this change significantly speeds up the Parquet reader, and extensive testing was done to ensure that no invalid data is produced. It is difficult to test all edge cases, however.

From here, we can start incorporating the `BatchedCollector` and `Translator` traits in more places. In general, the `HybridRleDecoder` iterator implementation should effectively never be used anymore.
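The batching of valid/invalid pushes described above can be sketched as follows. This is a simplified, self-contained illustration of the idea (the real `BatchedCollector` is generic over `Pushable`; the types and method names here are assumptions for the sketch):

```rust
// Sketch: instead of one branch per element to decide valid vs. null, buffer
// runs of consecutive nulls as a counter and only materialize them in bulk
// when a valid value (or the end of input) forces a flush.
struct BatchedCollector {
    values: Vec<i64>,
    validity: Vec<bool>,
    pending_nulls: usize,
}

impl BatchedCollector {
    fn new() -> Self {
        Self { values: Vec::new(), validity: Vec::new(), pending_nulls: 0 }
    }

    // Nulls are only counted here; no per-element work is done yet.
    fn push_n_nulls(&mut self, n: usize) {
        self.pending_nulls += n;
    }

    // A run of valid values flushes buffered nulls, then bulk-copies.
    fn push_n_valids(&mut self, vals: &[i64]) {
        self.flush_nulls();
        self.values.extend_from_slice(vals);
        self.validity.extend(std::iter::repeat(true).take(vals.len()));
    }

    fn flush_nulls(&mut self) {
        if self.pending_nulls > 0 {
            self.values.extend(std::iter::repeat(0).take(self.pending_nulls));
            self.validity.extend(std::iter::repeat(false).take(self.pending_nulls));
            self.pending_nulls = 0;
        }
    }

    fn finish(mut self) -> (Vec<i64>, Vec<bool>) {
        self.flush_nulls();
        (self.values, self.validity)
    }
}

fn main() {
    let mut c = BatchedCollector::new();
    c.push_n_valids(&[1, 2]);
    c.push_n_nulls(3);
    c.push_n_valids(&[7]);
    let (values, validity) = c.finish();
    assert_eq!(values, vec![1, 2, 0, 0, 0, 7]);
    assert_eq!(validity, vec![true, true, false, false, false, true]);
}
```

Because Parquet definition levels naturally arrive as runs, collapsing each null run into a single counter update is where the win over per-element pushing comes from.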
coastalwhite force-pushed the parquet-batch-decoding branch from e0f958d to 2370b0f on July 6, 2024 13:00
ritchie46 approved these changes on Jul 6, 2024
coastalwhite added a commit to coastalwhite/polars that referenced this pull request on Jul 10, 2024:
This PR is a follow-up to pola-rs#17462. It batches the collects in the nested Parquet decoders, which also allows simplifying the code quite a lot.

I benchmarked with a single column `{ 'x': pl.List(pl.Int8) }` of length `10_000_000`, reading the resulting Parquet file 50 times. Here are the results.

```
Benchmark 1: After Optimization
  Time (mean ± σ):      3.398 s ±  0.064 s    [User: 49.412 s, System: 4.362 s]
  Range (min … max):    3.311 s …  3.490 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):      4.135 s ±  0.015 s    [User: 59.506 s, System: 5.234 s]
  Range (min … max):    4.105 s …  4.149 s    10 runs

Summary
  After Optimization ran
    1.22 ± 0.02 times faster than Before Optimization
```
Labels: accepted (Ready for implementation), performance (Performance issues or improvements), python (Related to Python Polars), rust (Related to Rust Polars)