[Feature] Hybrid Compression #13110
Comments
@sarthakaggarwal97 We already have an issue to discuss this: #11605. Could we use the same issue to continue the discussion?
@sarthakaggarwal97 Any overlap with #12948, which suggests completely offloading …?
@reta Yeah, we are working on optimizing the flows for stored fields. Currently, …
Is your feature request related to a problem? Please describe
In Lucene, we have Stored Fields. The text in such fields is stored in the index literally, in a non-inverted manner.
OpenSearch, by default, makes the `_source` field of an index a stored field. Users have the option to store other fields of the document as well. These fields are compressed directly and stored in the `.fdt` file of the segment. We use index codecs to determine the compression algorithm used to compress and decompress these stored fields. The compression of these fields in the write path depends on two conditions: the chunk size and the number of documents.
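For context, here is a minimal sketch of how a Lucene stored fields format ties compression to those two conditions. The format name and parameter values below are hypothetical, and package locations vary across Lucene versions, but the constructor mirrors `Lucene90CompressingStoredFieldsFormat`:

```java
import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.compressing.CompressionMode;
import org.apache.lucene.codecs.lucene90.compressing.Lucene90CompressingStoredFieldsFormat;

public class ExampleStoredFieldsFormat {
    // A chunk is flushed (and compressed) once it accumulates chunkSize bytes
    // OR maxDocsPerChunk documents, whichever happens first.
    static StoredFieldsFormat lz4Format() {
        return new Lucene90CompressingStoredFieldsFormat(
            "ExampleStoredFields", // hypothetical format name
            CompressionMode.FAST,  // LZ4, i.e. BEST_SPEED-style compression
            16 * 1024,             // chunkSize in bytes (illustrative)
            128,                   // maxDocsPerChunk (illustrative)
            10);                   // blockShift for the docs index
    }
}
```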
With Hybrid Compression, we would take compression off the write path and store the data as-is in the segments. During merges, once a segment's size has reached a certain threshold, we would compress it.
We are looking to save the compute spent on compression during writes, improving latency and throughput, with a trade-off of higher disk usage.
Describe the solution you'd like
How do we decide when to perform compression?
During the initialization of merges between segments, Lucene estimates the size of each to-be-merged segment as `estimatedMergeBytes`. We will leverage this value in the `SegmentInfo` and compare it against our own configurable thresholds.
In OpenSearch, we would initiate compression once the segments have breached these thresholds. The thresholds would be dynamically configurable with an index setting, as sketched below.
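As an illustration, a dynamic, index-scoped setting could look like this. The setting key and default are hypothetical (the 64 MB default echoes the benchmark results further down), and exact package paths vary across OpenSearch versions:

```java
import org.opensearch.common.settings.Setting;
import org.opensearch.common.settings.Setting.Property;
import org.opensearch.core.common.unit.ByteSizeUnit;
import org.opensearch.core.common.unit.ByteSizeValue;

public class HybridCompressionSettings {
    // Segments whose estimated merged size crosses this threshold get compressed.
    public static final Setting<ByteSizeValue> HYBRID_COMPRESSION_THRESHOLD =
        Setting.byteSizeSetting(
            "index.codec.hybrid_compression.threshold", // hypothetical key
            new ByteSizeValue(64, ByteSizeUnit.MB),     // illustrative default
            Property.Dynamic,                           // updatable at runtime
            Property.IndexScope);                       // per-index scope
}
```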
This is a POC implementation of the change in OpenSearch. Since we would be required to create new index codecs, we could direct this change to custom-codecs as well.
Since `estimatedMergeBytes` is not available in the `SegmentInfos` today, we would need a change in Lucene as well. A conceptual sketch of the merge-time decision follows.
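This is a conceptual sketch, not the actual implementation: it assumes the Lucene change above exposes the merge-size estimate to the codec, and that two stored fields formats (one uncompressed, one compressed) are already built:

```java
import org.apache.lucene.codecs.StoredFieldsFormat;

public class HybridStoredFieldsSelector {
    private final StoredFieldsFormat uncompressed; // used on the write path
    private final StoredFieldsFormat compressed;   // used once segments are large
    private final long thresholdBytes;             // e.g. 64 MB from the index setting

    public HybridStoredFieldsSelector(StoredFieldsFormat uncompressed,
                                      StoredFieldsFormat compressed,
                                      long thresholdBytes) {
        this.uncompressed = uncompressed;
        this.compressed = compressed;
        this.thresholdBytes = thresholdBytes;
    }

    // Fresh flushes skip compression entirely; merges compress only once the
    // estimated merged-segment size crosses the threshold.
    public StoredFieldsFormat forMerge(long estimatedMergeBytes) {
        return estimatedMergeBytes >= thresholdBytes ? compressed : uncompressed;
    }

    public StoredFieldsFormat forFlush() {
        return uncompressed;
    }
}
```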
What are the cases where users would benefit from Hybrid Compression the most?
It is expected that hybrid compression would be most useful for search and update use cases, especially when users access or update recently indexed data, since we save the compression/decompression compute.
Benchmarks:
Workload: NYC Taxis
We tested three hybrid compression size thresholds: 16 MB, 32 MB, and 64 MB.
The results below are for the 64 MB threshold, which performed best among the three.
Note: +ve means improvement, -ve means degradation relative to the current behaviour.
Variance in Storage during the indexing of NYC Taxis Workload
There are steeper dips in disk storage, but it recovers quickly as segments reach the 64 MB size threshold.
Hybrid Compression: (storage variance chart)
Default Compression: (storage variance chart)
Benchmarking Setup: (configuration details not captured)