Reduce PostingsForMatchersCache.expire() pressure on mutex #734

Merged
merged 2 commits into main from reduce-expire-pressure on Oct 30, 2024

Conversation

pracucci (Collaborator)

Currently, PostingsForMatchersCache.expire() is called for every single PostingsForMatchers() call. If the cache is full, it's expected that there's always something to expire from the cache (1 in, 1 out). However, in a high-concurrency environment there's no need to call expire() concurrently: a single goroutine is enough to clean up stale or over-capacity cached entries.

In this PR I propose a simple solution that allows only one expire() execution at a time.
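
A minimal sketch of the idea (the type, field, and method names below are illustrative assumptions, not the actual PostingsForMatchersCache code): an atomic flag lets the first caller run the eviction, while every other concurrent caller skips it instead of queueing on the mutex taken inside expire().

```go
package cache

import "sync/atomic"

type postingsCache struct {
	expiring atomic.Bool // hypothetical guard flag
	// ... cached entries, TTL bookkeeping, capacity limit, mutex, etc.
}

// maybeExpire is called on every cache access. Only the goroutine that wins
// the CompareAndSwap runs the eviction; the others return immediately
// instead of piling up on the mutex acquired inside expire().
func (c *postingsCache) maybeExpire() {
	if !c.expiring.CompareAndSwap(false, true) {
		return // another goroutine is already expiring entries
	}
	defer c.expiring.Store(false)

	c.expire()
}

func (c *postingsCache) expire() {
	// Take the cache mutex and drop stale or over-capacity entries.
}
```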

I've added a new benchmark:

goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/tsdb
cpu: Apple M3 Pro
                                                          │  before.txt  │              after.txt              │
                                                          │    sec/op    │   sec/op     vs base                │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   1653.0n ± 4%   335.1n ± 8%  -79.73% (p=0.000 n=10)

                                                          │ before.txt  │              after.txt              │
                                                          │    B/op     │    B/op      vs base                │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   1446.5 ± 0%   1004.0 ± 0%  -30.59% (p=0.000 n=10)

                                                          │ before.txt │             after.txt              │
                                                          │ allocs/op  │ allocs/op   vs base                │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   29.00 ± 0%   20.00 ± 0%  -31.03% (p=0.000 n=10)

@56quarters (Contributor) left a comment:


Changes LGTM. It seems like this is a case where RWLock.TryRLock() could be used but I don't feel strongly about using it instead of the atomic here.

@pracucci (Collaborator, Author) replied:

> Changes LGTM. It seems like this is a case where RWLock.TryRLock() could be used but I don't feel strongly about using it instead of the atomic here.

I tried it, but it looks slower. Here is the comparison between this PR and the Try...Lock variant:

goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/tsdb
cpu: Apple M3 Pro
                                                          │ before.txt  │              after.txt               │
                                                          │   sec/op    │   sec/op     vs base                 │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   303.8n ± 2%   795.8n ± 2%  +161.93% (p=0.000 n=10)

                                                          │ before.txt  │              after.txt              │
                                                          │    B/op     │    B/op      vs base                │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   1016.0 ± 0%   1371.5 ± 0%  +34.99% (p=0.000 n=10)

                                                          │ before.txt │             after.txt              │
                                                          │ allocs/op  │ allocs/op   vs base                │
PostingsForMatchersCache_ConcurrencyOnHighEvictionRate-11   21.00 ± 0%   27.00 ± 0%  +28.57% (p=0.000 n=10)

Note: it's still faster than main, but not as fast as this PR.
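
For reference, one possible shape of the Try...Lock() variant discussed above (illustrative only; the field name and locking layout are assumptions, not the code that was actually benchmarked):

```go
package cache

import "sync"

type tryLockCache struct {
	expireMtx sync.Mutex // hypothetical lock guarding expiration
	// ... cached entries ...
}

// maybeExpire skips expiration entirely if another goroutine already holds
// the lock. Mutex.TryLock (and RWMutex.TryRLock) exist since Go 1.18.
func (c *tryLockCache) maybeExpire() {
	if !c.expireMtx.TryLock() {
		return // expiration already in progress
	}
	defer c.expireMtx.Unlock()

	c.expire()
}

func (c *tryLockCache) expire() {
	// Drop stale or over-capacity entries.
}
```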

@56quarters (Contributor) replied:

Makes sense, thanks!

pracucci merged commit 6c26030 into main on Oct 30, 2024 (9 checks passed).
pracucci deleted the reduce-expire-pressure branch on October 30, 2024 at 08:55.