BTree: avoid most unsafe code in iterators #86195
Conversation
@bors try @rust-timer queue

Awaiting bors try build completion. @rustbot label: +S-waiting-on-perf

⌛ Trying commit f074e79158e6b8558a80dbfeae8da79d0a62ae82 with merge 402059575e35429b87602381a02705d86b3fd88b...

☀️ Try build successful - checks-actions

Queued 402059575e35429b87602381a02705d86b3fd88b with parent dd94145, future comparison URL.
Finished benchmarking try commit (402059575e35429b87602381a02705d86b3fd88b): comparison url. Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. Please note that if the perf results are neutral, you should likely undo the rollup=never given below. Importantly, though, if the results of this run are non-neutral, do not roll this PR up -- it will mask other regressions or improvements in the rollup. @bors rollup=never
Force-pushed from 7d8717a to 40079f6
Performance seems to suffer a tiny bit, much less than the influence of #[inline]. Last commit is a plain rebase, but restoring the #[inline] on the public
@bors try @rust-timer queue

Awaiting bors try build completion. @rustbot label: +S-waiting-on-perf

⌛ Trying commit 40079f6f60f1b522b70e340353b58caafeab8f63 with merge 4ef624de06d9a49d294a504b745673d694b1885a...

☀️ Try build successful - checks-actions

Queued 4ef624de06d9a49d294a504b745673d694b1885a with parent 9d93819, future comparison URL.
Finished benchmarking try commit (4ef624de06d9a49d294a504b745673d694b1885a): comparison url. Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. Please note that if the perf results are neutral, you should likely undo the rollup=never given below. Importantly, though, if the results of this run are non-neutral, do not roll this PR up -- it will mask other regressions or improvements in the rollup. @bors rollup=never
…p, r=Mark-Simulacrum

BTree: consistently avoid unwrap_unchecked in iterators

Some iterator support functions named `_unchecked` internally use `unwrap`, some use `unwrap_unchecked`. This PR tries settling on `unwrap`. rust-lang#86195 went up the same road but travelled way further and doesn't seem successful.

r? `@Mark-Simulacrum`
Force-pushed from 40079f6 to dc3c8df
Not sure this is worth the trouble but here it is, rebased.

@bors try @rust-timer queue

Awaiting bors try build completion. @rustbot label: +S-waiting-on-perf

⌛ Trying commit dc3c8df with merge f0274b9b17d524ee8d22d030f9ec8a9f95d70f29...

☀️ Try build successful - checks-actions

Queued f0274b9b17d524ee8d22d030f9ec8a9f95d70f29 with parent 432e145, future comparison URL.
Finished benchmarking try commit (f0274b9b17d524ee8d22d030f9ec8a9f95d70f29): comparison url. Summary: This change led to significant regressions 😿 in compiler performance.

If you disagree with this performance assessment, please file an issue in rust-lang/rustc-perf. Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR led to changes in compiler perf. Next steps: if you can justify the regressions found in this perf run, please indicate this. @bors rollup=never
Bootstrap timing went up from 0.1%, over 0.2%, to 0.3% as other differences were ironed out. Is it trying to say something?
Let's see what performance we get by removing most of the `_unchecked` distinction in the internal iterator API, which is supposed to speed up iterators that track the remaining length while iterating and therefore never hit the "end" of a tree. We still let these iterators track the remaining length, because that's also needed to offer an accurate `size_hint` to BTree clients.

The essential difference is simple null checks: we pass around `Option<Handle<…>>` instead of `Handle<…>` (where the handle has a `NonNull` niche).

Alloc's benchmarks are indecisive: depending on how many of these changes are considered, some benchmarks win and some lose.
r? @Mark-Simulacrum