Auto merge of #130679 - saethlin:inline-usually, r=<try>
Add inline(usually)

I'm looking into what kind of things could recover the perf improvement detected in #121417 (comment). I think it's worth spending quite a bit of effort to figure out how to capture a 45% incr-patched improvement.

As far as I can tell, the root cause of the problem is that we have taken very deliberate steps in the compiler to ensure that `#[inline(always)]` causes inlining where possible, even when all optimizations are disabled. Some of the reasons that was done are now outdated or were misguided. I think the only remaining use case is where the inlined body, even without optimizations, is cheaper to codegen or call; for example, SIMD intrinsics may require a lot of code to put their arguments on the stack, which is slow to compile and run. I'm quite sure that the majority of users applied this attribute believing it does not cause inlining in unoptimized builds, or didn't appreciate the build time regressions it implies and would prefer that it didn't if they knew. (If that's you, put a heart on this or say something elsewhere; don't reply on this PR.)

I am going to _try_ to use the existing benchmark suite to evaluate a number of different approaches and take notes here, and hopefully I can collect enough data to shape any conversation about what we can do to help users.

The core of this PR is `InlineAttr::Usually` (the name doesn't matter), which ensures that, when optimizations are enabled, the function is inlined (the usual exceptions like recursion apply). I think most users believe this is what `#[inline(always)]` does. A rough, illustrative sketch of the intended semantics is at the end of this description.

Experiments so far:

* #130685 (comment) replaced `#[inline(always)]` with `#[inline(usually)]` in the standard library, and did not recover the same 45% incr-patched improvement in regex. It's a tidy net positive though, and I suspect that perf improvement would normally be big enough to motivate merging a change. I think that means the standard library's use of `#[inline(always)]` is imposing marginal compile time overhead on the ecosystem, but the bigger opportunities are probably in third-party crates.
* #130679 (comment) treats `#[inline(always)]` as `#[inline(usually)]` literally everywhere; this gets the desired incr-patched improvement but suffers quite a few check and doc regressions. I think that means that `alwaysinline` is more powerful than `function-inline-cost=0` in LLVM.
* #130679 (comment) treats `#[inline(always)]` as `#[inline(usually)]` when `-Copt-level=0`, which looks basically the same as #121417 (comment) (omit `alwaysinline` when doing `-Copt-level=0` codegen).
* #130679 (comment) replaces `alwaysinline` with a very negative inline cost, and it still has check and doc regressions. More investigation is needed into where the inlining decisions differ. #130679 (comment) is a likely explanation of this, with some interesting implications: adding `inline(always)` to a function that was going to be inlined anyway can change optimizations (usually it seems to improve things?).
* #130679 (comment) makes `#[inline(usually)]` also defy instantiation mode selection and always be LocalCopy the way `#[inline(always)]` does, but it still has regressions in stm32f4. I think that proves that `alwaysinline` can actually improve debug build times.
* #130679 (comment) infers `alwaysinline` for extremely trivial functions, but still has regressions for stm32f4. But of course it does; I left `inline(always)` treated as `inline(usually)`, which slows down the compiler 🤦. Inconclusive perf run.
TODO: What happens if we infer `alwaysinline` for extremely small functions like most of those in stm32f4?
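
For concreteness, here is a minimal, self-contained sketch of the semantics I'm describing. This is not the rustc implementation: `LlvmInlineAttr` and `llvm_inline_attr` are hypothetical stand-ins for the real attribute-mapping machinery, and lowering `Usually` to `inlinehint` (rather than, say, a very negative inline cost) is just one of the options the perf runs above are probing.

```rust
// Illustrative only, NOT rustc code. Assumes `#[inline(usually)]` is a
// no-op at -Copt-level=0 and a strong inlining request otherwise.

#[allow(dead_code)]
enum InlineAttr {
    None,    // no attribute: leave it to the normal inlining heuristics
    Hint,    // #[inline]: a hint, still subject to the cost model
    Usually, // #[inline(usually)]: inline whenever optimizations are enabled
    Always,  // #[inline(always)]: alwaysinline, even at -Copt-level=0
    Never,   // #[inline(never)]: noinline
}

#[derive(Debug)]
#[allow(dead_code)]
enum LlvmInlineAttr {
    InlineHint,
    AlwaysInline,
    NoInline,
}

// Hypothetical mapping from the Rust-level attribute to an LLVM-level one.
// The key difference from `Always` is the opt-level gate: `Usually` costs
// nothing in unoptimized builds.
fn llvm_inline_attr(attr: InlineAttr, opt_level: u32) -> Option<LlvmInlineAttr> {
    match attr {
        InlineAttr::None => None,
        InlineAttr::Hint => Some(LlvmInlineAttr::InlineHint),
        InlineAttr::Usually if opt_level == 0 => None,
        InlineAttr::Usually => Some(LlvmInlineAttr::InlineHint),
        InlineAttr::Always => Some(LlvmInlineAttr::AlwaysInline),
        InlineAttr::Never => Some(LlvmInlineAttr::NoInline),
    }
}

fn main() {
    // Debug build: `usually` emits no attribute, so unoptimized codegen stays cheap.
    println!("{:?}", llvm_inline_attr(InlineAttr::Usually, 0)); // None
    // Optimized build: `usually` becomes a strong request to inline.
    println!("{:?}", llvm_inline_attr(InlineAttr::Usually, 3)); // Some(InlineHint)
}
```

The only point of the sketch is the opt-level gate: `#[inline(always)]` forces inlining even at `-Copt-level=0`, while `#[inline(usually)]` would not.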