Faster compile times for release builds with llvm fix #45054
Conversation
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @nikomatsakis (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information.
📌 Commit 8fd3c8f has been approved by
Faster compile times for release builds with llvm fix
Run global optimizations after the inliner to avoid spending time on optimizing dead code.
fixes #44655
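For context on what "run global optimizations after the inliner" means in practice, here is a minimal sketch using LLVM's legacy pass manager. It only illustrates the ordering idea, not the patch actually proposed against LLVM or the pipeline rustc builds; the passes used are the stock IPO passes from llvm/Transforms/IPO.h.

```cpp
// Illustrative sketch only (not the actual LLVM/rustc patch): schedule the
// module-level cleanup passes *after* the inliner so that functions whose
// only callers were inlined are deleted before the expensive per-function
// optimizations run, instead of being optimized and then thrown away.
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO.h"

void buildReorderedPipeline(llvm::legacy::PassManager &PM) {
  PM.add(llvm::createFunctionInliningPass()); // inline calls first
  PM.add(llvm::createGlobalOptimizerPass());  // fold/internalize globals
  PM.add(llvm::createGlobalDCEPass());        // drop now-dead functions early
  // ...the usual -O2/-O3 function and module passes would follow here...
}
```

The point of the ordering is that inlining often leaves behind functions with no remaining callers; removing them with global DCE before the per-function optimization passes run means the optimizer never spends time on code that will be discarded anyway.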
☀️ Test successful - status-appveyor, status-travis
@andjo403 hm, surprisingly I see no difference in the optimized benchmarks on perf.rust-lang.org; mind double-checking locally to see if this is fixed?
@alexcrichton I'm as surprised as you. I did not think to do an extra test to confirm that the faster compile time was still there when the submodule was updated, since I had made so many tests before for the PR to the llvm repo. The strange part is that I can see that the extra passes are present in the printout from
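A way to get such a printout locally, assuming the legacy pass manager LLVM used at the time, is to pass `-C llvm-args=-debug-pass=Structure` to rustc, which makes LLVM print the structure of the pass pipeline it runs; the exact flag spelling here is an assumption on the editor's part, not something quoted from this thread.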
@alexcrichton @dcci the current llvm patches do not give as much gain as my original did. I have added a comment in the llvm review about why the original placement of the passes was not good.
Ok thanks! I'll watch the LLVM bug
@alexcrichton there was no update to the LLVM trigger file, maybe that's why? Edit: that would only explain the problem on the bots though, not why it's also reproducible locally
@dotdash ah unfortunately that wouldn't cause this; our CI now unconditionally builds LLVM from scratch on all builds (using sccache to not be super slow), so that'd account for local checkouts not rebuilding, but not CI
Sadly this is all my fault. I failed to test the updated patch from llvm before making this PR, and according to a comment in the llvm review my first patch that showed the big improvements
and the way forward is
Edit: but I do not know how to do that
@andjo403 I sort of forget at this point, your original fix, was there a known problem with it?
@alexcrichton I had no problems with it, but it sounds bad to break the entire pipelining model of llvm.
@alexcrichton from what I have understood/learned from the llvm review, I think that most of the time improvement was because I broke the pipeline, so I think that I will close the ticket #44655 as it is not possible to implement. Edit: but after writing this I started to think about the compile time improvements that we had, so even if the binary was less optimized, it compiled faster.
@andjo403 alas :(