Custom cc toolchain with libunwind creates link problems #2458
Have you tried enabling experimental_use_cc_common_link?
Thanks a lot! Now it gets further, but at … it still fails with the same clang error.
Would you mind sharing how your Rust toolchains are set up? One problem I ran into before is that the host toolchain needs to match with … Now that you separate linking into a separate action by enabling experimental_use_cc_common_link, …
This is indeed a bug (I believe): rust/private/rustc.bzl line 1147 (at bbe2cf4) checks whether the rule context has the experimental_use_cc_common_link attribute, and if it doesn't, cc_common.link is never even considered as a possibility. Checking rust_proc_macro in https://github.com/bazelbuild/rules_rust/blob/main/rust/private/rust.bzl#L985, we see that it's lacking _experimental_use_cc_common_link_attrs in its attrs, and therefore the aforementioned check is always false.
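As a paraphrased sketch (not the verbatim rules_rust source), the gate described above behaves roughly like this:

```starlark
# Rough paraphrase of the logic around rustc.bzl line 1147 (not verbatim):
# without the attribute, cc_common.link is never even considered.
experimental_use_cc_common_link = False
if hasattr(ctx.attr, "experimental_use_cc_common_link"):
    if ctx.attr.experimental_use_cc_common_link == 1:
        experimental_use_cc_common_link = True
    elif ctx.attr.experimental_use_cc_common_link == -1:
        # -1 defers to the toolchain-level default
        experimental_use_cc_common_link = toolchain._experimental_use_cc_common_link

# rust_proc_macro's attrs lack _experimental_use_cc_common_link_attrs, so for
# proc-macros the hasattr() check above is always False.
```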
Now I don't know your codebase well enough to say whether … (My Rust toolchain setup is rather boring: I'm downloading a stable 1.62.2 with the auto-setup toolchain.)
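For reference, a minimal sketch of what that auto-setup looks like in a WORKSPACE file, assuming the standard rules_rust setup macros (the edition value is an assumption, and parameter names may differ between rules_rust versions):

```starlark
# Minimal sketch of the rules_rust auto-setup toolchain, assuming a recent API.
load("@rules_rust//rust:repositories.bzl", "rules_rust_dependencies", "rust_register_toolchains")

rules_rust_dependencies()

rust_register_toolchains(
    edition = "2021",        # assumption; not stated in this issue
    versions = ["1.62.2"],   # the stable version mentioned above
)
```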
Hmm, you're right. If you force … (see rust/private/rustc.bzl, lines 1146 to 1153 at bbe2cf4), …
Unfortunately, it's not so easy. However, adding the … The problem here is that … When enabling the code path just for …, it fails, as that code path doesn't support building (?) and linking shared libraries. The whole …
I can work around this whole problem by specifying a separate exec toolchain that doesn't inject all these linker flags (as proc macros get compiled for the host, not the target), but that comes at the cost of maintaining that separate toolchain and a (potentially) slow autodetect. I'd rather not do that.
It is typical (always) that we have at least two toolchains in place: one for target and another for exec. The exec toolchain is responsible for building things like process_wrapper. These toolchains are resolved automatically by Bazel. See https://bazel.build/extending/toolchains#toolchain-resolution. So I do not expect an abnormal slowdown when you have two toolchains to resolve. Can you elaborate on why adding an exec toolchain would slow down the build?
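As an illustration of that setup, a declaration Bazel resolves during toolchain resolution might look like the sketch below; the target names, constraint values, and toolchain_type label are placeholders, not taken from this issue:

```starlark
# Illustrative only: labels and constraints are assumptions for the sketch.
toolchain(
    name = "rust_linux_x86_64_toolchain",
    exec_compatible_with = ["@platforms//os:linux", "@platforms//cpu:x86_64"],
    target_compatible_with = ["@platforms//os:linux", "@platforms//cpu:x86_64"],
    toolchain = ":rust_linux_x86_64_impl",  # hypothetical rust_toolchain target
    toolchain_type = "@rules_rust//rust:toolchain_type",
)
```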
I know - we just usually have target=exec, so we configure the same C++ toolchain here, which is a self-contained toolchain we build ourselves. Now there are two ways forward: …
(I think) I see what you mean now. If you already have a host toolchain, you shouldn't need another one. In this case, since target=exec (always?), the toolchain is just building host tools, correct?
I believe this is something we can easily add. If you don't mind, can you run the build with …?
Yes, exactly. The peculiarity here (why this is even a viable workaround!) is that …
That'd be great! I couldn't figure it out myself, but with a pointer or two about how to implement this in … I'll supply the toolchain resolutions on Monday.
Can you try applying this patch to the rules_rust repo locally?

```diff
diff --git a/rust/private/rustc.bzl b/rust/private/rustc.bzl
index eff542eb..49b259d5 100644
--- a/rust/private/rustc.bzl
+++ b/rust/private/rustc.bzl
@@ -1150,6 +1150,8 @@ def rustc_compile_action(
         experimental_use_cc_common_link = True
     elif ctx.attr.experimental_use_cc_common_link == -1:
         experimental_use_cc_common_link = toolchain._experimental_use_cc_common_link
+    elif crate_info.type == "proc-macro":
+        experimental_use_cc_common_link = toolchain._experimental_use_cc_common_link
```
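For what it's worth, a patch like this can also be tried without forking by routing it through http_archive's patches attribute. The sketch below assumes that setup; the URL, checksum, and patch-file label are placeholders, not values from this issue:

```starlark
# Sketch only: URL, sha256, and the patch-file label are placeholders.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_rust",
    urls = ["https://..."],  # placeholder release archive URL
    sha256 = "...",          # placeholder checksum
    patches = ["//patches:proc_macro_cc_common_link.patch"],  # hypothetical patch file
    patch_args = ["-p1"],
)
```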
That will lead to the problem of the missing …, showing that the code is somehow not ready to link dynamic (shared) libraries.
Ok yeah. This seems like a bug in the rules. I'll dig more into this on Monday. In the meantime, if you can provide some steps to reproduce the issue, I'd appreciate it.
There's a full repro in https://github.com/criemen/rules_rust_bug_repro, which configures a C++ toolchain that doesn't ship … There are some options in … When building that workspace with the current version of rules_rust, I get … as described in the bug report above, too.
I investigated this more, and now I don't know what to do anymore: … However, the fix doesn't work. There's a long, somewhat relevant discussion about linking C++ and Rust, and not having … My takeaway from this is that it's not possible to support …
I worked around this problem in the end by providing an empty …
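A minimal sketch, assuming the empty artifact is a libgcc_s stub (that specific name is an assumption, not stated above): a genrule that emits an empty static archive which the linker's -lgcc_s can resolve against, provided its directory is added to the toolchain's library search paths.

```starlark
# Hypothetical sketch: the target name, output name, and reliance on a host
# `ar` binary are assumptions, not taken from this issue.
genrule(
    name = "empty_libgcc_s",
    outs = ["libgcc_s.a"],
    cmd = "ar rc $@",  # creates an empty archive so -lgcc_s resolves to nothing
)
```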
Hi,
I'm using a custom (bundled) cpp toolchain on Linux, which ships (besides other things) LLVM's libunwind. I was excited to see that my bundled toolchain is also used for Rust compilations - both as part of dependent C/C++ code, and for linking Rust executables. This is key for me, as we're shipping an old glibc we want to link against, so our executables are compatible with a wide range of distros.

However, I'm running into a problem already when building the process wrapper included in rules_rust. The rustc invocation coming out of rules_rust looks like this:

…

Note that it passes all the linker options from my custom toolchain, including --unwindlib=libunwind and -static-libgcc, to rustc's --codegen 'link-args...' option. rustc invokes clang as

…

which fails with

…

This is because before passing in our (custom) link flags, we're getting a bunch of linker flags supplied from somewhere else (I've not been able to find out where, despite searching in rust-lang/rust), including -lgcc_s, and the linker also (now correctly) warns that some of our options aren't used. As our toolchain uses libunwind for unwinding, libgcc_s is not present in the toolchain, and the linker complains correctly about its absence.

Is there any way to tell rustc not to insert these linker flags? The only really hacky workaround I see here is a linker-wrapper script that detects being invoked from rustc and then filters out the harmful CLI arguments, but if there's any other chance of solving this, I really don't want to go there.