I would think so.
How does this differ from the proposal made by @farnz? @jsgf explained the downsides of that approach (and I've integrated that discussion into the revised RFC under "Alternatives").
As I understand it, @farnz suggested running a specific rustc command before invoking the linker that lists all rlibs, and then passing its output to the linker. My suggestion, on the other hand, is to transparently wrap the linker: when the wrapper finds that rlibs have been passed, it adds whichever linker arguments are necessary to link them successfully. Also, because it wraps the existing linker, if another language wants to take over linking, that language can simply be instructed to run rustc instead of whichever linker it would normally run; or, if the language exposes a gcc/clang-compatible interface, rustc could be instructed to invoke that language's driver. In other words, you could stack multiple linker wrappers, for example `my_lang -> rustc-linker -> gcc -> ld`, or `rustc-linker -> my_lang -> gcc -> ld`, or even `my_lang -> gcc -> rustc-linker -> ld`. And if no rlibs are used, this linker wrapper simply passes all arguments unchanged to the next command in the chain, doing no harm beyond adding a couple of milliseconds to link time.
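The pass-through behavior described above is essentially one conditional branch. A minimal sketch, assuming a POSIX shell; the `NEXT_LINKER` variable and the extra `-lrust_runtime_deps` argument are illustrative placeholders, not real rustc behavior:

```shell
# Hypothetical pass-through linker wrapper (sketch only).
# NEXT_LINKER names the next command in the chain; defaults to cc.
link_wrapper() {
  has_rlib=no
  for arg in "$@"; do
    case "$arg" in
      *.rlib) has_rlib=yes ;;
    esac
  done
  if [ "$has_rlib" = yes ]; then
    # rlibs present: append whatever Rust-specific arguments are
    # needed for a successful link (placeholder shown here).
    "${NEXT_LINKER:-cc}" "$@" -lrust_runtime_deps
  else
    # No rlibs: forward all arguments unchanged.
    "${NEXT_LINKER:-cc}" "$@"
  fi
}
```

Stacking wrappers then amounts to pointing each wrapper's "next command" at the one below it in the chain.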
That proposal doesn't seem to address @jsgf's concern of being able to introduce Rust into a project without changing the linker. The idea is that a C++ project shouldn't have to care about whether its libraries are ultimately written in Rust, C++, or some other language: it just invokes `ld`:
> A key goal here is the ability to integrate Rust into an existing build system while localizing the knowledge that any particular component is written in Rust. That is, so you can RIIR a module without having to worry about how all the downstream users of that module are built.
rlib-v0 needs build system support to handle all Rust dependencies, including the sysroot, while my proposal simply wraps the linker, which is much easier. And it can also be done automatically by a Rust-aware build system (as it would be under your proposal).
Just invoking `ld` doesn't work anyway: `ld` doesn't support static initializers; that is handled by collect2. In addition, several platform-dependent linker arguments are necessary for a successful link. As such, pretty much everyone uses either gcc or clang as a wrapper around the linker. If you really want no changes to be necessary, maybe we can ask gcc and clang to add rustc-linker as an additional wrapper whenever an rlib or Rust dylib is detected?
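This is easy to observe with gcc's verbose output on a typical Linux toolchain: the driver invokes collect2 (which in turn runs `ld`) with platform-specific arguments such as crt objects and library search paths. A sketch, assuming gcc is installed:

```shell
# The gcc driver does not call ld directly: its verbose output shows a
# collect2 invocation carrying the platform-specific link arguments.
cat > hello.c <<'EOF'
int main(void) { return 0; }
EOF
command -v gcc >/dev/null || exit 0  # needs gcc
gcc -v hello.c -o hello 2>&1 | grep collect2 | head -n 1
```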
Unfortunately, I seriously doubt that getting changes into the gcc and clang linkers will be feasible for all targets, Windows being a notable example.
Getting Rust patches into clang and ELF LLD as a start should not be that hard. If you want to go down that road, you may want to bring it up on the LLVM Discourse. Rust ships a current LLD, so patches to LLD will be picked up by Rust soonish.
LLD and other linkers shouldn't be changed. I was suggesting a change to gcc, clang, and other linker drivers, which internally call the real linker. We don't ship any linker driver; rustc shipping LLD doesn't help in any way.
Yeah, I hadn't considered that on Windows there is no linker driver, only the actual linker, `link.exe`.
But getting patches into clang should not be that hard. I know there is no autoconf in Cargo, but being able to check whether `cc` supports some flag like `-clang-supports-rust` would be nice.
I had a thought on how to deal with this problem: have a fixed set of "magic linking tasks" for which we allow rustc to invoke the linker arbitrarily in ways that may change from compiler release to compiler release. For instance, one magic linking task could be "linking a final Rust executable"; another might be "linking with libstd". We specify the list of magic linking tasks in the RFC. Then we add two new ways to invoke rustc: (1) to perform one of the magic linking tasks (this is similar to @bjorn3's suggestion) or (2) to print the commands rustc would have executed to perform one of the magic linking tasks. Option (1) allows build systems that can delegate their link steps to rustc to do so in a supported, stable way. Option (2) allows build systems like Buck and Bazel that want to duplicate rustc's logic to do so, while at least being able to catch changes between compiler versions in CI so that the build logic can be updated.
Another possibility, which seems preferable if we can make it work, is to have rustc able to emit libstd as a staticlib that's just linked in as a blob. In fact, I guess you could do this by compiling an empty `lib.rs` with `--crate-type=staticlib`? Then we can just say that libstd and its dependencies are incompatible with v0 rlibs and should be used as a staticlib, and only stabilize v0 rlibs for "user" crates.
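A quick way to see what that would look like today, assuming a local Rust toolchain (the file and output names are arbitrary):

```shell
command -v rustc >/dev/null && command -v ar >/dev/null || exit 0
# Compile an empty crate as a staticlib: the resulting archive bundles
# libstd and its dependencies as ordinary native objects, so a non-Rust
# build system can consume it like any other .a blob.
printf '' > empty.rs
rustc --crate-type=staticlib empty.rs -o libstd_blob.a
ar t libstd_blob.a | head  # plain object members, linkable by cc/ld
```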
I also think it would probably be OK to require `rustc` to be the final linker for Rust executables where `main` is in Rust, since Rust targets have special rules in Bazel/Buck, so it wouldn't be hard or disruptive to put special logic at the final link step there. The concerns around localizing Rust-specific linker magic are primarily about integrating scattered pieces of Rust code into a large C++ binary.
I feel like the best way to address this kind of collision would be to define a language-level property (i.e. independent of linking or the rlib format) of globally unique items. This property would be given to items that may exist at most once in the crate tree, and which behave somewhat like lang items. This draft could then reference those rather than lang items, which also has the benefit of making the mechanism more future-proof.
> I also think it would probably be OK to require `rustc` to be the final linker for Rust executables ...
We'd like to use common linker infrastructure in this case as well. If we use rustc as the linker for Rust binaries, we'd have to reimplement and maintain, in the custom final rustc link step, the special rules the common linking infrastructure knows about other languages (and we are still running into and addressing various corner cases with this approach in the Bazel Rust rules), and we'd have to teach the common linker infrastructure about Rust in order to link binaries in other languages that contain Rust dependencies.
In Bazel we've been experimenting with the following model: we first build an `.o` file out of the main Rust sources via `rustc --emit=obj`, which we then pass to the common linker, along with the rlibs and native dependencies. In this case the `--emit=obj` step feels like a rustc "pre-link" step (and it still requires the rlibs of dependencies to be usable as common linker inputs).
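The model above can be sketched with plain commands, assuming a local Rust toolchain; the final link line is schematic, since a real Rust binary also needs libstd and the platform runtime libraries on it:

```shell
command -v rustc >/dev/null || exit 0  # needs a Rust toolchain
# "Pre-link" step: compile the top-level Rust sources to a single .o,
# reading dependency metadata from rlibs but deferring all linking.
printf 'fn main() {}\n' > main.rs
rustc --emit=obj main.rs -o main.o
ls -l main.o
# The common linker then consumes main.o together with the rlibs and
# native deps, schematically:  cc main.o libdep.rlib ... -o app
```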
Could you expand a bit on (2)?
At what point during the build would those printed commands come into play? Could one execute them before a build happens and check them in next to the Rust toolchain, so that from the perspective of a non-Rust Bazel/Buck ruleset they are just `.o` files that somehow made their way to the linking command line?
It feels weird that we are discussing symbol exports while the format proposed here is an archive. The archive format has no way to specify which symbols are exported.
I feel a better alternative to the proposed solution of stabilising an archive format would be to add a new crate type that emits an object file instead (either compiled with a single CGU, or with multiple CGUs linked together with `ld -r`).
This is discussed in "Alternatives".
I don't see this discussed in the alternatives. Adding a new crate type that generates an object file is different from `--emit=obj`, as we can specify/control the format of the emitted object file. The drawbacks listed don't apply:
> With this approach, there is no obvious place for the Rust compiler to emit metadata (`.rmeta` files). Without metadata, the crate would no longer be linkable from Rust, only from C or C++, meaning that a library meant to be used from both C/C++ and Rust would need to be built twice.
The metadata could be embedded as a debug section in the object file. In fact, this is how rmeta works today: there is no reason it must be a separate object file in the archive rather than just a section in the object file.
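You can inspect today's layout directly, assuming a local Rust toolchain: the metadata lives as a `lib.rmeta` member inside the rlib archive, and a tool like objcopy shows that nothing prevents carrying the same bytes as a section of an object file instead (the `.rmeta` section name below is an assumption, not current rustc behavior):

```shell
command -v rustc >/dev/null && command -v ar >/dev/null || exit 0
printf 'pub fn f() {}\n' > foo.rs
# Today: metadata travels as a separate archive member, lib.rmeta.
rustc --crate-type=rlib foo.rs -o libfoo.rlib
ar t libfoo.rlib
# Alternative: emit the metadata by itself and embed it as a section
# of the object file (the .rmeta section name is illustrative).
command -v objcopy >/dev/null || exit 0
rustc --crate-type=lib --emit=metadata foo.rs   # writes libfoo.rmeta
rustc --crate-type=lib --emit=obj foo.rs        # writes foo.o
objcopy --add-section .rmeta=libfoo.rmeta foo.o foo_with_meta.o
```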
> Finally, this would have the same problem as `staticlib-nobundle` in that if both Rust and C++ link to the same crate, duplicate symbol errors would result, as Rust would be linking to an `rlib` and C++ would be linking to an object file with the same symbols.
This wouldn't be an issue if none of the linking is done by Rust (i.e. all Rust crates are object files and the final linking is done by an external linker).
> Unfortunately, this was tried early on in Rust's development and it was discovered that `ld -r` is often poorly supported by OS toolchains on account of how seldom the feature is used.
This feature is used very frequently by the Linux kernel for linking kernel modules. If it doesn't work well on some platforms, we could just fall back to a single CGU there.
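For reference, the kernel-style usage is just a partial (relocatable) link. A sketch, assuming binutils and a C compiler are installed; file names are illustrative:

```shell
command -v cc >/dev/null && command -v ld >/dev/null || exit 0
# Partial link: combine several objects into one relocatable .o while
# keeping relocations, as the kernel does for module objects.
printf 'int a(void){return 1;}\n' > a.c
printf 'int b(void){return 2;}\n' > b.c
cc -c a.c b.c
ld -r a.o b.o -o combined.o
nm combined.o   # symbols from both inputs in a single object
```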

> This wouldn't be an issue if none of the linking is done by Rust (i.e. all Rust crates are object files and the final linking is done by an external linker).
It's not just the final link: how would rustc find the metadata for dependent Rust `.o` files? Would rustc need to learn how to process both `.rlib` and `.o` for Rust dependencies? Or are you proposing getting rid of the `.rlib` format altogether? Remember, one of the goals of the proposal is not to compile crates twice, once for use by Rust and once for use by C++, and that requires that Rust be able to read crate metadata for dependent crates.

> This feature is used very frequently by the Linux kernel for linking kernel modules. If it doesn't work well on some platforms, we could just fall back to a single CGU there.
I'd prefer not to give non-Linux users a worse experience; iOS, for example, is a platform of interest for us.