Shared library for faster builds (research)

Hello there, good day! I've been thinking for a little while that one way to increase build performance in rustc might be to compile all of a project's dependencies into a shared library visible only to Rust. Then, instead of linking all the code and its dependencies every time we build the project, we would only build the code that changed and link against the library containing all the dependencies.

This would mean that every time our dependencies change, we would need to rebuild that library. But as long as the dependencies stay the same, builds should get faster, because (from my understanding) dynamically linking against one library each time would be faster than statically linking all the code together.

Thoughts on this?

I presume you propose this only for development, and not for release (as statically-linked release artefacts may be required for users)? However, for development, rustc already undertakes incremental compilation by default whereby it only compiles that which has changed—even within the end crate, let alone any dependencies.


Yes, only for debug builds. I understand incremental builds, but wouldn't dynamic linking still be a small performance boost, specifically in a project with 500+ dependencies?

I can say from experience that changing the default linker to LLVM's lld was already a huge quality-of-life improvement. Wouldn't dynamic linking also be a small speed-up?


Fair point, but in my own personal experience I relatively rarely build the project and more often just check it (which stops before code generation and linking); granted, when building it could reduce the amount of work that the linker needs to undertake at compile-time, but isn't the trade-off that the loader will just have to undertake that work at runtime? And if one runs the built artefact more than once, that cost will then be duplicated on each occasion.

In any event, projects can essentially opt into what you propose by having a single dylib dependency that reexports all other dependencies (I believe that Bevy does something like this).
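For concreteness, here is a minimal sketch of that pattern. The crate name `deps_dylib` and the `serde` dependency are purely illustrative (this is not Bevy's actual setup): a workspace member built as a `dylib` re-exports the heavy dependencies, and downstream crates use them through it.

```toml
# deps_dylib/Cargo.toml (sketch; crate and dependency names are illustrative)
[package]
name = "deps_dylib"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["dylib"]

[dependencies]
serde = "1"
```

```rust
// deps_dylib/src/lib.rs — re-export the dependencies so downstream
// crates link them through this single dynamic library
pub use serde;
```

The main crate then depends on `deps_dylib` instead of on `serde` directly, so the bulk of the dependency code only has to be linked once, when the dylib itself is built.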


Bevy does this already to reduce the time it takes to link. In fact, doing this was my first Bevy contribution:

True, but this is a lot less than the regular linker needs to do. Most relocations are already fixed by the linker when linking the dynamic library. Only relocations in the data section (which need to be done anyway if you build a PIE executable) and GOT relocations for (potential) references to other DSOs (dynamic libraries and the main executable) still need to be done by the dynamic linker at runtime. All references between functions within the same DSO are fixed as PC-relative references.
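To make this concrete, here is a small demo (assuming a Linux toolchain with `cc` and `readelf`; the file and symbol names are made up for illustration) that compiles a tiny shared object and inspects which relocations remain for the dynamic loader:

```shell
# Build a trivial shared object with one internal and one external call.
cat > demo.c <<'EOF'
extern int ext_fn(void);            /* lives in some other DSO */
static int local_fn(void) { return 1; }
int entry(void) { return local_fn() + ext_fn(); }
EOF
cc -shared -fPIC -o libdemo.so demo.c

# The call to local_fn() is a fixed PC-relative reference and needs no
# runtime work; the external ext_fn shows up in the dynamic relocation
# table (alongside a few loader-internal entries), while local_fn does not:
readelf -r libdemo.so
```

Only the entries printed by `readelf -r` are left for the dynamic linker to resolve at load time; everything else was fixed when the shared object itself was linked.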


Fair point! I think I was only considering the compile time and forgot about the runtime cost.

Hm, this goes outside my knowledge base; could you elaborate a little more? Are you saying that for large projects, using a shared library will have a significant impact?

If you have a lot of dependencies, putting them all into a single dynamic library can reduce linker time. Whether it actually helps a noticeable amount depends on how much time linking took in the first place. If most time is spent in codegen it won't help much.


Most of the time in my current project is spent on linking. I'll do a test with a separate crate and share the results.

Make sure to write `use my_dylib as _;` or otherwise reference the dylib from your main executable. Otherwise rustc will ignore it and thus statically link all your dependencies anyway.

Makes sense.