Dynamically Linking Rust Crates to Rust Crates

Hi everybody,

I've been trying to get to a point where I can dynamically link a Rust crate seamlessly with another Rust crate from a potentially different version of Rust, but it seems like it is currently impossible. I want to get some feedback on whether or not there is a way to accomplish what I am trying to do, or else find out what is necessary to get dynamic linking in Rust if there is not.

Note: I do not have a deep understanding of either the Rust compiler or assembly and the way that linking works under the hood. I'm doing my best to figure out how things work as I try to solve my problem.

If there is a way to do this already, then this topic should have gone on the Rust users forum, but I've done a lot of looking into this and I'm pretty sure that there isn't a way to do this without compiler changes, so I'm putting this topic here.

The Problem

Dynamically linking Rust crates to other Rust crates across different versions of Rust is unsupported because of the unstable Rust ABI. Dynamic linking to C libraries is supported because the C ABI is stable, but this doesn't present a feasible way to link Rust crates to each other without having to manually define an interface over which they can communicate.
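To make the "manually define an interface" point concrete, here is a minimal sketch of the kind of hand-written C-ABI boundary two Rust crates have to agree on today. In reality the two halves live in separate crates; they are combined into one file here so the example runs, and all of the names (`Version`, `plugin_version`) are purely illustrative.

```rust
// Library side: a type and function exported with a stable C ABI.
#[repr(C)]
pub struct Version {
    pub major: u32,
    pub minor: u32,
}

#[no_mangle]
pub extern "C" fn plugin_version() -> Version {
    Version { major: 1, minor: 2 }
}

// The application side would declare the matching interface:
//     extern "C" { fn plugin_version() -> Version; }

fn main() {
    // Calling through an `extern "C"` function pointer crosses the same
    // ABI boundary that a dynamically linked call would.
    let f: extern "C" fn() -> Version = plugin_version;
    let v = f();
    assert_eq!((v.major, v.minor), (1, 2));
}
```

Every type crossing the boundary has to be `#[repr(C)]` (or a pointer), which is exactly the per-item manual effort being described.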

There is a crate, abi_stable, that allows dynamically loading other Rust crates as a kind of plugin, but it requires manually specifying the interface over which the crates can communicate and does not allow the plugin crate full access to the public API of the application crate.

My goal is to allow you to dynamically link to a crate and use its public API without any extra limitations or manual effort, just as you would if it were statically linked.

Use Case

To help you understand why I want to do this, my specific use-case is to compile a game engine core, like Amethyst, to a shared library and to dynamically link game code and game mods to the engine. This has the following purposes:

  1. To drastically improve the game compilation time so that you can start multiple game projects quickly without having to re-compile the 380+ crates that are exactly the same for the different games and mods.
  2. To enable you to use the game engine core as a library that can be linked into a native Python extension for a Blender plugin.
  3. To prevent using exorbitant amounts of disk space by duplicating the game engine core inside of mods and the Blender plugin.
  4. To allow game mods to have full access to all of the structs, traits, functions, etc., that are in the game core so that you do not have to provide a special or alternative API which they must use to extend the game.

The full architecture that I'm trying to achieve can be found here. The advantages of dynamic linking in my particular use-case are impossible to provide through any other means that I can think of.

How to Create a Stable ABI?

Using the C ABI

One thought I had was to have a new Rust library type like rdylib, where Rust would automatically create extern "C" functions for all of your Rust functions and provide a way to import the crate, like extern "C" crate crate_name, that would automatically create the matching extern blocks so that you could use those functions. I don't know whether this would be a good way to go about it. The advantage would be if you could manage to automate the process without changing rustc, through macros or a compiler plugin, but I don't know whether you could make that work.
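As a sketch of what such automatically generated shims might look like, here is the hand-written equivalent one can produce today. The `crate_name_` prefix and the function names are made up for illustration; a real rdylib feature would presumably pick its own mangling convention.

```rust
// The ordinary Rust function, with an unstable repr(Rust) ABI.
fn add(a: i32, b: i32) -> i32 {
    a + b
}

// The generated C-ABI wrapper that a consuming crate could link against
// via an `extern "C" { fn crate_name_add(a: i32, b: i32) -> i32; }` block.
#[no_mangle]
pub extern "C" fn crate_name_add(a: i32, b: i32) -> i32 {
    add(a, b)
}

fn main() {
    assert_eq!(crate_name_add(2, 3), 5);
}
```

The hard part a macro or compiler plugin would face is not simple functions like this one but translating generic functions, trait objects, and repr(Rust) types into something expressible over the C ABI.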

Getting Rust's ABI Stable

If there is no other way to get dynamic linking working, I want to know what it would take to get Rust's own ABI stable. I saw that a new name mangling scheme was recently merged which seems like a large step in the right direction, but what else is required to get the ABI stable and how does the Rust community feel about the possibility?


Have you considered using a format like capnproto/bincode/etc. for plugin interop? You could expose a small messaging API through a C ABI. IIUC, something analogous is used for Chromium's plugins: https://www.chromium.org/developers/design-documents/plugin-architecture.


The referenced T-compiler RFC was solely for tooling and debugging purposes. The language team was not involved and the RFC should not be interpreted as a path towards a stable ABI.

The language team, I believe, has been historically reluctant to provide a stable ABI for repr(Rust) and has instead favored opt-in approaches to ABI such as repr(C) and repr(int_type). This remains the case. I do not see us moving towards a stable ABI and we do not have current plans for this. Certainly it is not something to seriously entertain in 2019.

Having additional repr(RustStable) opt-in attributes may, however, be a different thing that we may or may not want to consider long-term.


@anp I’ve thought about it before, but it would make things either more complicated, more limited, or, likely, both. The Amethyst library is built on an ECS where you define your own components and systems to create your game while at the same time utilizing components that come with the engine. Implementing traits for the remote crate and using structs and traits from the remote crate in the local crate could be difficult to emulate over a messaging API, and at that point you might as well manually expose a limited C API so that you avoid the network traffic.

@Centril It seems likely that I will have to find another way to do this, then. :thinking: Thanks for the input.

See also the Amethyst scripting RFC, which is the named engine’s solution to this exact issue.


@cad97 Thanks for the link! I hadn’t seen that yet. It may not cover all of my use-case for the Blender plugin, but it should cover games and mods. The Amethyst editor infrastructure could cover the Blender plugin, though, so that might be all that I need.

Yeah, this is definitely the tricky bit and this approach may not be good for your case. I'm not sure where your concerns re: network traffic are from, you don't even need to load plugins in a separate process to benefit from a "custom" ABI. That said, this is a bit far afield of your original question.

This is one of the most worrisome things I’ve read in the Rust forums in a long while, and I’m quite disappointed in this off-the-cuff response. I hope it was written quickly and isn’t the full story here.

Imho one of the biggest mistakes C++ ever made was not stabilizing its ABI; Swift just stabilized theirs and is already reaping the benefits: Swift system libraries, the Swift runtime, Swift UI libraries, all dynamically linked and backwards ABI compatible.

I don’t particularly like that one random comment in a thread from what appears to be a lang team member suddenly reveals that a stabilized ABI doesn’t appear to be on the table at all; if this is the case I’d like an official RFC justifying the reasons why, with good technical arguments for something so wide-reaching, instead of some breadcrumbs in a forum.


Dynamically linking Rust code is one of my dreams. Ideally there should be an rdylib option apart from the existing cdylib one.

If the Rust team had enough motivation, we could imagine a repr(CustomABI) option, where build.rs or something like it would need to provide code to handle the custom ABI at the boundary. In this way, not having a “stable” Rust ABI would not be a problem, because people could define their own and be responsible for their own implementations.

I do think that dynamic linking in Rust has a lot of potential and that it would be unfortunate if it were a long way off.

Dynamic linking and long compile times are so far the only pain points I have found in this amazing language. Dynamic linking could also go a long way, depending on the use-case, toward helping the compile times.

I know nothing about dynamic linking or a stable ABI; however, I am on the Cargo team. Most of the compile-time benefits can be gotten more easily by using one shared target directory for all projects on the same computer. You can experiment with this behavior on stable by setting the CARGO_TARGET_DIR environment variable. There are UI problems with the current system, mostly that there is no good way to clean things up without triggering a full rebuild of all projects on the system.
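The same setting can also be made persistent in Cargo's configuration file instead of through the environment variable; the path below is just an example:

```toml
# ~/.cargo/config.toml (applies to all projects for this user)
[build]
# Every `cargo build` reuses this one directory, so shared
# dependencies are compiled once rather than per project.
target-dir = "/home/user/.cache/shared-cargo-target"
```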

Fairly recent summary: Tools Team: tell us your sorrows

Long discussion of possible future directions: [Idea] Cargo Global Binary Cache

Place to start contributing to get things moving: https://github.com/rust-lang/cargo/issues/6978


For “in 2019”, @m4b, that’s absolutely something to be expected based on the published roadmap (https://blog.rust-lang.org/2019/04/23/roadmap.html).

Things like repr(rust) being intentionally-unspecified are also well-known, I thought, as that unspecified-ness has already been relied upon to allow things like struct (and tuple) layout optimizations (http://camlorn.net/posts/April%202017/rust-struct-field-reordering.html).
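The field-reordering point is easy to observe on current rustc. In this sketch the repr(Rust) size is observed behavior, not a guarantee, which is precisely why the layout is kept unspecified:

```rust
use std::mem::size_of;

// repr(Rust) is free to reorder fields to minimize padding;
// repr(C) must keep declaration order.
#[allow(dead_code)]
struct Reordered {
    a: u8,
    b: u32,
    c: u8,
}

#[allow(dead_code)]
#[repr(C)]
struct DeclarationOrder {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    // Current rustc moves the u32 first, packing u8/u32/u8 into 8 bytes.
    assert_eq!(size_of::<Reordered>(), 8);
    // The C layout pads after each u8, giving 12 bytes.
    assert_eq!(size_of::<DeclarationOrder>(), 12);
}
```

Stabilizing extern "rust" layout would freeze exactly this kind of optimization.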

For functions there have been discussions about potentially special-casing Result returns, for example, to reduce stack usage in the normal case of deep calls returning mostly Oks. That can’t happen if extern "rust" is stabilized.

And, in general, everything about layout is a “what does it mean for unsafe code to be sound?” question, which is an active work topic, so “unless it’s explicitly promised it might not be sound” is the default answer.

So I don’t see how “suddenly revealing” is at all true here.

If you want to participate in discussions, you can join threads like the following one, which is about whether repr(rust) should promise something in homogeneous structs. Maybe there will be a guarantee one day; maybe there won’t.


I am not sure why this is a problem; regular libraries don't have access to the API of their dependent crates.

The way a plugin would communicate with an application is by having some interface crate that defines the public API of the application (to dynamic libraries). I will later write an example of this using #[sabi_trait]-derived trait objects, once I finish the first version of them.

As a general note, we do not use RFCs to specify what we do not want to do; that's just not how our process works. You'd have to actively propose a stable ABI for us to give justifications on an RFC.

An unstable ABI has many benefits optimization-wise that might not be worth disabling by default.

You might want to explore whether a global flag that allows you to control the ABI of repr(Rust) types to, e.g., change that globally to repr(C), would solve your use case. Something like -C repr-rust=C. If you were to compile all your code with that, then you might be able to call it as if it were C, and it wouldn’t matter whether you are calling it from Rust, or from C, or some other language.

That would require mapping the repr(Rust) ABI of all types to C, and setting that mapping in stone. We could “version” this mapping, e.g., -C repr-rust=C-1.0, to allow us to make changes to it in the future.

This would incur a cost everywhere (e.g., in terms of missed optimizations), as opposed to using repr(C), where you only pay for it when you need it (e.g., at your library’s FFI boundary).

EDIT: now that I think of it, -C repr-rust=C-1.0 is not that different from -C repr-rust=stable-1.0, that is, we could have an opt-in versioned Rust ABI that would need to be selected globally. How similar that ABI is to C would kind of be up to us, and a repr(Rust(stable = "1.0")) kind of attribute could be used to control that in a finer-grained way.


Considering the “rust dynamic library” use case, it seems like the goal is to be able to define a “library interface” consisting of public functions and public types exported by the library.

To me, a good solution for such libraries would:

  1. Apply automatically to public items of the library, when compiled as a “rdylib”
  2. Apply to the types of the dependency crates if they are reexported
  3. Apply to std types
  4. Not necessarily apply to types of internal items

While the solution of changing all types in the compiled library with a compilation flag obviously addresses (1), it is not clear to me if it addresses (2) and (3), and it clearly doesn’t cover (4).

I wonder if we could allow applying the #[repr(C)] attribute to any existing type, such that #[repr(C)] T would reexport any T while deeply changing its internal representation to #[repr(C)]. This is different from applying #[repr(C)] to a newtype #[repr(C)] struct Foo(Bar); as, if I’m not mistaken, in the latter declaration we don’t change the representation of Bar inside of Foo.

This would allow us to e.g. declare function like the following:

pub fn foo(x: #[repr(C)] Vec<u8>) -> #[repr(C)] Bar { /* ... */ }

foo is a publicly-exported function that takes a Vec<u8> changed to have the C repr and returns Bar, a custom library-defined type, also changed to have the C repr.

The downside to changing repr deeply rather than only for the external type is that we can’t move, copy, or transmute from T to #[repr(C)] T. We could allow this with a special mem function to_c_repr that would basically have the following signature:

fn to_c_repr<T>(this: T) -> #[repr(C)] T { /* would need to be implemented using compiler intrinsics, I presume */ }

Of course, calling this function would be costlier than the memcpy of a normal move, since we would have to recursively call to_c_repr on all the fields and subfields of T.
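For one concrete type, something like the proposed to_c_repr can already be written by hand today. Here is a hedged sketch for Vec<u8>; CVecU8, to_c_repr, and from_c_repr are illustrative names, and the point of the compiler-supported proposal is that this would work recursively for arbitrary types instead of being hand-written per type:

```rust
use std::mem::ManuallyDrop;

// A C-compatible view of Vec<u8>'s (pointer, length, capacity) triple.
#[repr(C)]
pub struct CVecU8 {
    ptr: *mut u8,
    len: usize,
    cap: usize,
}

// The conversion direction: decompose the Vec without dropping it.
fn to_c_repr(v: Vec<u8>) -> CVecU8 {
    let mut v = ManuallyDrop::new(v);
    CVecU8 {
        ptr: v.as_mut_ptr(),
        len: v.len(),
        cap: v.capacity(),
    }
}

// And back: reassemble the Vec, which resumes ownership of the buffer.
// Safety: `c` must have come from `to_c_repr` (same allocator, valid triple).
fn from_c_repr(c: CVecU8) -> Vec<u8> {
    unsafe { Vec::from_raw_parts(c.ptr, c.len, c.cap) }
}

fn main() {
    let c = to_c_repr(vec![1u8, 2, 3]);
    assert_eq!(c.len, 3);
    assert_eq!(from_c_repr(c), vec![1, 2, 3]);
}
```

Note that this particular conversion is cheap because only the header is rewritten; the recursive to_c_repr described above would have to rewrite nested fields as well, which is where the cost comes from.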

Then, we could have one of the following for rdylibs:

  1. Allow adding an #[export] attribute to functions and types in an rdylib such that it adds the various #[repr(C)] attributes on the argument and return types automatically, or
  2. Make that transformation implicitly for pub functions and types in rdylib (note that for types, we don’t need to change their own representation, only the parameters and return types of their methods)

This is just considering the use case of an “rdylib”, not the use case of making assumptions about the layout.

With this solution, I feel like the problem of introducing a #[repr(Rustv1)] representation that would make the layout fixed while retaining some of the rust layout optimizations is orthogonal.


I was referring to this:

I do not see us moving towards a stable ABI and we do not have current plans for this

You never responded to the question I explicitly asked about your comment about never stabilizing it. Please do.

As a meta side note, I find it somewhat unusual to see a statement claiming an ABI may never be stabilized, along with the claim that we won’t be provided an RFC detailing why, but that one must instead submit a full stable ABI proposal (which would presumably be rejected, given the first).

If the lang team truly never intends to stabilize the ABI, why would I or anyone else ever bother to write up such a complex document?

I understand the RFC process is for positive proposals, but in a case as far-reaching as a stable ABI I think it might be reasonable to give the community some initial signals; specifying an ABI is a large undertaking, and if a proposal is just going to be closed because there’s no intention of stabilizing one in the first place, that seems like a pretty big waste of everyone’s resources.


Nobody should lock themselves up in a room for six weeks and emerge with a finished proposal for any big topic. That's asking for lots of wasted effort with any topic, even when the general direction of the proposal is uncontroversial. Discuss and gather feedback early and often, through informal chats (the #design channel on Discord is often full of those), longer discussion threads in this forum, pre-RFCs, smaller RFCs for subsets of the problems, and probably other avenues I'm forgetting right now.

On the specific matter of ABI stability, the numerous and overwhelming arguments against it will crop up in the early stages (some have already been brought up in this thread, where they're tangential to the main topic), and as with any proposal it's the responsibility of the proposal authors to take them into account when deciding how to move forward (e.g., before opening an RFC).

Also: quite a few ABI rules are being discussed, and attempts made to settle them, in the unsafe code guidelines working group. That is the right location for proposing a specification of any part of "the ABI", and such incremental work (feature by feature, first collecting existing guarantees and then extending them in non-controversial ways) is a smarter strategy than trying to jump straight to "full ABI stability", even if the latter were a goal.


Because part of the document is a motivation section, and "current plans" are subject to change given sufficiently-good motivation.

Also, nobody should ever be writing a massive RFC out of nowhere. The process suggests posting things here first to take the temperature so that one can find out these things. And as I've said before, this is by no means secret information -- there are plenty of open issues talking about potential improvements that require things not being locked down, and plenty of accepted RFCs -- like repr(transparent) -- that have talked about how repr(Rust) doesn't guarantee things.

Also, remember that not guaranteeing something is the default position, and not just of the lang team. That's why there are things like the following libs PR when someone wants to rely on a current implementation detail: