"Generic" externs for high-level FFI

This code is currently rejected:

extern "C" {
  fn foo<T>(x: T);
}

This makes sense: without having access to the body of foo, the compiler cannot monomorphize the generic at compile time.

However, there are a number of reasons you might want to call a generic function external to Rust (key among them FFI with higher-level languages like C++). This cannot be made to work with Rust's current instantiation paradigm.

Link-time monomorphization

All modern C++ compilers implement templates the same way we implement generics: when compiling a translation unit (i.e., a crate codegen unit, for us), the template is used to generate a concrete function, which is emitted as a weak symbol (or in a COMDAT section, depending on the toolchain). At link time, exactly one copy is kept in the binary, since they're all identical.

However, some less popular/older compilers did it differently: they would emit relocations for the mangled symbols of the concrete functions, but only instantiate them at link time. This has a large number of downsides, including that a lot of diagnostics get put off until link time, rather than being reported when the call site is compiled. Yikes!

However, this gives us a way we could make "generic externs" work. When a generic extern-declared function is called, the compiler emits a relocation for it and places its name somewhere (e.g., in an object section named .rust.missing_monos, or whatever). It would be the responsibility of the user to look in this somewhere, parse the names, and then go and perform monomorphization manually, generating object files that contain the requested symbols. For example, in the C++ case, this might mean shelling out to clang to trigger template instantiation. This is similar to how you must provide object files containing extern'ed symbols at link time.
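For comparison, here's a sketch of the manual workaround this feature would automate, written in today's Rust. All symbol names (cpp_abs_i32 and friends) are made up for illustration, and the final module stands in for the object file a C++ build step would normally supply; nothing here is a proposed rustc interface.

```rust
// What we'd like to write:
//     extern "C" { fn cpp_abs<T>(x: T) -> T; }
// What we must write instead: one concrete declaration per instantiation,
// routed through a trait so Rust callers still get a generic front end.
extern "C" {
    fn cpp_abs_i32(x: i32) -> i32;
    fn cpp_abs_f64(x: f64) -> f64;
}

pub trait CppAbs {
    fn cpp_abs(self) -> Self;
}

impl CppAbs for i32 {
    fn cpp_abs(self) -> Self {
        unsafe { cpp_abs_i32(self) }
    }
}

impl CppAbs for f64 {
    fn cpp_abs(self) -> Self {
        unsafe { cpp_abs_f64(self) }
    }
}

// Stand-in for the object file that a C++ build step (e.g. explicit template
// instantiation via clang) would provide at link time.
mod provided_by_cpp_toolchain {
    #[no_mangle]
    pub extern "C" fn cpp_abs_i32(x: i32) -> i32 {
        if x < 0 { -x } else { x }
    }
    #[no_mangle]
    pub extern "C" fn cpp_abs_f64(x: f64) -> f64 {
        x.abs()
    }
}
```

With generic externs, the trait and the per-instantiation declarations disappear; the list of needed concrete symbols gets produced by rustc instead of being maintained by hand.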

The nitty-gritty

There are a lot of moving pieces to what is a pretty controversial idea. Here's how I see this being implemented in practice:

  • extern block fns may have type and const parameters in addition to lifetime parameters, and their where clauses may include these parameters.
  • When such a function is called, the compiler will perform the usual analysis against its signature (checking bounds, expanding associated types, etc.). However, rather than instantiating a generic template, it simply emits a relocation to the mangled name. Such monomorphizations are deferred.
  • Upon completion of emitting the .rlib, rustc emits further data (be it in the .rlib with a sanctioned way of extracting it, or in a sidecar file) that describes monomorphizations that were deferred. Each monomorphization is two parts:
    • Some machine-readable description of the parameters. We'd need to come up with some mini-language to describe things like Foo<i32, false, Bar { x: -1 }>. This may become a problem when closure types or dyns get involved.
    • A linker symbol, which is the symbol in the relocation. This will normally be the mangled name, but its contents are formally unspecified. Tooling external to rustc should not have to parse mangled names.
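To make the two-part record concrete, here's a sketch of a data model for one deferred monomorphization, modeling the Foo&lt;i32, false, Bar { x: -1 }&gt; example above. This is purely an assumption about what the mini-language might carry, not a proposed rustc format.

```rust
// One deferred-monomorphization record, per the two-part scheme above.
#[derive(Debug, PartialEq)]
enum GenericArg {
    Type(String),                                           // e.g. "i32"
    ConstBool(bool),                                        // e.g. false
    ConstAdt { path: String, fields: Vec<(String, i128)> }, // e.g. Bar { x: -1 }
}

struct DeferredMono {
    /// The symbol the emitted relocation targets. Formally unspecified;
    /// tooling treats it as an opaque string.
    symbol: String,
    /// The generic item being instantiated, plus its arguments.
    template: String,
    args: Vec<GenericArg>,
}

// Render the record back into a human-readable instantiation, the way a
// diagnostic or a bindings generator might.
fn render(m: &DeferredMono) -> String {
    let args: Vec<String> = m.args.iter().map(|a| match a {
        GenericArg::Type(t) => t.clone(),
        GenericArg::ConstBool(b) => b.to_string(),
        GenericArg::ConstAdt { path, fields } => {
            let fs: Vec<String> = fields.iter()
                .map(|(name, val)| format!("{}: {}", name, val))
                .collect();
            format!("{} {{ {} }}", path, fs.join(", "))
        }
    }).collect();
    format!("{}<{}>", m.template, args.join(", "))
}
```

The hard cases called out above (closures, dyn types) are exactly the ones this little model can't express, which is why designing the real mini-language is listed as an open problem.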

At link time, the caller must provide .a files that include the requested symbols. Failure to do so is an ungraceful link error. Tooling could read the list of deferred monomorphizations and produce compatible Rust (or other-language) code that would compile to objects with the correct symbols.

Why Bother?

The main issue is that there is no way to write a generic function in Rust that calls out into external code (well, not in a way that couldn't be achieved with a normal function). This is a problem for seamless interop with languages that have Rust-like monomorphization, even though it's otherwise completely feasible (with post-monomorphization errors, but those are a given with anything involving externs).

There are a lot of problems with this approach. The top ones that come to mind are:

  • Designing the machine-readable mini-language of type parameters. We should not stabilize mangling to make this work.
  • We're pretty screwed for closure types. I don't think there's a way to make this work.

I'm sure others will point out other technical issues. I don't think this approach adds any new problems that weren't already present with the current way we link foreign code (e.g. cc), except that Cargo build scripts don't have a good way of getting this information. This seems orthogonal, and something that can potentially be sorted out with a "post-compile" or "link-time" version of build.rs.

4 Likes

I think the first step to be made here is to develop a viable cross-language ABI that includes vtables. Something that works for both Rust traits and C++ objects.

Once you have that in place, we can make polymorphized versions of generic functions.

2 Likes

...and if you do that, you can describe the whole extern fn signature, right? So, for example, you could invoke an overloaded fn.

Not to hijack, but I've been thinking about compiled code specifying the fns it uses like that.

That's different though; this is about monomorphization. Polymorphism only works in a very select set of cases, and would not work for, say, calling std::vector::push_back from Rust. Polymorphism also isn't zero-cost, which is supposed to be a selling-point of both languages. If you're going to go to the trouble of porting move constructors to avoid boxing, this seems like the wrong approach.

Not to mention that you can't really make Rust trait objects work with C++'s belief that sizeof(Base*) == sizeof(Derived*), or that sizeof and alignof are not polymorphic (unlike in Rust, i.e., size_of_val and align_of_val).

Yes, you could use this to implement overloading, but that's no different than implementing overloading via traits.
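For readers who haven't seen the pattern, "overloading via traits" looks like this in today's Rust: one callable name whose behavior is selected by the argument type, which is the standard Rust stand-in for a C++ overload set. (The shapes here are just an illustration, not anything from the proposal.)

```rust
// A trait models the overload set; each impl is one "overload".
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { side: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// The "overloaded" entry point: monomorphized per concrete argument type.
fn area<T: Area>(shape: &T) -> f64 {
    shape.area()
}
```

A generic extern would let the body behind such a front end live in foreign code, but the dispatch mechanism is the same either way.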

I've definitely wanted something like this when doing C++/Rust integration work on Stylo; we ended up with a TOML file that drove a Python script to generate the inputs and invocation for bindgen, which wasn't ideal.

It's one of those problems that has many really terrible workarounds and no good ones. I've been dropping hints about such a design for ages but not putting in the work for a more concrete design, thanks @mcy for actually fleshing it out!

Given that cxx provides type-generic FFI for a few types, I'd like to see a comparison to cxx's approach, just for completion.

cxx's bindings
  • &[T] (Rust) <=> rust::Slice<const T> (C++)
  • &mut [T] (Rust) <=> rust::Slice<T> (C++)
  • Box<T> (Rust) <=> rust::Box<T> (C++)
  • Vec<T> (Rust) <=> rust::Vec<T> (C++)
  • fn(T, U) -> V (Rust) => rust::Fn<V(T, U)> (C++)
  • [T; N] (Rust) <=> std::array<T, N> (C++)
  • std::unique_ptr<T> (C++) <=> cxx::UniquePtr<T> (Rust)
  • std::shared_ptr<T> (C++) <=> cxx::SharedPtr<T> (Rust)
  • std::vector<T> (C++) <=> cxx::Vector<T> (Rust)

I believe cxx works exclusively via type erasure on the FFI boundary, but it's worth an explicit comparison.
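One concrete reading of "type erasure on the FFI boundary" (a sketch of the general technique, not cxx's actual implementation): the generic type crosses the boundary as an opaque representation, and only the declared signature on each side knows what T was. For a slice, that's a pointer/length pair:

```rust
// The erased, #[repr(C)] shape that crosses the FFI boundary.
#[repr(C)]
struct RawSlice {
    ptr: *const u8, // erased element pointer
    len: usize,     // element count, not byte count
}

// Per-T shim on the Rust side; the foreign side reinterprets `ptr`
// based on the element type its own declaration promises.
fn erase_slice<T>(s: &[T]) -> RawSlice {
    RawSlice {
        ptr: s.as_ptr() as *const u8,
        len: s.len(),
    }
}
```

The contrast with this proposal: erasure fixes one ABI shape per container up front, while deferred monomorphization would let arbitrary instantiations be generated on demand.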

2 Likes

This all sounds great and I'm excited to see work on this problem, but there is one thing that came to mind that wasn't mentioned:

How is the mangling going to work with multiple crates of the same name, but different versions? That is, if I export using somedep-1.0 and update to somedep-2.0, am I breaking the ABI of my exported interface by doing this? Am I also breaking the API? If it does, that seems to imply that the calling code needs to know the version of the crate exporting symbols I expose as well. If it doesn't break either, how is the new frobnitz field in that exported type going to work out?

Repeat the process for builds with different feature flags enabled, if that wasn't hard enough. :)

The proposal seems to be about invoking C++ from Rust.
Isn't your concern related to invoking Rust from C++?

Hmm, yeah. But I still see the same problem. If I want to call cxxapi::generic::<library_type>(), how is the C++ library going to implement it for library_type from crate version 1.0 or 2.0 reliably (or with and without the frob feature that changes the type's size with some private member)? The name mangling depends on that, doesn't it? If it doesn't…what does the C++ on the other side look like then? Are only a blessed subset of types where some ABI layout guarantees can be made allowed through (mostly from core and std I imagine)? Are we getting a stable ABI layout with this? ;)

cxx does all of this manually, and doesn't give you an out for instantiation to fail. For example, you cannot write a user-defined C++ function template that expects to be able to call a function Foo() or whatever. Mind, this doesn't fit into Rust's ordinary generics model, which is fine, because this feature kind of explicitly breaks it.

Like, if you give me text I can add it, but it's solving different problems.

Versions and feature flags are a Cargo fiction; they don't exist at the language level. cfgs exist, which are a more general form of features. There are some ways in which Cargo threads version-specific disambiguation values so that the symbols are different (-Cmetadata is the big one), but the tooling that consumes the monomorphization list would need to discover this through the same side channel that invokes it. In theory Cargo could do this, but that is a solvable problem that is just a matter of adding more build.rs-like things; after all, we already need hacks to use normal extern blocks with non-shared-library code.

If I'm using something like Bazel or something that outputs Ninja, I probably won't have Cargo features or versions (for first-party code, at least).

You (the person building interop tooling) have to make one up that phrases things in terms of the system C calling convention. Neither Rust nor C++ has a stable calling convention, and specifying an ABI is really, really hard (see the Itanium C++ ABI docs). (GCC-descended C++ toolchains pretend they have one, but this explodes if you mix C++ versions.)

C++ actually has this exact feature already: it's called declaring a function template and not defining it: Compiler Explorer

template <typename T>
T Foo();

void Bar() {
    int x = Foo<int>();
}

This code compiles, and expects the caller to provide a definition for the mangled symbol (starting with _Z3Foo...) somewhere.

1 Like

This would be a really powerful tool to have!

Fleshing this bit out more: I'm imagining a nice way to use this would be via a bindgen-like tool that takes the Rust-side arguments, materializes them (and perhaps declarations of their methods) as ABI-compatible C++ types, and then generates explicit instantiations, aliasing the requested mangled names to them.

The first half of this is very close to what cbindgen does today- take the generic arguments in this mini-language and re-declare them in another language. But since the information is generated by rustc rather than being scraped from the source... maybe there is a way to make closures work, and also simplify(?) some other stuff.

We'd need a #[repr(C)] layout and an extern "C" call operator. For hand-written types, we typically just require the user to write things this way if they want to use them with (c)bindgen. But maybe here the use of a "generic extern" is enough. Rustc could include, in the mini-language:

  • A #[repr(C)]-compatible version of the closure's layout. Perhaps size and alignment are enough? Maybe Copy-ness and/or drop glue if we're feeling fancy (or see below)?
  • An extern "C" signature and symbol for (an exported wrapper of) <{closure} as Fn>::call. Maybe the generic extern's bounds could drive this, to make it work with arbitrary traits. (Or, at least arbitrary traits whose instantiated method signatures would be valid with extern "C"?)
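The "extern "C" wrapper of <{closure} as Fn>::call" idea can be sketched in today's Rust by hand-erasing a closure to an opaque data pointer plus an extern "C" call thunk. The names and the fixed i32 -> i32 signature here are assumptions for illustration; rustc would generate the thunk per instantiated closure type.

```rust
// The #[repr(C)] pair that could cross the FFI boundary.
#[repr(C)]
struct ErasedFn {
    data: *const (),
    call: unsafe extern "C" fn(*const (), i32) -> i32,
}

// The per-closure-type thunk: recovers the concrete type F and invokes it.
unsafe extern "C" fn call_thunk<F: Fn(i32) -> i32>(data: *const (), arg: i32) -> i32 {
    let f = unsafe { &*(data as *const F) };
    f(arg)
}

// Pair a borrowed closure with its monomorphized thunk.
fn erase<F: Fn(i32) -> i32>(f: &F) -> ErasedFn {
    ErasedFn {
        data: f as *const F as *const (),
        call: call_thunk::<F>,
    }
}
```

The foreign side only ever sees the (data, call) pair, which is why a size/alignment description plus an extern "C" call symbol could plausibly be enough for the mini-language to hand over.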

I can also see this working in the other direction! That is, not only could a tool generate the C++-side declarations and template instantiations for a Rust-side generic extern, but it could also take C++-side template functions and generate Rust-side generic externs for (appropriately-adapted, a la cxx?) versions of them.

(Extra moonshot/cursed version of that last thing: generate a Rust trait and some glue for any ADL functions used by such a C++ template function, so Rust can use idiomatic <algorithm>/<ranges>/Stepanov-style C++ almost unmodified?)

This is precisely what I had in mind. Thanks for pointing it out. =)

1 Like