[Idea] Type erasure in Rust through parametricity

Summary: This post proposes a mechanism for type-erased generics by restoring parametricity of types in a limited fashion.

Rust’s primary mechanism for generics is monomorphization, following the same path as C++. This is generally good for performance and optimization, but it can be bad for binary size and compile time, and more importantly it does not offer the full expressiveness of higher-rank types (not to be confused with higher-kinded types or higher-rank trait bounds).

The lack of expressivity is evidenced by the fact that trait objects do not support generics, since you can’t exactly monomorphize at run time. In a way, monomorphization-based generics simply aren’t “first-class” in Rust: you can’t store them in variables without fixing all the type parameters.

It would be nice if the compiler could distinguish between generics that need to be monomorphized and generics that are erasable. Consider this generic:

fn swap_refs<'a, T>(x: &mut &'a T, y: &mut &'a T) { std::mem::swap(x, y) }

Monomorphizing this kind of function is quite wasteful – the representation of references (or pointers) is independent of what they point to, so all specializations are identical in implementation. But it’s not sufficient to look only at the type signature of the function itself, because with the introduction of specialization, types in Rust are no longer parametric. The debugit crate exemplifies this.
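To make the waste concrete, here is a sketch in today’s Rust: each call site below stamps out a separate copy of swap_refs, even though every copy performs the same pointer-sized swap.

```rust
// All monomorphized copies of swap_refs do the same pointer-sized swap,
// regardless of the pointee type:
fn swap_refs<'a, T>(x: &mut &'a T, y: &mut &'a T) {
    std::mem::swap(x, y)
}

fn main() {
    let (a, b) = (1, 2);
    let (mut x, mut y) = (&a, &b);
    swap_refs(&mut x, &mut y); // instantiated at T = i32
    assert_eq!((*x, *y), (2, 1));

    let (s, t) = ("hello", "world");
    let (mut x, mut y) = (&s, &t);
    swap_refs(&mut x, &mut y); // instantiated again at T = &str
    assert_eq!((*x, *y), ("world", "hello"));
}
```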

To make this idea work, it is necessary to have an explicit way to mark a type parameter as parametric. One approach would be to introduce a new OIBIT named, say, Concrete. By default, all type parameters today are implicitly Concrete, but one can opt out by writing T: ?Concrete. Intuitively, a Concrete type is one that is known at compile time and can thus be monomorphized.

To preserve parametricity, the compiler must forbid usage of trait methods that could possibly be specialized if the type is ?Concrete.

Every variable must have a Concrete type, and every function must have Concrete arguments and Concrete return values. For most data types, we expect an implicit rule like:

impl<T: Concrete + ?Sized> Concrete for MyStruct<T> {}

The exceptions to this rule should be the pointer-like types &T, &mut T, *const T, and *mut T, as well as aggregates of such. DSTs complicate this idea somewhat: the representation of pointers depends on whether the pointee type is Sized. Therefore, we are forced to limit ourselves to Sized pointees only:

impl<'a, T: ?Concrete> Concrete for &'a T {}
impl<'a, T: ?Concrete> Concrete for &'a mut T {}
impl<T: ?Concrete> Concrete for *const T {}
impl<T: ?Concrete> Concrete for *mut T {}
impl<T: ?Concrete> Concrete for Box<T> {}
impl<T: ?Concrete> Concrete for Rc<T> {}
impl<T: ?Concrete> Concrete for Arc<T> {}

Note that std::mem::size_of mustn’t be generalized, because it only makes sense if T: Concrete.
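A quick check in today’s Rust illustrates both points above: thin references share one representation regardless of pointee, while references to unsized pointees are fat (sizes shown hold on mainstream platforms).

```rust
use std::mem::size_of;

fn main() {
    // Thin references all have the same representation, independent of the
    // pointee, which is what makes erasing T behind &T plausible:
    assert_eq!(size_of::<&u8>(), size_of::<&String>());
    assert_eq!(size_of::<&u8>(), size_of::<usize>());
    // For an unsized pointee, the reference is fat (pointer + length/vtable),
    // which is why the impls above restrict themselves to Sized pointees:
    assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());
}
```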

Now, here’s the critical rule for generics: if a generic contains only type parameters that have no bounds other than marker traits (“trivial bounds”), then the function can be type-erased. In other words, ?Concrete type parameters behave just like lifetime parameters today: they are guaranteed to vanish before translation.

For example, this generic can be type-erased because T only has a marker trait bound (Sized):

fn swap_refs<'a, T: ?Concrete>(x: &mut &'a T, y: &mut &'a T) { /* ... */ }

From here, it follows that if a trait has something like this for a method, e.g.

trait Blah {
    fn blah_swap_refs<'a, T: ?Concrete>(&self, x: &mut &'a T, y: &mut &'a T);
}

the generic would no longer cause object-unsafety.
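For comparison, today such a method can only live in an object-safe trait if the pointee type is erased by hand, sacrificing type safety. A sketch (Swapper and PlainSwapper are hypothetical names, not part of the proposal):

```rust
use std::mem;

// Hand-erased stand-in for blah_swap_refs in today's Rust: the pointee type
// is erased to *const (), which keeps the trait object-safe but forces the
// caller to reconstruct the types with unsafe code.
trait Swapper {
    fn swap_erased(&self, x: &mut *const (), y: &mut *const ());
}

struct PlainSwapper;

impl Swapper for PlainSwapper {
    fn swap_erased(&self, x: &mut *const (), y: &mut *const ()) {
        mem::swap(x, y)
    }
}

fn main() {
    let swapper: &dyn Swapper = &PlainSwapper;
    let (a, b) = (1u32, 2u32);
    let mut x = &a as *const u32 as *const ();
    let mut y = &b as *const u32 as *const ();
    swapper.swap_erased(&mut x, &mut y);
    // The caller must recover the typed references by hand:
    let (x, y) = unsafe { (&*(x as *const u32), &*(y as *const u32)) };
    assert_eq!((*x, *y), (2, 1));
}
```

With ?Concrete, the compiler would perform this erasure automatically and keep the call type-safe.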

We could push this idea further: the generic swap_refs can be turned into a bona fide function pointer:

let f: for<'a, T: ?Concrete> fn(&mut &'a T, &mut &'a T) = swap_refs;

Here for<'a, T: ?Concrete> does not designate a higher-rank trait bound, but more of a higher-rank type. This shows that type-erased generics can be treated as ordinary objects, just like function pointers.

One peculiar feature of type-erased generics is the ability to call them without fixing a particular T. In other words, an example such as the following will not fail with “type annotations needed”,

fn foo<T: ?Concrete>() {}
fn main() { foo(); }

because parametricity ensures that it doesn’t really matter what T you pick.

At this point, I’ve probably relied on the simple swap_refs example a bit too much. You might think “What’s the point of this nonsense? You can only quantify over type parameters with trivial bounds!” To that, I present a more elaborate example:

// if you want more flexibility, you can use Box<Fn> instead of plain fn
struct AnimalVtable<T: ?Concrete> {
    new: fn(&'static str) -> FatBox<T>,
    name: fn(&T) -> &'static str,
    clone: fn(&T) -> FatBox<T>,
}

// this function is type-erased despite being a generic!
// be sure to pass &ANIMAL_VTABLE_FOR_SHEEP as the argument
fn make_dolly<T: ?Concrete>(animal_vtable: &AnimalVtable<T>) -> (FatBox<T>, FatBox<T>) {
    let dolly1 = (animal_vtable.new)("Dolly");
    let dolly2 = (animal_vtable.clone)(dolly1.as_ref());
    assert_eq!((animal_vtable.name)(dolly1.as_ref()), (animal_vtable.name)(dolly2.as_ref()));
    (dolly1, dolly2)
}

struct Sheep { name: &'static str }

impl Sheep {
    fn new(name: &'static str) -> FatBox<Self> {
        Box::new(Sheep { name: name }) as _
    }

    fn name(&self) -> &'static str {
        self.name
    }

    fn clone(&self) -> FatBox<Self> {
        Box::new(Sheep { name: self.name }) as _
    }
}

const ANIMAL_VTABLE_FOR_SHEEP: AnimalVtable<Sheep> = AnimalVtable {
    new: Sheep::new,
    name: Sheep::name,
    clone: Sheep::clone,
};

(See this post for more info on FatBox and why it is needed.)

You see, you can encode interesting behavior even with trivial bounds like T: ?Concrete. Using Rust today, it’s not possible to write the make_dolly function, at least not without compromising type safety and usability.
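To see how close today’s Rust gets, here is a monomorphized approximation of the vtable example, using plain Box<T> in place of FatBox<T>. It compiles and runs, but T is not erased: each T stamps out its own copy of make_dolly, which is exactly what the ?Concrete proposal would avoid.

```rust
// A monomorphized sketch of the AnimalVtable pattern in today's Rust.
struct AnimalVtable<T> {
    new: fn(&'static str) -> Box<T>,
    name: fn(&T) -> &'static str,
    clone: fn(&T) -> Box<T>,
}

// Works today, but is monomorphized per T rather than type-erased.
fn make_dolly<T>(vt: &AnimalVtable<T>) -> (Box<T>, Box<T>) {
    let dolly1 = (vt.new)("Dolly");
    let dolly2 = (vt.clone)(&dolly1);
    assert_eq!((vt.name)(&dolly1), (vt.name)(&dolly2));
    (dolly1, dolly2)
}

struct Sheep { name: &'static str }

fn sheep_new(name: &'static str) -> Box<Sheep> { Box::new(Sheep { name }) }
fn sheep_name(sheep: &Sheep) -> &'static str { sheep.name }
fn sheep_clone(sheep: &Sheep) -> Box<Sheep> { Box::new(Sheep { name: sheep.name }) }

const ANIMAL_VTABLE_FOR_SHEEP: AnimalVtable<Sheep> = AnimalVtable {
    new: sheep_new,
    name: sheep_name,
    clone: sheep_clone,
};

fn main() {
    let (d1, d2) = make_dolly(&ANIMAL_VTABLE_FOR_SHEEP);
    assert_eq!(d1.name, "Dolly");
    assert_eq!(d2.name, "Dolly");
}
```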

There’s another use case for type-erased generics: since higher-ranked types are possible for them, it follows that one can implement sound unchecked indexing without using lifetimes as a kludge.

Comments and feedback are highly appreciated (especially if you find a flaw in the logic here).

Edit: Corrected example in response to matklad.


In the make_dolly example, what happens with dolly1 if animal_vtable.clone(&dolly1) panics? Presumably, it should be dropped, so we need to call T’s Drop somehow, and that means we must know its type?


+1 from me. I’m convinced that higher-ranked types would be an expressivity enabler. I found another use case for higher-ranked types just the other day. Sorry, the introduction is a bit long-winded; bear with me.

In Serde, you have the trait Serializer, whose implementing types support serializing into a specific data format. Let’s take a concrete type, serde_json::Serializer, which implements the trait Serializer.

Now, I have a buffer of data I want to serialize into JSON. But it’s just a single location in the code, so I want to do it ad-hoc, just calling the methods of Serializer directly, without my data implementing a trait called Serialize. (Note the difference: Serializer is a trait that supports serializing into a specific output format, like JSON. Serialize is a trait that supports serializing from the implementing Rust data type, like struct MyType.)

I call serialize_seq, which starts serializing a sequence. Next, I’m supposed to call serialize_element for each element, but here’s the problem: it takes a value that implements the Serialize trait, and calls the serialize method of that value, passing it an inner Serializer. So I can’t call the Serializer methods to serialize my elements without passing control to Serde, and I have to implement a new type even for my ad-hoc case.

The point: Implementing traits for newtype’d closures

Here’s something that I tried: implement the ad-hoc serializing logic for the elements in a closure. Then define a newtype that is going to contain that closure. Implement Serialize for that newtype in a way that simply calls the closure. Then you can pass the newtype to serialize_element. Here’s the problem:

struct SerHelper<F /* possibly `: ?Concrete`, enabled by this idea? */> {
    closure: F,
}

impl<F> Serialize for SerHelper<F>
    where for<S: Serializer> F: Fn(S) // Note! This isn't allowed ATM.
{
    fn serialize<S: Serializer>(&self, serializer: S) {
        (self.closure)(serializer)
    }
}

...

        // Ad-hoc serializing in my code
        let mut seq = serializer.serialize_seq();
        for row in rows {
            seq.serialize_element(SerHelper { closure: |serializer| {
                // ... serializing logic for the element
                // that calls serializer.serialize_stuff()
            } })?;
        }
        seq.end()

…you need not only a higher-ranked type bound in the implementation, so that the closure can support any Serializer that’s passed to it, but Rust also needs to be able to reason about which types can safely have higher-ranked type bounds. Here, the closure clearly can, since the stored environment itself isn’t generic, only the arguments are, so it gets fully erased at compile time.
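A scaled-down model of the situation compiles today only because the serializer type is fixed up front. All names here (MiniSerializer, JsonLike, SerHelper) are made up for illustration, not Serde’s actual API; a higher-ranked bound like for<S: MiniSerializer> F: Fn(S) would lift the restriction the comment points out.

```rust
trait MiniSerializer {
    fn write_num(&mut self, n: i64);
}

struct JsonLike { out: String }

impl MiniSerializer for JsonLike {
    fn write_num(&mut self, n: i64) {
        self.out.push_str(&n.to_string());
    }
}

// F is tied to the concrete JsonLike serializer; today there is no way to
// bound F over *every* MiniSerializer at once.
struct SerHelper<F: Fn(&mut JsonLike)> { closure: F }

impl<F: Fn(&mut JsonLike)> SerHelper<F> {
    fn serialize(&self, s: &mut JsonLike) {
        (self.closure)(s)
    }
}

fn main() {
    let helper = SerHelper { closure: |s: &mut JsonLike| s.write_num(42) };
    let mut s = JsonLike { out: String::new() };
    helper.serialize(&mut s);
    assert_eq!(s.out, "42");
}
```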

Also, for reference, an earlier discussion of a related thing: https://github.com/rust-lang/rfcs/pull/1650

After reading the first post more carefully again, I realized that it’s entirely different from what I first thought, but it’s an intriguing idea. I initially hand-waved through it and thought it would allow representing higher-ranked types more generally, with monomorphisation at the usage site as demonstrated in my post, but the difference is that the OP’s idea allows specifying types that don’t need monomorphisation in order to be polymorphic.

(Edit: To be sure, I was thinking of types that don’t need monomorphised memory representations, but that would still need monomorphised code paths on usage. This would allow generic closures.)

Just because rustc uses monomorphisation to implement generics doesn’t mean it can’t share implementations whenever types are truly parametric: it can do that without any language-level changes, purely as an optimisation pass, because it can just look at what the actual concrete types are.

In fact, when using the MSVC linker, I think it may already merge identical functions? Regardless, it would certainly be nice to have that optimisation at the MIR level instead, which could save a huge amount of compilation time.

What you’re suggesting here is a bit different, and attempts to solve the issues with combining explicit virtual dispatch with monomorphisation. The problem is that you end up in a situation where you have an effectively unbounded set of vtables that you might need at runtime, and when these types get transferred across crate boundaries, there may not be any one crate capable of generating a particular vtable which is then required at runtime. In your example you solve it by having to explicitly write out the vtables you are going to use, but this would be extremely tedious in practice.

You can make this more efficient with macros, by requiring that the user declare up-front the list of traits that each type implements, in which case you might end up with something like this: https://crates.io/crates/query_interface

That’s a very good point. This means the current implementation of Box<T> cannot support T: ?Concrete, because Drop requires it. To work around this:

  • One could have a Fat<T> trait object that acts as a wrapper for T. It would store the destructor in the vtable and can therefore be dropped without knowledge of T. Then you’d just use Box<Fat<T>> instead of Box<T>.

    trait Fat<T: ?Concrete + ?Sized> {
        fn into_inner(self: Box<Self>) -> Box<T>;
        fn as_ref(&self) -> &T;
        fn as_mut(&mut self) -> &mut T;
    }
    
    impl<T> Fat<T> for T {
        fn into_inner(self: Box<Self>) -> Box<T> { self }
        fn as_ref(&self) -> &T { self }
        fn as_mut(&mut self) -> &mut T { self }
    }
    
  • Alternatively, one could add native support for passing in vtables and have Rust leverage those whenever destructors are needed. This is likely a lot more involved than it sounds.
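The first workaround has a close analogue in today’s Rust: Box<dyn Any> already stores the destructor in the trait object’s vtable, so it can be dropped (and downcast back) without static knowledge of the erased type. A minimal sketch:

```rust
use std::any::Any;

fn main() {
    // Dropping the box runs String's destructor via the vtable, with no
    // static knowledge of the erased type at the drop site:
    let erased: Box<dyn Any> = Box::new(String::from("Dolly"));
    drop(erased);

    // And the original type can be recovered, as Fat::into_inner would do:
    let erased: Box<dyn Any> = Box::new(vec![1, 2, 3]);
    let recovered = erased.downcast::<Vec<i32>>().unwrap();
    assert_eq!(*recovered, vec![1, 2, 3]);
}
```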

This is a proposal for guaranteed type erasure. This is important: it’s not merely an optimization. If you can guarantee type erasure, it opens up possibilities that aren’t even expressible in Rust today – generics in trait objects being the most obvious example, higher-ranked function pointers being another.

The name “vtable” here is only to help the reader understand what’s going on, not to suggest any kind of automatic generation of vtables. They are definitely not traits, just plain old data structures that you can carry around and do whatever you please. In fact this offers more flexibility than mere traits, at the cost of being more awkward to use.

At this point, it’s quite moot to talk about ergonomics since none of this is even expressible in Rust. The proposal strives to make as few changes to the existing language as possible.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.