Trait-based enum Variants

I think allowing trait implementations to be passed around like manually defined enums would be a useful feature, and it fits well into Rust's core principles of zero-cost abstractions and letting the developer choose how they want to store and pass around data. It would mean better cache locality, faster method calls (a branch on the discriminant instead of an indirect call) and potentially even inlining and other optimizations. Manually defining such enums isn't always possible or ergonomic (see below), so we have to fall back to dynamic dispatch, while also losing the ability to match on those (non-exhaustive) enums.


Add additional syntax, another keyword (or reuse enum), a macro-like annotation or similar on a trait or type (see examples below), which indicates to the compiler that this shouldn't be an impl MyTrait or dyn MyTrait. Instead, when compiling the final binary or dynamic/static library (basically once no new types can be added), the compiler should create a new enum containing one variant for every type implementing MyTrait and use that, together with an implementation of MyTrait that forwards each call to the underlying variant's trait implementation (similar to how dyn MyTrait works from the caller's perspective).

This enum type would be Sized if MyTrait is Sized, i.e. it would have a size known at compile time.

Examples and Potential syntax

trait MyTrait {
    fn hello(&self);
}

struct MyStruct;
impl MyTrait for MyStruct { /*...*/ }

fn example1(data: enum MyTrait) {
    data.hello() // Forwarded to the respective variant
}

fn example2() -> enum MyTrait {
    // Creation works like for any other struct; the value is automatically
    // converted to the respective variant of the (anonymous) enum
}

fn example3(data: enum MyTrait) {
    match data {
        MyStruct => todo!(),
        String => todo!(),
        MoreComplexType { a, b, c } => todo!(),
        ComplexTypeWithGeneric<String> { s, .. } => todo!(), // This could be the same syntax as when specifying generics for a function
        ComplexTypeWithGeneric<_> { a, b, c, .. } => todo!(), // Ignore the generic parameter
        // By its nature, every dependency with access to MyTrait can add new
        // types and thus variants, so this must be non-exhaustive everywhere
        // (except in binary crates)
        _ => todo!(),
    }
}

fn example4(data: &mut enum MyTrait);

struct Container {
    data: enum MyTrait,
    // For comparison: dyn (I don't know off the top of my head whether Box is
    // needed if MyTrait itself is sized)
    dynamic: Box<dyn MyTrait>,
}
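For reference, here is a rough sketch in today's Rust of what the generated desugaring could look like, with a single implementing type (all names here are hypothetical; the real proposal would generate this only once all impls are known):

```rust
trait MyTrait {
    fn hello(&self) -> &'static str;
}

struct MyStruct;
impl MyTrait for MyStruct {
    fn hello(&self) -> &'static str { "hello from MyStruct" }
}

// What the compiler could generate: one variant per implementing type...
enum EnumMyTrait {
    MyStruct(MyStruct),
}

// ...plus a forwarding impl, so calls dispatch via a match on the
// discriminant instead of through a vtable.
impl MyTrait for EnumMyTrait {
    fn hello(&self) -> &'static str {
        match self {
            EnumMyTrait::MyStruct(inner) => inner.hello(),
        }
    }
}

fn main() {
    let data = EnumMyTrait::MyStruct(MyStruct);
    assert_eq!(data.hello(), "hello from MyStruct");
}
```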

Open Questions

  • The creation of a new enum type (or its size, etc.) could lead to other types gaining a valid trait implementation and thus needing to be added to another such auto-enum
  • The compiler might (internally) have some issues with types where their size and potentially alignment depend on other types it might not have seen, yet.
  • Is it possible to automatically choose the discriminant size (unless specified)?


Let's say we have a library or framework which contains a (sized) trait for some kind of data that can exist in multiple forms. Normally you'd use an enum for this, but the library/framework should be generic and work with new types created by users (in a binary or other library crate). Alternatively we could use Box<dyn MyTrait>, paying the cost of an indirection and at the very least a pointer of 4 or 8 bytes, plus a second pointer for the underlying type and its methods. For most cases this is absolutely fine, but it does feel wasteful - spending those 4-8 bytes plus type information where a single u8 would be sufficient if it were an enum.
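To make the size argument concrete, here is a small sketch with made-up type names (exact sizes are platform- and layout-dependent; the assertions assume a typical 64-bit target):

```rust
use std::mem::size_of;

trait Shape {
    fn area(&self) -> f64;
}

struct Circle(f64);
struct Square(f64);
impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 } }
impl Shape for Square { fn area(&self) -> f64 { self.0 * self.0 } }

// Hand-written stand-in for the proposed `enum Shape`.
enum AnyShape {
    Circle(Circle),
    Square(Square),
}

fn main() {
    // Box<dyn Shape> is a fat pointer: data pointer + vtable pointer,
    // with the payload itself on the heap.
    assert_eq!(size_of::<Box<dyn Shape>>(), 2 * size_of::<usize>());
    // The enum stores the f64 payload inline, next to a small discriminant
    // (padded to the payload's alignment).
    println!("Box<dyn Shape>: {} bytes", size_of::<Box<dyn Shape>>());
    println!("AnyShape: {} bytes", size_of::<AnyShape>());
}
```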

We can also use generics (or GATs if we have too many) to let the user define an enum in their crate, which they can pass in as a type parameter or in the function call. This achieves the goal of being efficient in terms of memory size and potentially allows more optimizations than Box<dyn MyTrait>. It does however have four downsides:

  • The user has to define this enum - this could be solved or made easier by providing a macro that does it
  • The user has to specify this enum type wherever they want to use the library/framework, and the compiler can't infer it from other places, leading to some boilerplate code both in the library and in the user's crate - not really ergonomic, but doable
  • Suppose the user isn't the only one using this library, but is themselves providing another library (think of the diamond pattern in multiple inheritance), and a third party uses both libraries. If the second library requires certain variants on this type to even function, it would need to expose those generics too, and the outermost user would need to write an enum satisfying all the dependencies, while also potentially having to deal with conflicting variant names.
  • The library may depend on one variant being available, but it cannot construct that variant (or do anything else with the enum directly), since it isn't in control of the enum - short of requiring yet another function on the enum just for creating its variants.

So we currently have the following options (unless I'm missing something):

  • Box<dyn MyTrait>: Performance and memory size downsides, straying from the zero-cost abstractions Rust is famous for.
  • Enum in library: Not extendable; see also the similar feature request/question "Do we have any proposals on extensible enums?" - doing this with extensible enums could be possible, too.
  • Generic for this type everywhere (or bundled into a single trait with GATs): Some boilerplate, plus the above-mentioned problem of not being able to use it as an enum (since it could also be a struct) - thus this is mainly useful for storing some user data.

A more specific example

A library that works with multiple database backends (granted, the performance overhead of Box<dyn MyTrait> isn't too big in this example, but the same applies to something called far more often). Each backend provides a data type the library has to store and work on, all of which implement a trait defined by some generic database crate. Without any of the backends knowing about this feature, the library could provide an interface using an enum of all possible backend implementations and use that enum type (and the methods defined on the trait) internally, potentially without the user of the library even noticing - and without dynamic dispatch, since it's an enum. A user could use this library's functionality with a custom/new database backend without the library providing a feature flag for (or even knowing about) this backend, still without requiring dynamic dispatch.
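A minimal sketch of that pattern with today's hand-written enums (crate and type names are invented for illustration):

```rust
// Trait as a generic database crate might define it.
trait Backend {
    fn name(&self) -> &'static str;
}

struct Postgres;
struct Sqlite;
impl Backend for Postgres { fn name(&self) -> &'static str { "postgres" } }
impl Backend for Sqlite { fn name(&self) -> &'static str { "sqlite" } }

// The enum the library would like the compiler to generate automatically.
enum AnyBackend {
    Postgres(Postgres),
    Sqlite(Sqlite),
}

// Boilerplate forwarding impl: this is what the proposal would derive, and
// what a user-added backend would force someone to extend by hand today.
impl Backend for AnyBackend {
    fn name(&self) -> &'static str {
        match self {
            AnyBackend::Postgres(b) => b.name(),
            AnyBackend::Sqlite(b) => b.name(),
        }
    }
}

// Library-internal code works on the enum: static dispatch, no Box, no vtable.
fn describe(b: &AnyBackend) -> String {
    format!("using backend: {}", b.name())
}

fn main() {
    let b = AnyBackend::Sqlite(Sqlite);
    assert_eq!(describe(&b), "using backend: sqlite");
}
```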

Similar Questions I've found

  • Representing closed trait objects as enums - #19 by Ixrec - While this couldn't be used for automatic creation of the return types (due to needing a trait), doing it this way could also solve some of the issues with breaking changes due to adding an additional (previously non-existing) return type.
  • Ideas around anonymous enum types - #2 by Ixrec - By defining all variants of this "anonymous enum" (trait-based enum) by some type implementing a trait, the name of this type (or rather its type/path) can be used for pattern matching, similar to how they're used when destructuring structs in patterns. @Ixrec, you might be interested in this (sorry for the ping). With a private trait it would basically allow anonymous enums as linked above, although requiring an additional trait and a few impl blocks. It does however not completely get rid of the pattern matching issue, for example when implementing MyTrait for a non-struct like u64.

I hope I haven't lost you. I'm curious whether this would even be possible - it would probably require a bunch of internal changes in the compiler, since any impl block can potentially add a new type (or at least a new variant).

Please let me know what you think of this idea, and if you could see it being a useful alternative to Box<dyn MyTrait>.


One issue I foresee is with a trait's (possibly generic) associated types. While there is a good mapping between the impl type and an enum variant, there isn't necessarily a good mapping for its associated types. I generally doubt including them as type parameters of the enum is the right thing (I presume we want such types to be erased for the enum, as they would be for dyn). So it should probably be mentioned/considered how this feature interacts with traits of that ilk.

The compiler can only generate the enum you want, and determine its memory layout and ABI, once all the implementers are known. So you can never have an enum Trait type that accepts any number of trait implementations from any number of crates.


OP: what you want is something like an InlineBox<dyn Trait, MaxLayout> (and some magic way to coordinate choice of MaxLayout). The cost of dynamic dispatch isn't just the dynamic indirection itself (though that isn't insignificant), but rather the barrier to inlining. For most any non-trivial functionality, you might get a jump table dispatch instead of indirect call when using enum dispatch, but you're unlikely to get any inlining through it. Avoiding the data indirection will give you most of the benefit of enum dispatch.
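A rough stable-Rust sketch of the manual-vtable flavor of this idea: an inline "box" with a fixed MaxLayout of two words, restricted to Copy payloads so Drop can be ignored. All names are hypothetical, and a real implementation would need to handle Drop, alignment beyond word alignment, and more trait methods:

```rust
use std::mem::{align_of, size_of, MaybeUninit};

trait Shape {
    fn area(&self) -> f64;
}

#[derive(Clone, Copy)]
struct Circle(f64);
#[derive(Clone, Copy)]
struct Square(f64);
impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 } }
impl Shape for Square { fn area(&self) -> f64 { self.0 * self.0 } }

// Inline "box": the payload lives in `storage`, next to a one-entry
// hand-rolled vtable (just the `area` function pointer). No heap allocation.
struct InlineShape {
    storage: MaybeUninit<[usize; 2]>, // MaxLayout: up to 2 words, word-aligned
    area_fn: unsafe fn(*const u8) -> f64,
}

impl InlineShape {
    fn new<T: Shape + Copy>(value: T) -> Self {
        // The payload must fit the fixed layout.
        assert!(size_of::<T>() <= size_of::<[usize; 2]>());
        assert!(align_of::<T>() <= align_of::<[usize; 2]>());
        // Monomorphized shim: reinterprets the erased storage as a T.
        unsafe fn call<T: Shape>(p: *const u8) -> f64 {
            unsafe { (*p.cast::<T>()).area() }
        }
        let mut storage = MaybeUninit::<[usize; 2]>::uninit();
        unsafe { storage.as_mut_ptr().cast::<T>().write(value) };
        InlineShape { storage, area_fn: call::<T> }
    }

    fn area(&self) -> f64 {
        unsafe { (self.area_fn)(self.storage.as_ptr().cast::<u8>()) }
    }
}

fn main() {
    let shapes = [InlineShape::new(Circle(1.0)), InlineShape::new(Square(2.0))];
    assert_eq!(shapes[1].area(), 4.0);
}
```

This still pays an indirect call per method (no inlining through it), but avoids the data indirection and heap allocation of Box<dyn Shape>, which is the point made above.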

... and separate compilation. OP says the desugar should be done

which makes it "work" but at the cost of breaking separate compilation. This is somewhat acknowledged by

but that severely underrepresents the magnitude of the impact. It's doable (the unsized locals feature exists) but not without major concessions in one direction or the other.

Since this is discussed as an alternative to dyn Trait, there's an implicit understanding that it'd be limited to object safe traits.


The OP saying it works for Sized traits - which are not object safe - implies it's intended to work for at least some traits beyond object-safe ones.


Isn't this basically what we have for Generics (just without mentioning it in the generics list, similar to how impl MyTrait works for functions)? Though granted, with generics any intermediary library can "finish" a generic function by specifying the type instead of being generic itself. So I'd argue we already have such a break in separate compilation, at least to some degree.

Probably true, but as stated above: isn't this basically how generics work in Rust? Functions and types are an abstract representation until they're used and a concrete type is specified, and only after that point do inlining and actual size/layout calculations happen. So in effect this would behave more like a generic type than an unsized type (unless the trait itself is unsized, of course - see below).

I haven't thought much about that part and intended this mostly to avoid issues with unsized types, by requiring that the trait implementations (at least those present in the enum) be Sized - at least while we don't have unsized locals, and because I don't know whether enum variants can currently contain an unsized type (the way structs can have up to one unsized field at the end).

Regarding associated types or GATs: I can see three ways how this could work:

  • Don't allow traits with them to be used in this way (i.e. only object-safe traits). Unless I'm missing something that makes the alternatives impossible (maybe I've used sized vs. unsized wrong), this would probably be the best option, since dyn Trait doesn't allow them either (probably for similar reasons).
  • Don't generate the functions that depend on the GATs for the enum (that'd probably be confusing).
  • Generate those functions, but with another enum as their (return) type. Granted, the only way I can think of to make this work for function parameters is a runtime panic if the type doesn't match, so this is probably not a good option. It's also somewhat opinionated, since the function could instead return a dyn CommonTraitsBetweenReturnTypes - though both are problematic for the same reasons mentioned above.

I think only allowing object safe traits would be the way to go.

I see - or at least I think I see - that you mean something such as <T: MyTrait + Sized> rather than trait MyTrait: Sized {}. To me, limiting this to object-safe traits takes much of the excitement out of this proposal. Here is an example where I've wrapped a non-object-safe trait in an enum to be able to store it in a Vec, which is a minimized workaround from an actual project...

In this case the enum entirely ignores the associated types, which only come back into play because I also implement the trait for the enum (but I'm not sure that is always the right thing to do, especially considering something like specialization).

Anyhow, to me allowing some open form of this rather than a closed enum type would be a welcome addition to the language, especially if it reduces the boilerplate, which balloons as you add types/trait functions. This example goes a bit beyond that: the enum has bounds on the types that aren't on the trait itself, which is also a thing to consider...
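For readers without access to the linked example, the shape of such a workaround might look like this (a made-up minimal reconstruction, not the actual linked code): a trait with an associated type is wrapped in a hand-written enum that erases that type, so values can live together in a Vec.

```rust
// Non-object-safe trait: the associated type prevents `dyn Parser`.
trait Parser {
    type Output;
    fn parse(&self, s: &str) -> Self::Output;
}

struct IntParser;
struct LenParser;
impl Parser for IntParser {
    type Output = i64;
    fn parse(&self, s: &str) -> i64 { s.parse().unwrap_or(0) }
}
impl Parser for LenParser {
    type Output = usize;
    fn parse(&self, s: &str) -> usize { s.len() }
}

// The enum entirely ignores the associated types, which is what lets it
// be stored in a Vec.
enum AnyParser {
    Int(IntParser),
    Len(LenParser),
}

impl AnyParser {
    // No trait impl on the enum: each arm forwards and erases Output to i64.
    fn parse_to_i64(&self, s: &str) -> i64 {
        match self {
            AnyParser::Int(p) => p.parse(s),
            AnyParser::Len(p) => p.parse(s) as i64,
        }
    }
}

fn main() {
    let parsers: Vec<AnyParser> = vec![AnyParser::Int(IntParser), AnyParser::Len(LenParser)];
    let results: Vec<i64> = parsers.iter().map(|p| p.parse_to_i64("123")).collect();
    assert_eq!(results, vec![123, 3]);
}
```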

Apparently I have to read up on the difference between <T: MyTrait + Sized> and trait MyTrait: Sized {}; until now I've seen them as basically identical.

I think your example should still be possible, though. The main reason traits containing GATs are problematic (for these enums) is generating the trait implementation for the enum itself; it could also be feasible that the enum only gets the trait implemented if the trait does not contain GATs. That way we could still use the enum and match statements (as shown in the example from @ratmice). But implementing the trait (even manually) could be difficult, because the user of the library could add more variants.

Unless there is something else inherently problematic with Sized that I don't know of, but as long as you can manually put the types into an enum it should work, I think.

I honestly have no idea how that could be done. It would probably require a wrapping type struct Wrapper(enum MyTrait) on which traits can be implemented, though that might require panicking for types not part of the match statement, which can't be avoided because the enum is non-exhaustive. Since the enum type is effectively shared between everyone using it multiple crates could want to implement a trait on it, which would result in conflicts. And resolving them by looking which variants are in the match arms probably isn't a good idea.
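With today's hand-written enums, the wrapper idea looks roughly like this (names invented; the newtype exists purely so the local crate can implement a foreign trait):

```rust
use std::fmt;

// Stand-in for a compiler-generated `enum MyTrait`.
enum AnyValue {
    Int(i64),
    Text(String),
}

// Newtype wrapper: the local type lets this crate implement a foreign trait
// (fmt::Display here) without running into the orphan rule.
struct Wrapper(AnyValue);

impl fmt::Display for Wrapper {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match &self.0 {
            AnyValue::Int(i) => write!(f, "{}", i),
            AnyValue::Text(s) => write!(f, "{}", s),
            // A truly non-exhaustive enum would need a fallback arm here,
            // which is exactly the panic problem described above.
        }
    }
}

fn main() {
    assert_eq!(Wrapper(AnyValue::Int(7)).to_string(), "7");
    assert_eq!(Wrapper(AnyValue::Text("hi".into())).to_string(), "hi");
}
```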

I'd love it if it works for ?Sized (unsized) types/traits, too, resulting in an enum with a dynamic size in memory.

Whatever the solution ends up being, I think it should give a clear answer to what should happen in these examples:

trait MyTrait {
    type Foo;

    fn make_foo(&self) -> Self::Foo;
    fn consume_foo(&self, foo: Self::Foo);
}

struct MyStruct;
impl MyTrait for MyStruct { /*...*/ }

fn example1(data1: enum MyTrait, data2: enum MyTrait) {
    let foo = data1.make_foo();
    data2.consume_foo(foo); // What is the type of `foo`, and is this call allowed?
}

fn example2(mut data: enum MyTrait) {
    let foo = data.make_foo();
    data = MyStruct::new();
    data.consume_foo(foo); // Is `foo` still valid for the new value of `data`?
}

Overall I'd say my opinions align well with what @DragonDev1906 said about just generating the enum in these kinds of cases (even with associated types) and letting the developer implement the trait using match statements, perhaps with some syntax sugar - which on the whole feels to me like a form of for<T: impl Trait>. However, this begs the question of which compilation unit is deemed the parent of the generated enum for the purposes of the orphan rule.

But to me, at least for my purposes, just generating an enum composed of the impls from all compilation units, and leaving trait impls to the developer, would go a long way towards rounding off some of these rough edges.

So my tendency here would be to say that, absent someone providing impl MyTrait for enum MyTrait (which isn't great syntactically, but I don't have a better idea), the compiler shouldn't attempt to provide an impl of the trait when associated types are involved.

I would probably even consider taking this (not providing trait impls for the enum at any time) as the rule for all enum MyTrait, to avoid these special cases - but that is certainly a preference. It probably needs to be decided, though, because it would seem difficult to later introduce automatic impls where they're possible without introducing overlapping impls where none were previously generated by the compiler. Unless there is a mechanism for the compiler to provide a default trait impl over which all other impls take precedence that I'm not aware of (but I haven't really looked at that sort of problem before).

I agree. I don't think we can (or should) decide yet what should happen in those situations, and I've just realized that the following example, which doesn't use associated types at all, is problematic too:

trait MyTrait {
    fn add(&self, other: Self);
}

fn example3(data1: enum MyTrait, data2: enum MyTrait) {
    data1.add(data2);
}

What should happen if data2 has a different variant than data1?
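With a hand-written enum today, the only real options are a panic or an explicit fallible signature. A sketch of the latter (all names invented; the trait returns Self here so the result is observable):

```rust
trait Add2 {
    fn add(self, other: Self) -> Self;
}

#[derive(Debug, PartialEq)]
struct Meters(f64);
#[derive(Debug, PartialEq)]
struct Seconds(f64);
impl Add2 for Meters { fn add(self, o: Self) -> Self { Meters(self.0 + o.0) } }
impl Add2 for Seconds { fn add(self, o: Self) -> Self { Seconds(self.0 + o.0) } }

// Hand-written stand-in for `enum Add2`.
#[derive(Debug, PartialEq)]
enum Quantity {
    M(Meters),
    S(Seconds),
}

impl Quantity {
    // `other: Self` at the trait level becomes a runtime variant check here:
    // the enum cannot guarantee statically that both sides hold the same type.
    fn checked_add(self, other: Self) -> Option<Self> {
        match (self, other) {
            (Quantity::M(a), Quantity::M(b)) => Some(Quantity::M(a.add(b))),
            (Quantity::S(a), Quantity::S(b)) => Some(Quantity::S(a.add(b))),
            _ => None, // mismatched variants: the open question above
        }
    }
}

fn main() {
    let ok = Quantity::M(Meters(1.0)).checked_add(Quantity::M(Meters(2.0)));
    assert_eq!(ok, Some(Quantity::M(Meters(3.0))));
    let bad = Quantity::M(Meters(1.0)).checked_add(Quantity::S(Seconds(1.0)));
    assert_eq!(bad, None);
}
```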

Either way, here are my thoughts on the examples from @SkiFire13:


We cannot let foo be of type data1_variant::Foo, since at compile time it is not known which variant we have. So the only things we can really do (if MyTrait is auto-implemented) are: let foo be of type enum MyTrait::Foo (probably terrible syntax); or use dyn FooTrait (or Box<dyn FooTrait> or similar), where FooTrait is implemented for all types that can appear as MyTrait::Foo given the implementations of MyTrait. I don't think we can choose that for the developer, so we probably have to forbid this situation from happening (by restricting when the trait is auto-implemented), or let the developer choose by not having type inference here. Another potential type for foo would be Option<MyStruct::Foo>, which would be None for all variants where the type doesn't match. That would give the developer the most options, though it probably requires more work in the compiler.

Consuming foo is more difficult; at the moment I don't know of any reasonable solution (regardless of the type of foo). It would probably require changing the signature of <enum MyTrait>::consume_foo to return an Option or Result (maybe again without return type inference), panicking, or (again) not allowing this situation by not deriving MyTrait for the enum, thus requiring a match statement here.


For make_foo we have the same options as above. The data = MyStruct::new() is effectively equivalent to data = <enum MyTrait>::MyStructVariant(MyStruct::new()), since data is of type enum MyTrait. Thus the consume_foo line has the same issues/restrictions as in example1 (unfortunately). As with a normal enum, you'd need a separate variable for calling consume_foo before putting the value into data, if you want to call the function without the issues described above:

fn example2(mut data: enum MyTrait) {
    let foo = data.make_foo(); // cannot infer type here (as described above)
    let tmp = MyStruct::new();
    tmp.consume_foo(foo);
    data = tmp;
}

I think that can only be the compilation unit that defined the trait: even though it does not know all variants by itself, it is the one "responsible" for the trait. That could be quite limiting, since then only one crate could implement traits for the enum, so everyone else would need to build a wrapper around it. Another option would be that enum MyTrait is "owned" ("the parent of" the enum) separately in each compilation unit that uses it - basically, from the viewpoint of the orphan rule, each compilation unit's enum MyTrait is treated as its own wrapper around the enum. If compilation unit A creates an enum MyTrait and implements serde::Serialize on it (just an example), then passes that value to compilation unit B via a function call, then from the viewpoint of B there exists no serde::Serialize implementation, since it wasn't implemented by the crate defining the trait or by B. I'm not sure that would even be possible, though; if it were, we'd probably already have such a rule instead of the orphan rule. We probably don't because it could be confusing why B doesn't see the trait implementation even though we've just written it in compilation unit A. Whether that's a good idea: I don't know.

Alternatively, enum MyTrait could behave like an impl Wrapper, in that a function accepts any of the wrappers, regardless of who defined it. Since they all share the same memory layout, the main overhead would be that a function may exist twice (bigger binary size) if its assembly depends on which traits the caller has implemented - though I could imagine this slowing the compiler down due to having more types. I don't know enough about the reasoning behind the orphan rule to say whether that's a good idea. It certainly sounds more like something that could be done to improve/loosen the orphan rule itself, as it would have effects outside of trait-based enums (for example in serde), would be more like transparent wrappers whose main purpose is separating trait implementations, and would likely break backwards compatibility if implemented for more than new constructs like trait-based enums.

This sounds a lot like variant types, which I think is a somewhat planned feature?

I think this is not currently how generics work in Rust. In functions, when you have impl Trait, that is implicitly creating a generic parameter, which gets resolved to a concrete instantiation during monomorphisation. Monomorphisation can happen in parallel -- different translation units can potentially instantiate the same generic function, and then the duplicate instantiations can get eliminated at link time. There are some differences to how it works in C++ -- because rust generics always have trait bounds and can only be used via the trait interface, you can generate much of the code for a generic function before knowing a concrete type. But at a high level there is a lot of similarity.

By contrast, asking questions like "what is the size of the largest type that implements the Foo trait" is not something the compiler can know during monomorphisation and codegen as they currently exist, because it requires whole-program analysis. If knowing the size of enum MyTrait is required to compile any translation unit, but the question can't be answered without looking at all the code, then this is a massive change to the compiler: it requires an extra compilation pass at the beginning, before any code can be generated, during which it would collect all the data needed to support the enum Trait feature. So realistically, that's something that would take on the order of years of work, if it ever happens.

The suggestion to use InlineBox<dyn Trait, MaxLayout> strikes me as a very good one. That's going to get you the vast majority of the bang for your buck in this feature request. I don't even think you need the "magic" to determine MaxLayout: the developer can just make an educated guess for that constant, or accept some default, which is often a good tradeoff. If it is ever determined by profiling to be performance-critical in a real program, the developers can compute the right value and commit that to the code. That's something you can do today without waiting for fundamental changes to the compiler. And it's also very simple and easy to debug, not relying on fancy whole-program analysis magic which might work on some builds but not others. (E.g. if I'm running cargo bench -p my-crate, does the magic determine a different number than cargo build --all, and therefore throw off my benchmarks?) So if I were in a situation where I really needed this feature, that would be my go-to solution.


As a side comment - if Rust can ever start using the LLVM devirtualization machinery during LTO, then most of the cost of InlineBox<dyn Foo, ...> can be eliminated in cases where there's only one actual implementor of a trait in the whole program. This has worked very well in clang for a few years now, and is thought to work very well for the case OP describes, where a framework wants to have many customization points but not pay dyn overhead everywhere - because often a user of such a framework may not use a customization point at all, or may use it but still end up with only one implementor actually used in their whole program.


On nightly via feature(ptr_metadata) or feature(set_ptr_value), or on stable via crimes (assuming you know the fat pointer layout), yes. Unfortunately there's no stable way to do an inline trait object box yet (short of implementing a manual vtable).

If there is, I'd love to know. While I don't publish an inline box crate yet (for this specific reason, though I have indyn reserved for this), I do have a family of crates whose purpose is to do pointer tricks in a safe and stable manner.

While it's not going to be an optimization benefit without whole program devirtualization, I do have a concept for implementing global resources as a library (linker tricks) which I need to test out. When you can, carrying the dependency injection member around is marginally preferable (since you can mix and match), but having a global resource is genuinely a good choice in some cases.


I guess I was thinking of this crate: smallbox - Rust

Maybe I don't understand what you mean exactly by inline box

Edit: I see, maybe you want to inline the v-table?

SmallBox is going the crimes route. It's unfortunately common in the ecosystem, due to Rust not providing a way to do this properly yet. Even dyn_clone relies on this crime to be functional.

The order of fat slice and trait object pointers being address then metadata is extremely informally in the "we won't change it without good reason" bucket, and there's no theorized reason why we might want to change it, but it's not guaranteed.

I have a few features I "should" be putting the legwork into getting available on stable. set_ptr_value is definitely on the list, although after layout_for_ptr (similarly a fundamental capability) and alloc_layout_extra (just useful API surface that's gone unchanged for years).


I see, I did not properly appreciate this until now. Thank you

So this is probably also why their layout is:

pub struct SmallBox<T: ?Sized, Space> {
    space: MaybeUninit<Space>,
    ptr: *const T,
    _phantom: PhantomData<T>,
}
and they do not try to share space between ptr and space as small string optimizations do.

Because if T turns out to be a dyn something, and ptr is fat, the API still requires them to be able to return &mut T, and supply the extra stuff. If space is the pointee then the vtable pointer isn't there, and the simplest way to preserve the vtable stuff is if they just hold a *const T and the compiler preserves it for them?

Yep, pretty much exactly. There's a couple more wrinkles to consider — variance technically is impacted by the pointer, and pointer unsizing coercion requires the type to contain a singular pointer to the unsized type (along with an unstable trait implementation) — but the primary reason to hold a pointer is that it's the only way to carry fat pointer metadata. (Unstable you have <T as Pointee>::Metadata, but that can't undergo pointer unsizing coercion without first being rehydrated into a pointer.)