Why can't an enum consist only of a ZST and a single DST?

More generally, why can’t an enum be dynamically sized?

I ran into this when implementing this set of functions:

trait Foo<T> {
	fn f1(self) -> Self;
	fn f2(self) -> (T, Self);
	fn f3(self) -> Option<Self>;
}
//the trait bound `Self: std::marker::Sized` is not satisfied [E0277]: 
//the trait `std::marker::Sized` is not implemented for `Self` 
//(consider adding a `where Self: std::marker::Sized` bound,
//required by `std::option::Option`)
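
Following the compiler's hint makes the declaration compile; here is a minimal sketch (only f3 needs the bound, since Option<T> carries an implicit T: Sized requirement):

    trait Foo<T> {
        fn f1(self) -> Self;
        fn f2(self) -> (T, Self);
        // Option<Self> requires Self: Sized, so this method opts back in:
        fn f3(self) -> Option<Self> where Self: Sized;
    }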

Which led me to

enum Bar<T: ?Sized> {
	None,
	Some(T)
//the trait bound `T: std::marker::Sized` is not satisfied [E0277]: the trait `std::marker::Sized`
//is not implemented for `T` (consider adding a `where T: std::marker::Sized` bound, only
//the last field of a struct may have a dynamically sized type)
}

But is there some fundamental reason that an enum can't consist of a DST, as long as none of its other variants carry any non-zero-sized data? It would seem to be possible for the same reasons that a struct or tuple struct can have a DST as its last field.
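
For comparison, the struct version really does compile, and a reference to it can even be coerced to a trait object; a minimal sketch (Tagged is just an illustrative name):

    use std::fmt::Debug;

    // A struct may put a dynamically sized type in its last field...
    struct Tagged<T: ?Sized> {
        tag: bool,
        value: T,
    }

    fn main() {
        let concrete = Tagged { tag: true, value: 42u64 };
        // ...and a reference to it unsizes into a fat pointer:
        let dynamic: &Tagged<dyn Debug> = &concrete;
        println!("{} {:?}", dynamic.tag, &dynamic.value);
    }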


I don't know what the real reason was, but it would definitely be a very difficult change to make now.

Option (and any enum that looks like it) has an optimisation that applies when T has a NonZero field (e.g. when it contains a &-pointer).

This optimisation would no longer be valid if you could coerce Option<&T> to Option<SomeTrait>, and since the optimisation is (as far as I know) guaranteed by the language, it can't be removed without breakage.

Your second line is a bit confusing. Can you elaborate on that optimization?

Basically, an Option<&T> can be (and is) optimized to just a raw pointer, because &T is guaranteed to be non-null, so the None state can be safely mapped to the null pointer value. You can see this in the playground using size_of: https://is.gd/G5aMjX.
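
A self-contained version of roughly what that playground shows (the exact byte counts assume a 64-bit target):

    use std::mem::size_of;
    use std::num::NonZeroU32;

    fn main() {
        // &u8 is never null, so None can be encoded as the null pointer:
        assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>()); // 8 == 8
        // The same trick works for any type with a forbidden bit pattern:
        assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<NonZeroU32>()); // 4 == 4
        // u64 has no spare bit patterns, so the discriminant needs its own space:
        assert!(size_of::<Option<u64>>() > size_of::<u64>()); // 16 > 8
    }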

Oh right, I believe I understand that enum optimization now. In theory (modulo what is actually implemented), any type that is known not to inhabit all of its possible bit representations leaves spare bit patterns into which an Option (or similar enum) can pack its other states.

That said, I’m still confused why this matters for Option with no &.


To make it clearer to everyone else: that example is missing the outer pointer. A better one would be coercing &Option<String> to &Option<Trait>.

The problem is that Option<String> is the size of String, with the internal data pointer set to null for the None case. But there's no way to know that crucial piece of information for Option<Trait> (without putting it in the vtable, I guess?), and since all you have is a reference to the Option<String>, you can't add the optimized-out discriminant back.
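
A quick sketch of that, again assuming a 64-bit target:

    use std::mem::size_of;

    fn main() {
        // String's heap pointer is non-null, so None hides in that pointer field:
        assert_eq!(size_of::<Option<String>>(), size_of::<String>()); // 24 == 24
        // A &Option<Trait> reader would have no way to know which bits of the
        // referent encode None; that knowledge is specific to String's layout.
    }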


It would be interesting if there were a marker trait Foo (since I can't think of a good name right now) that other traits could require as a supertrait, which would in turn oblige any implementing types to leave room in their representation for such a discriminant.
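
A purely hypothetical sketch of what that could look like (the name HasNiche and the whole shape are invented; nothing like this exists in the language today):

    /// Hypothetical marker: implementors promise that at least one bit pattern
    /// of their representation is never a valid value, so an enum wrapping them
    /// could stash its discriminant there even behind a reference.
    unsafe trait HasNiche {}

    // Other traits could then demand it of their implementors:
    trait NicheRequiring: HasNiche {}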
