Now that we’ve had time to gain some experience with variance, I’ve come to think that we should subset the Variance RFC design and make all traits invariant with respect to their input types (and lifetimes). The primary motivation for supporting variance on traits was so that closure objects (e.g., Box<Fn(T)>) would exhibit variance, but in fact they do not. This is because the closure return type is now an associated type, and any trait with an associated type is always invariant with respect to its inputs.
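To make that concrete, here is a simplified sketch of the shape of the closure traits (not the exact libcore definitions, just the relevant structure):

```rust
// Simplified sketch: the closure return type lives on the trait as an
// associated type, so `Box<Fn(T)>` is roughly sugar for a trait object
// whose `Output` is pinned to a concrete type. Since associated types
// must match exactly, the trait ends up invariant in all of its inputs.
trait FnOnceSketch<Args> {
    type Output;
    fn call_once(self, args: Args) -> Self::Output;
}
```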
The remaining use cases for variance and traits feel somewhat limited to me. One can write a Getter trait, for example:
trait Getter<A> { fn get(&self) -> A; }
But in this case the type A is almost certainly better modeled as an associated type. Similarly, one could rely on variance with Default and deduce that, because &'static T: Default, &'a T: Default also holds; but this doesn’t really occur in practice and in any case could typically be handled with higher-ranked trait bounds. (In the worst case, a need for variance can be worked around with a newtype or wrapper.)
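For concreteness, here is roughly what the associated-type version of Getter would look like, plus a hypothetical make_default helper illustrating how a higher-ranked bound can stand in for the &'static T: Default trick (neither is a proposed API, just a sketch):

```rust
// Associated-type version of Getter: the result type is an output of the
// impl rather than an input to trait matching, so variance never comes up.
trait Getter {
    type Output;
    fn get(&self) -> Self::Output;
}

// Hypothetical helper: instead of relying on variance to go from
// `&'static T: Default` to `&'a T: Default`, just demand the bound at
// every lifetime via a higher-ranked where clause.
fn make_default<'a, T: 'a>() -> &'a T
    where for<'x> &'x T: Default
{
    Default::default()
}
```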
On the other hand, inferring variance for traits requires MarkerTrait and PhantomFn to ensure that all inputs are used. Given the lack of a strong use case, it doesn’t seem worth requiring these markers.
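For reference, the current workaround looks roughly like this (a sketch of today’s usage; Tag and Dispatch are made-up names):

```rust
use std::marker::{MarkerTrait, PhantomFn};

// A marker trait has no methods, so nothing mentions `Self`; today it has
// to extend MarkerTrait so that variance inference has something to go on.
trait Tag: MarkerTrait { }

// A trait whose type parameter never appears in any method has to "use"
// that parameter via PhantomFn for the same reason.
trait Dispatch<A>: PhantomFn<A> { }
```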
The effect in practical terms
Basically, this change would mean that we can deprecate MarkerTrait and PhantomFn and remove them by 1.0. Trait matching would always be invariant. PhantomData would be unaffected.
In principle there is some loss of expressiveness, but I’ve done the experiment of implementing this in the compiler, and no existing code was affected other than tests. I wouldn’t expect much impact on crates in the Cargo ecosystem either.
Future possibilities
If we opt to make all traits invariant now, we could still permit variance in traits later via some sort of opt-in. This might also be useful for associated types. We’d have to work out precisely what this means and how it should look, but presumably by then we’d have strong use cases to serve as a model.
It may also be possible to go back towards an inferred system, but there are some corner cases (such as impls that might overlap if trait matching is variant) that could potentially break. I judge these unlikely to occur in the wild, but you never know. Another problem would be handling unconstrained cases (i.e., those cases that today require marker traits and phantom fns).
Thoughts?