Explicit variance annotation has been discussed previously:
Personally, I think covariance and contravariance should be treated like auto traits. For example, consider the following:
```rust
trait WithGat {
    type Gat<T>;
}

impl WithGat for () {
    type Gat<T> = Option<T>;
}

struct Foo<T: WithGat, U> {
    a: T,
    b: <T as WithGat>::Gat<U>,
}
```
Under current rules, the projection in `b` makes `Foo` invariant in `U` (and in `T` as well, despite the covariant use in `a`), for any `T` and `U`. However, `Foo<(), U>` could potentially be inferred covariant in `U`, analogous to how auto-trait inference works today.
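For comparison, here is the analogy made concrete: auto traits already "see through" the projection once `T` is a known type, so `Send`-ness of `Foo<(), U>` is decided per instantiation. This is a minimal sketch of that existing behavior (the `assert_send` helper is just for illustration):

```rust
trait WithGat {
    type Gat<T>;
}

impl WithGat for () {
    type Gat<T> = Option<T>;
}

struct Foo<T: WithGat, U> {
    a: T,
    b: <T as WithGat>::Gat<U>,
}

// Helper that only compiles for Send types.
fn assert_send<T: Send>() {}

fn main() {
    // For the concrete T = (), the projection normalizes to Option<U>,
    // so auto-trait inference sees the real field type:
    // Foo<(), i32> contains () and Option<i32>, both Send.
    assert_send::<Foo<(), i32>>();
    // By contrast, Foo<(), std::rc::Rc<i32>> would fail this check,
    // because Option<Rc<i32>> is not Send.
    println!("ok");
}
```

The proposal is that variance inference could work the same way: once `T = ()` normalizes `Gat<U>` to `Option<U>`, covariance in `U` is visible, just as `Send` is.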
A more trait-bound-like model of variance would also give extra flexibility in other ways. For example, it would allow constraining variance only at the use site:
```rust
#![feature(trait_alias)]

// Imagine that only some `MyDeref` impls can provide covariance
trait MyDerefCovariant = MyDeref
where
    for<'a> <Self as MyDeref>::Target<'a>: covariant_in<'a>;

fn foo<'a, T: MyDerefCovariant>(a: T::Target<'a>, b: T::Target<'a>) {}

fn bar<'a, 'b, T: MyDerefCovariant>(a: T::Target<'a>, b: T::Target<'b>) {
    foo::<T>(a, b)
}
```