You sure you don't mean fn(&'a ()) for contravariance? If I accept fn() -> &'a () for some fixed lifetime, I can provide a fn() -> &'static (), because every return value of that pointer will live for at least as long as whatever 'a is.
In general, input position is contravariant, output position is covariant; this is also why &'a T is covariant in T. &mut T is invariant because it's both an input and an output.
FWIW, I agree that explicit variance makes sense for GATs. Though...
To me, co seems fairly non-obvious, even as someone who knows what 'covariant' and 'contravariant' mean. And the vast majority of Rust users don't know what those terms mean.
I don't have any great alternatives, though.
Here is one not-so-great possible alternative:
Invent a SubtypeOf trait, and write something like this:
trait Trait {
    type GAT<'a>: Eq
    where
        for<'x, 'y: 'x> Self::GAT<'x>: SubtypeOf<Self::GAT<'y>>;
}
However, this would require (1) the ability to write lifetime bounds in HRTBs (at least if you don't want an ugly workaround) and (2) the ability for the compiler to actually understand subtyping via trait bounds.
struct GhostFree<#[covariant] T>;
and so on for any place with potential generic parameters.
this would only be able to add constraints / reduce variance, erroring (or requiring unsafe) when it would conflict with the variance of any of its fields:
struct Fails<#[contravariant] T>(T); // Error, loosening variance is tricky and error-prone and requires `unsafe`
Both it being a mouthful (no in / out shorthands) and requiring unsafe for cases where variance is loosened seem appropriate for a mechanism that is easy to get wrong, leading to unsoundness.
In an ideal world, NonNull<T> would be #[invariant], and users of it that know what they're doing (mainly, that their pointer either represents ownership, or that it disallows mutation of the pointee) would deliberately opt into #[covariant] variance through unsafe.
I find the current situation, where NonNull<T> may be used as a niche-optimized *mut T in &mut-like structs, to be very error-prone.
The way I have incorporated explicit variance into a trait in the past is via requiring the implementor to provide an upcast implementation compatible with the variance that I need.
My primary concern here is for the convenience of the trait consumer. If I'm writing a complex library that provides a trait implementor, I'm okay with writing some boilerplate, but I absolutely do not want authors of downstream crates to have to fight with weird lifetime errors in situations that would normally Just Work (and that's definitely gonna happen, even if the documentation warns you about it). The upcast_gat approach is definitely useful as far as making it work at all - I actually thought of the upcast_gat approach as a workaround before I posted this thread - but it's not terribly satisfying as a solution. (And a macro wouldn't help with that aspect, because the problem for downstream crates is that they have to think about this issue in the first place.)
The issue is that while a macro can help the implementor of the trait out, it does nothing to help the consumer of the trait, who has to know about the explicit variance cast.
I'm a bit late to this, but last year I attempted a number of similar techniques for managing variance in sundial-gc, and it proved to be untenable. At the time I did not see a way forward without #[covariant] or some other form of compiler support.
And it seems to be working. This is fine for my use case, because I need the wrapper struct for other reasons as well (I want to be able to implement inherent methods and std traits for it, which you can't do for associated types).
Pity it requires transmute_copy instead of just transmute because Rust does not assume that A::B<'static> has the same size as A::B<'a>. But it's alright - this particular GAT is required to be Copy anyway, and it's fortunate that the ugliness is nicely encapsulated in a pretty simple wrapper.
Perhaps Rust doesn't assume it, but the fact that lifetimes are a purely compile-time construct directly implies that A::B<'static> necessarily has the same size as A::B<'a>.
No, lifetime specialization is unsound. It's the whole reason why full-blown specialization is unsound as currently implemented, and why min_specialization was developed.
I know, but that's the point: we can't just block specializing on lifetimes, because it's too common and often hidden, even between crates (an example is specializing T on (T, T)).