I've been searching and couldn't find any previous discussion of this specific issue.
GATs are great! I've been using them in one of my projects, and they've gone a long way toward fixing some of my type-system woes. But I've run into an annoying gotcha in the form of variance.
I've made a GAT that generalizes over &'a T and std::cell::Ref<'a, T>. Both of those are covariant in their lifetime argument. But it seems like GATs are always invariant in their lifetime arguments. (Of course, if there is no way to express variance explicitly, this is the only reasonable choice – otherwise, there would be a soundness hole when a consumer relies on covariance and a downstream impl uses an invariant type constructor.)
This means you get a lifetime error from the following innocent-looking code:
```rust
#![feature(generic_associated_types)]

trait Trait {
    type GAT<'a>: Eq;
}

fn eq<T: Trait>(left: T::GAT<'_>, right: T::GAT<'_>) -> bool {
    left == right
}
```
```text
error[E0623]: lifetime mismatch
 --> src/lib.rs:8:13
  |
7 | fn eq<T: Trait>(left: T::GAT<'_>, right: T::GAT<'_>) -> bool {
  |                       ----------         ---------- these two types are declared with different lifetimes...
8 |     left == right
  |             ^^^^^ ...but data from `right` flows into `left` here
```
I would like to mark my GATs as covariant in their lifetime argument (requiring covariance from implementors, and guaranteeing covariance to consumers). But unless I'm missing something, Rust has:
- no way to explicitly refer to variance in code, either to guarantee it (for a specific type constructor) or to require it (as in a trait bound)
- no way to see the variance of a generic type's arguments in the documentation (so, for example, two types can behave differently despite appearing to have identical public interfaces, even without GATs, as in this playground)
- no well-documented way to make wrappers that use unsafe code to override the default variance (the Reference and Nomicon pages on variance don't mention any way to control variance; the Nomicon page on PhantomData suggests some specific usages, but you wouldn't know to look for the page on PhantomData if you were searching for information on variance; and even with that information, I don't see a practical way to relax variance rather than tighten it — a sketch of the tightening direction is just below)
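For what it's worth, here is a minimal sketch (with made-up names) of the tightening direction that the Nomicon hints at: adding a PhantomData of fn(T) -> T, which uses T in both argument (contravariant) and return (covariant) position, drags an otherwise covariant struct down to invariant.

```rust
use std::marker::PhantomData;

// `fn(T) -> T` uses T in both argument and return position, so it is
// invariant in T; the marker forces the whole struct to be invariant too.
struct Invariant<T> {
    value: T,                               // a plain field alone would leave this covariant in T
    _pin_variance: PhantomData<fn(T) -> T>, // tightens the variance to invariant
}

// This now fails to compile, which is exactly the point of tightening:
// fn shorten<'long: 'short, 'short>(x: Invariant<&'long str>) -> Invariant<&'short str> {
//     x
// }
```

Going the other way (loosening) is the part I don't see a practical recipe for.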
Unfortunately, it's not obvious what to do about any of this – I can definitely see the downsides of the obvious approach of "add a noisy, unfamiliar new syntax to control this". Have there been any previous discussions on how to improve this situation (either for GATs specifically or for Rust's handling of variance in general)?
Perhaps it's not well-documented, but there is no way to relax variance, and for a good reason: it would be unsound. What unsafe code can do to get the effect of relaxed variance is to use raw pointers and cast them: for example, instead of &mut T (invariant in T), store a *const T and produce the mutable reference with &mut *(v as *mut T).
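For illustration, a small safe sketch of the variance difference being relied on there; the unsafe part is only needed once you build a mutation API on top of the *const pointer.

```rust
// `*const T` is covariant in T, while `*mut T` (like `&mut T`) is invariant in T.
fn shorten_const<'long: 'short, 'short>(p: *const &'long str) -> *const &'short str {
    p // compiles: covariance lets the pointee lifetime shrink
}

// The same function written with `*mut` does not compile:
// fn shorten_mut<'long: 'short, 'short>(p: *mut &'long str) -> *mut &'short str {
//     p // error: `*mut T` is invariant in T
// }

fn main() {
    let s: &'static str = "hi";
    let p: *const &'static str = &s;
    let _shorter: *const &str = shorten_const(p);
}
```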
Is there a theoretical reason it would be immediate UB? I was assuming that there could be circumstances where you could relax variance as part of a safe abstraction (knowing that the private code inside the module doesn't do anything that would make it unsound). In my case, the thought was to work around the GAT issue by marking the trait as an unsafe trait and requiring that implementors use a type constructor that is actually covariant, then making a wrapper struct around the associated type that makes it officially covariant.
This isn't legal code (yet?), but for the specific case of Eq you could potentially avoid needing to specify variance by implying it in the trait bound:
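Perhaps something along these lines (not valid today; the higher-ranked cross-lifetime bound is the speculative part):

```rust
// Hypothetical: instead of declaring variance, the trait itself demands that
// any two lifetime instantiations of the GAT can be compared with each other,
// so `eq` never needs the two lifetimes to unify.
trait Trait {
    type GAT<'a>: Eq + for<'b> PartialEq<Self::GAT<'b>>;
}
```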
Kotlin has in and out modifiers on generic type parameters that force types to be covariant/contravariant with respect to the marked parameter; couldn't we add something like this? Of course, for this to be sound, trait implementers would have to be forced to use covariant/contravariant types where required.
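For instance, with purely hypothetical syntax where out marks the GAT as covariant in 'a (required of implementors and guaranteed to consumers), the trait from the opening post might become:

```rust
trait Trait {
    // hypothetical syntax: `out` declares GAT covariant in 'a
    type GAT<out 'a>: Eq;
}
```

and the original eq function would then compile: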
```rust
fn eq<T: Trait>(left: T::GAT<'_>, right: T::GAT<'_>) -> bool {
    left == right
}
```
While this would (rightly) be a compile error due to incorrect variance:
```rust
impl Trait for () {
    // error: fn(&'a i32) is contravariant in 'a, contradicting `out`
    type GAT<out 'a> = fn(&'a i32);
}
```
That seems relatively straightforward, as things go. It also provides a clear answer to the question "how do you specify variance bounds in generic functions?": specifically, the only way to specify a type constructor as a generic parameter in the first place is to use GATs, so you express the bound by having it be written in the relevant GAT.
I suppose it would be logical if one could also write in or out on the parameters of regular generic structs, not just GATs. Ideally, that would be how you express the public interface of your struct, and if you didn't specify it, code outside your module would have to assume the parameters are invariant, just like the current assumption for GATs. Unfortunately, that would be a breaking change to current Rust, where I guess the public variance is determined implicitly? Oh dear: in situations like the playground I posted earlier, that means current Rust has a situation where it can be a breaking change for a library to add a new private member to a struct that already has private members. I wonder if there's a possible path to deprecating implicitly exposed variance… I'm probably getting ahead of myself there, though.
Rust chose to infer variance for a reason. IIRC, an analysis showed that explicitly stated variance is almost always gotten wrong. This research paper (referred to from The Guide to Rustc Development - Variance) explains the drawbacks of explicit use-site variance and explicit definition-site variance, and why inferring variance is better. For GATs, however, explicit variance is reasonable.
> Variance inference is a rare case where we do non-local inference across type declarations. It might seem more consistent to use explicit declarations. However, variance declarations are notoriously hard for people to understand. We were unable to come up with a suitable set of keywords or other system that felt sufficiently lightweight. Moreover, explicit annotations are error-prone when compared to the phantom data and fn approach (see example in the section regarding marker traits).
It would work in a manner analogous to lifetime inference: if nothing is specified, the variance is inferred, much as it works now. However, when more advanced uses become necessary, explicit override directions can be given.
The main difference I see (from a user and UX perspective) is that the compiler should, even with explicit annotation, reject unsound variance annotations i.e. even with explicit annotations it should still do a soundness check. Which brings me to the main unknown with that: I'm not convinced that such a "varck" is even decidable by a TM in general. Is it?
Also, a slight bikeshed on the would-be variance specifiers: please no in and out; those have never made any sense to me whatsoever. Which specifies covariance? Which specifies contravariance? And why? What's the link with the terms in and out? There's always this impedance mismatch between what I'd need to write and what I'd want to accomplish.
Rather, I'd be in favor of co and contra; at least those would be immediately obvious. If a specifier is also needed to explicitly make something invariant, then perhaps inv, although that might be a bit ambiguous to a human reader, as it could in principle mean "inverse" (even though that meaning makes no sense in this context).
No, variance is about whether a subtyping relationship between two types T and U implies another subtyping relationship between A<T> and A<U>.
&'a &'a (): 'a, &'a mut &'a mut (): 'a, and fn(&'a ()): 'a are all true, but that doesn't tell us anything about variance: &'a &'a () is covariant with respect to 'a, &'a mut &'a mut () is invariant with respect to 'a, and fn(&'a ()) is contravariant with respect to 'a.
You sure you don't mean fn() -> &'a () for contravariance? If I accept a fn() -> &'a () for some fixed lifetime, I can provide a fn() -> &'static (), because every return value of the pointer will live for at least as long as whatever 'a is.
In general, input position is contravariant, output position is covariant; this is also why &'a T is covariant in T. &mut T is invariant because it's both an input and an output.
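A small illustration of that rule, nothing GAT-specific:

```rust
// Covariance in output position: a longer-lived reference can be handed out
// where a shorter-lived one is expected.
fn covariant<'a>(s: &'static str) -> &'a str {
    s
}

// Contravariance in input position: a function accepting any lifetime can be
// used where a function accepting only `&'static str` is expected.
fn takes_any(_: &str) {}

fn main() {
    let short: &str = covariant("hello");
    println!("{short}");

    let f: fn(&'static str) = takes_any;
    f("world");
}
```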
FWIW, I agree that explicit variance makes sense for GATs. Though...
To me, co seems fairly non-obvious, even as someone who knows what 'covariant' and 'contravariant' mean. And the vast majority of Rust users don't know what those terms mean.
I don't have any great alternatives, though.
Here is one not-so-great possible alternative:
Invent a SubtypeOf trait, and write something like this:
```rust
trait Trait {
    type GAT<'a>: Eq
    where
        // `'long: 'short` should imply that GAT<'long> is a subtype of
        // GAT<'short>, i.e. the GAT is covariant in its lifetime parameter.
        for<'short, 'long: 'short> Self::GAT<'long>: SubtypeOf<Self::GAT<'short>>;
}
```
However, this would require (1) the ability to write lifetime bounds in HRTBs (at least if you don't want an ugly workaround) and (2) the ability for the compiler to actually understand subtyping via trait bounds.
Regarding how to spell it, attributes could work:

```rust
struct GhostFree<#[covariant] T>;
```

and so on for any place with potential generic parameters.
This would only be able to add constraints / reduce variance, erroring (or requiring unsafe) when it would conflict with the variance of any of its fields:

```rust
struct Fails<#[contravariant] T>(T); // error: loosening variance is tricky and error-prone, and would require `unsafe`
```
Both it being a mouthful (no in / out shorthands) and requiring unsafe for cases where variance is loosened seem appropriate for a mechanism that is easy to get wrong, leading to unsoundness.
In an ideal world, NonNull<T> would be #[invariant], and users of it who know what they're doing (mainly, that their pointer either represents ownership, or that it disallows mutation of the pointee) would deliberately opt into covariance (#[covariant]) through unsafe.
I find the current situation, where NonNull<T> may be used as a niche-optimized *mut T even for &mut-like structs, to be very error-prone.
The way I have incorporated explicit variance into a trait in the past is by requiring the implementor to provide an upcast implementation compatible with the variance that I need.
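Roughly, a minimal sketch of what I mean; all the names here are made up for illustration, and the where Self: 'a bounds are only there to keep the usual GAT bound requirements happy:

```rust
use std::marker::PhantomData;

trait Shorten {
    type Gat<'a>: Eq
    where
        Self: 'a;

    // The contract: this must be a pure lifetime shortening, i.e. exactly the
    // conversion that covariance in 'a would otherwise provide for free.
    fn upcast<'short, 'long: 'short>(value: Self::Gat<'long>) -> Self::Gat<'short>
    where
        Self: 'long;
}

struct ByRef<T>(PhantomData<T>);

impl<T: Eq> Shorten for ByRef<T> {
    type Gat<'a> = &'a T where Self: 'a;

    fn upcast<'short, 'long: 'short>(value: &'long T) -> &'short T
    where
        Self: 'long,
    {
        value // fine: `&'a T` really is covariant in 'a
    }
}

// The `eq` from the opening post, made to work by upcasting `left` to
// `right`'s (shorter) lifetime before comparing.
fn eq<'r, 'l: 'r, T: Shorten + 'l>(left: T::Gat<'l>, right: T::Gat<'r>) -> bool {
    let left: T::Gat<'r> = T::upcast(left);
    left == right
}

fn main() {
    let (a, b) = (1, 1);
    assert!(eq::<ByRef<i32>>(&a, &b));
}
```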