Std variance testing

There have been quite a few instances in std recently of variance not being right, or changing unexpectedly after a patch.

Variance is kind of invisible and hard to think about, and that's a drag; if it's hard for us with std, it's got to be super-hard for others to obey semver in the presence of variance. Better tools would help. Here's what I think would be swell, at least for std:

First, a ‘variance-lock’. We have the compiler dump the variance of everything in std to a file, and commit it. Every test run we re-run the variance dump and compare. Similar to other tools we’ve got.
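One cheap way to pin down a covariance claim without any new tooling is a coercion function that only compiles if the variance actually holds. This is a sketch, not an existing std test; the function name is mine:

```rust
// Compiles only if Vec<T> is covariant in T: a Vec of 'static
// references can be handed out where a Vec of shorter-lived
// references is expected.
fn assert_vec_covariant<'a>(v: Vec<&'static str>) -> Vec<&'a str> {
    v
}

// For an invariant type like std::cell::Cell<T>, the analogous
// function would be rejected by the compiler, so invariance has to
// be checked with a negative (compile-fail) test instead.

fn main() {
    let v = assert_vec_covariant(vec!["hello"]);
    println!("{}", v[0]);
}
```

A dump-and-diff lock catches *changes*; coercion functions like this additionally document which variance each type is supposed to have.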

Second, a variance-policy: when we add new features, what should the variance be? Even if it were as simple as "everything must be invariant", you would then have to take some manual step to loosen the variance. Tool-enforced.
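For what it's worth, "everything invariant until deliberately loosened" can already be expressed by hand today. A sketch (the wrapper name is mine): `PhantomData<fn(T) -> T>` puts `T` in both argument and return position, which forces invariance regardless of how the type otherwise uses `T`:

```rust
use std::marker::PhantomData;

// Invariant in T no matter what the fields would otherwise infer:
// `fn(T) -> T` is contravariant and covariant in T at once, and the
// combination is invariance.
struct Invariant<T> {
    value: T,
    _marker: PhantomData<fn(T) -> T>,
}

impl<T> Invariant<T> {
    fn new(value: T) -> Self {
        Invariant { value, _marker: PhantomData }
    }
}

fn main() {
    let x = Invariant::new(42);
    println!("{}", x.value);
}
```

A policy plus tooling would make this the default rather than something each type has to opt into manually.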

I’d be very curious to see the variance of all our types side-by-side, what kind of inconsistencies we have and don’t know about.


Big thumbs up from me. I think it would also be helpful to have a similar dump for Send and Sync and for dropck-related info exposed through PhantomData and #[unsafe_destructor_blind_to_params].
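For Send and Sync, at least the positive direction is easy to lock down today with the usual bound-check helpers. A sketch, assuming nothing beyond stable Rust:

```rust
use std::sync::Arc;

// Compile-time auto-trait checks: a call to one of these generic
// functions only compiles if the bound is satisfied.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<Arc<u8>>();
    assert_sync::<Arc<u8>>();
    // Either of these lines would be a compile error, since Rc is
    // neither Send nor Sync:
    // assert_send::<std::rc::Rc<u8>>();
    // assert_sync::<std::rc::Rc<u8>>();
    println!("auto-trait checks passed");
}
```

The dump would still be valuable for the negative direction (asserting a type is *not* Send/Sync) and for catching types nobody thought to write an assertion for.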

Inferred variance is helpful in that it pretty much just does the right thing, but I think we should seriously consider invariant-by-default with explicit opt-in to covariance for Rust 2.0, possibly via a CoerceCovariant trait similar to CoerceUnsized.

We should also look into the variance implications of impl Trait.


We already have some crude support for variance tests (example).

Right now it's invariant: that's the safest assumption and imposes no restrictions on the definition side. Anything else would have to be checked more thoroughly (and disallowed if it doesn't match what's being exported).

I've sometimes regretted going with inference. It might be interesting to experiment with an opt-in explicit system (for example, by adding some crate annotations) that std could use to prevent this sort of thing in the future.

Of course, one very simple way to do it would be to add annotations that assert the variance will be such-and-such, similar to what we do in the unit tests (though those just dump out the variance). That is, keep the inference, but assert that the results are what we expect.
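One shape such an assertion could take on stable Rust today, without any compiler support, is a `compile_fail` doctest: the test passes precisely when the coercion is rejected, which is what invariance guarantees. A sketch (the item name is mine):

```rust
/// Assert that Cell<T> stays invariant in T: shrinking the lifetime
/// must not compile, so the doctest below is marked `compile_fail`.
///
/// ```compile_fail
/// fn shrink<'a>(c: std::cell::Cell<&'static u8>) -> std::cell::Cell<&'a u8> {
///     c // ERROR: Cell<T> is invariant in T
/// }
/// ```
pub fn cell_invariance_assertion() {}

fn main() {
    // The assertion itself lives in the doctest; running this is a no-op.
    cell_invariance_assertion();
    println!("ok");
}
```

It's clunky compared to a first-class annotation, but it shows the assertion style is expressible right now.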

The way I would want to do it is this: no annotation means covariant (or contravariant, for lifetime parameters). You can add an annotation to indicate otherwise. At the crate level, we assert that we want all these annotations to be present. This would then give us some idea of the maintenance burden.

Hmm, so an opt-in lint is actually an interesting point in the design space here. The idea would be to add some way to signal when you expect the variance to stray from the default (an annotation), and then have a lint (allow-by-default) that triggers whenever that is the case. We could also do the same for types that cannot be Send or Sync (in that case, the opt-in indicating that this is expected would be a (redundant) impl !Sync, I guess, as we sometimes add).

This would then mean that small projects don't have to worry about this, but larger libraries could turn on the lint to avoid accidental regressions.


We could of course do this in a side-tool. But it seems like it might be easy enough / better to add as a lint. Or maybe we start with the tool and consider the lint later when we've had more data and experience.

I like the lint idea, but I think I'd want the syntax to look more built-in: it should read like an optional type signature rather than some second-class annotation.
