I've sometimes regretted going with inference. It might be interesting to experiment with an opt-in explicit system (for example, by adding some crate annotations) that std could use to prevent this sort of thing in the future.
Of course, one very simple way to do it would be to add annotations that assert the variance will be such-and-such, similar to what we do in the unit tests (though those just dump out the variance). That is, keep variance inference, but assert that the results are what we expect.
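As an aside, you can already get a crude version of this assertion on stable today, without any new annotations, by writing a function whose body only type-checks under the expected variance. This is a sketch with a made-up `Slice` type, not anything from std: the coercion from `'static` to a shorter lifetime compiles only while `Slice` stays covariant in `'a`, so a field change that made it invariant would turn into a compile error.

```rust
// Hypothetical example type, covariant in 'a because it only holds &'a [T].
struct Slice<'a, T> {
    data: &'a [T],
}

// Compile-time variance assertion: coercing Slice<'static, T> to
// Slice<'short, T> is only legal if Slice is covariant in 'a. If a
// later edit (say, adding a Cell<&'a T> field) made it invariant,
// this function would stop compiling.
fn assert_covariant<'short, T>(s: Slice<'static, T>) -> Slice<'short, T> {
    s
}

fn main() {
    let xs: &'static [i32] = &[1, 2, 3];
    let t = assert_covariant(Slice { data: xs });
    println!("covariant coercion compiled; {} elements", t.data.len());
}
```

The downside, of course, is that this only asserts one direction (covariance), and you have to remember to write it at all, which is exactly the maintenance question.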
The way I would want to do it is this: no annotation means covariant (or contravariant, for lifetime parameters). You can add an annotation to indicate otherwise. At the crate level, we assert that all such annotations must be present. This would then give us some idea of the maintenance burden.
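To make that concrete, a purely hypothetical strawman (none of this syntax exists; the attribute names are invented for illustration) might look like:

```
// strawman, not real Rust
#![deny(implicit_variance)]        // crate-level opt-in: unannotated
                                   // non-default variance is an error

struct Items<'a, T> {              // no annotation: default variance,
    data: &'a [T],                 // covariant in 'a and T
}

#[variance(invariant(T))]          // explicit: we expect inference to
struct Slot<T> {                   // find T invariant here
    value: std::cell::Cell<T>,
}
```

The point is just that the annotation records intent, and the crate-level switch is what surfaces the maintenance cost.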
Hmm, so an opt-in lint is actually an interesting point in the design space here. The idea would be to add some way to signal when you expect the variance to stray from the default (an annotation), and then have a lint (allow by default) that triggers whenever that is the case. We could also do the same for types that cannot be Send or Sync (in that case, the opt-in indicating that this is expected would be a redundant `impl !Sync`, I guess, as we sometimes add).
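For reference, `impl !Sync for Foo {}` is nightly-only (the `negative_impls` feature), which is why it only shows up inside std. The stable idiom for deliberately opting out, sketched here with a hypothetical `Handle` type, is a `PhantomData` of a raw pointer; the lint being discussed would then treat a marker like this as the "yes, I meant it" signal:

```rust
use std::marker::PhantomData;

// Hypothetical single-threaded handle. PhantomData<*const ()> makes it
// neither Send nor Sync on stable Rust, since raw pointers are neither.
// In std the same intent can be written explicitly (nightly-only):
//     impl !Sync for Handle {}
struct Handle {
    id: u32,
    _not_send_sync: PhantomData<*const ()>,
}

// Compile-fail check: uncommenting this should be rejected, which is
// the "assertion" that Handle is not Send.
// fn requires_send<T: Send>(_: T) {}
// fn check(h: Handle) { requires_send(h); }

fn main() {
    let h = Handle { id: 7, _not_send_sync: PhantomData };
    println!("handle {}", h.id);
}
```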
This would then mean that small projects don't have to worry about this, but larger libraries could turn on the lint to avoid accidental regressions.