Getting more testing of unstable features

A tracking issue for a relatively small feature just launched an interesting conversation about a stabilization conundrum:

  • There are plenty of features/APIs that are unstable, but are too minor for a library to consider switching to nightly Rust just for their sake. And with stable Serde, more and more of the ecosystem is using the stable compiler.
  • We’re often flummoxed when considering potential stabilization, because we don’t have much feedback on the feature to go on.

It’s tempting to introduce some level of stability between unstable and stable, where we allow you to use the feature (behind a flag) on the stable compiler, but retain the freedom to change or remove the feature if circumstances demand it. The idea would be to do this only for features that are very close to stabilization.

But I can’t see my way around a fundamental issue here: it’s still possible for you to upgrade to a new stable version of Rust, and find that one of your transitive dependencies no longer compiles. This kind of situation is the root of dependency hell, and we’ve tried to architect policies precisely to avoid it. (With elaborate source for dependencies, and a semver analysis tool, we could get close to an iron-clad guarantee here).

Unless I’m missing something, this line of reasoning indicates that stable means stable; if we wish to ensure that you never dread updating Rust, we can only ship things on the stable compiler that we’re willing to support indefinitely.

Is there another way of seeing things, or some other avenue we could take to ensure that features get sufficient feedback?

4 Likes

I think that as the community grows further you will get more feedback. Just give it some more time. :slight_smile:

I love the stability guarantees that come with the current policies, and I would hesitate to change them. But if I force myself to think about it, the beta channel could be an intermediate playground where feature gates could be relaxed to some extent. The barrier to using the beta channel is lower than to using nightly. With nightly comes the risk that random bad things may happen on any given day; that’s quite a barrier. With beta plus loosened feature gates there would only be the possible dependency hell you described, but much less risk of completely random daily breakage.

Maybe someone else wants to extend that idea, but my point is: it should be on beta only - if at all. :wink:

EDIT: This would also increase the risk that the ecosystem on the beta channel gains so much momentum that it locks down the ability to develop rustc. So I think you should be careful to stop it from gaining too much momentum and becoming too significant.

3 Likes

In the release announcement post, you could always have a section like "stuff to look forward to… come help us test them before they land on stable". That would help get people excited about what’s coming; some of them may get impatient and want a taste, and there will (hopefully) be some who just want to volunteer.

9 Likes

What @scott and I are thinking is not messing with "stable means stable", but instead changing what proves you’re in the stable "safe zone". Our proposal is that lang features propagate, so downstream needs to approve of upstream’s extensions.

Currently, if you use a stable compiler, you are always safe. If you use an unstable compiler you are not always safe, because your dependencies may use language features unbeknownst to you. But forcing users to download new tools is perhaps too much of a hurdle. With our proposal, we still preserve the property that downstream ensures, by default, that upstream isn’t misbehaving, but the barrier to intentional entry is just a bit lower.
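
To make that concrete, here is a purely hypothetical sketch (the manifest key below is invented and is not proposed syntax): if some crate deep in the dependency graph turns on a gate such as #![feature(specialization)], the top-level crate on a stable toolchain would only build if it explicitly acknowledged that gate, for example:

    # Hypothetical downstream Cargo.toml. The key name
    # "acknowledged-unstable-features" is made up for this sketch; the point
    # is only that the approval lives downstream, not upstream.
    [package]
    name = "my-app"
    version = "0.1.0"

    # Without this line, pulling in a dependency that uses the gate would be
    # rejected, even though the gate itself is declared upstream.
    acknowledged-unstable-features = ["specialization"]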

I think preventing accidental unwanted instability is much more important than making all instability painful, but since the latter can enforce the former, the distinction has been muddled in the past.


Two added bonuses

  • Stopping the propagation from the stdlib (with e.g. a signature from mozilla saying “don’t propagate”) seems a bit cleaner than jiggering flags, and allows stdlib-aware Cargo with stable rustc to more easily rebuild the standard library.

  • Nightly/beta actually can become more strict if we make the same propagation rules apply to all. This is good because it decouples ongoing guarantees from “newness”: e.g. you hit some bug and need to use an unstable compiler temporarily, but don’t want to accidentally start transitively depending on random features if you bump a dependency.

In principle, new language features can be tested on rustc/libstd, which have unique versioning/stability properties, at least for now, as long as there’s no requirement for Rust 1.N to be buildable from Rust 1.(N - 5) or something. This needs to be done proactively though, e.g. by creating issues like "find all patterns in the code that can be replaced with let x = loop { ... } and rewrite them using this construction".
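
For concreteness, here is a minimal sketch of the kind of rewrite such an issue would ask for, using break-with-value (gated as loop_break_value at the time; the function names are made up for illustration):

    // Before: the result has to escape the loop through a binding declared
    // outside of it.
    fn first_power_above_old(n: u32) -> u32 {
        let mut p = 1;
        let result;
        loop {
            if p > n {
                result = p;
                break;
            }
            p *= 2;
        }
        result
    }

    // After: `loop` is an expression, and `break` carries the value out.
    fn first_power_above(n: u32) -> u32 {
        let mut p = 1;
        loop {
            if p > n {
                break p;
            }
            p *= 2;
        }
    }

    fn main() {
        assert_eq!(first_power_above_old(9), 16);
        assert_eq!(first_power_above(9), 16);
    }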

The downside is a larger number of #![cfg_attr(stage0, feature(xxx))] attributes and therefore less trivial bootstrap compiler updates, plus a one-cycle (?) delay between features appearing on nightly and in the bootstrap compiler. (Many times I wanted to use some shiny new features in rustc, but didn’t due to the extra features/cfgs. I’d certainly use them if it were officially encouraged.)

1 Like

But won't that mean that introducing a new unstable feature in your crate is a breaking change, in the sense that everything downstream from you now needs to opt in?

1 Like

That's a good point; we've wanted to do more advertising of what's happening in early development anyway.

1 Like

Maybe this could be done for something that nothing depends on transitively, for example executables? Is it very bad when some tool installable with cargo install stops compiling on stable? Hm, probably yes...

If there were a way to enable the use of unstable features in stable compiler versions, there would be less disincentive to experiment with them:

  • Breakage can occur “only” every six weeks rather than at any time.
  • Less potential implementation instability.
  • No need to download another compiler package (even if rustup makes this pretty easy).

Hypothetical alternative: crates that want to use unstable features have a special Cargo.toml key, which translates to a rustc command line flag. (This could also be on a per-Cargo-feature basis.) Stable crates are prevented from depending on unstable crates, so you still have to opt in to potential breakage.
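
For illustration only, the manifest side of that could look something like the following (the key name is invented and is not real Cargo syntax):

    # Hypothetical Cargo.toml of a crate that opts into instability.
    [package]
    name = "fast-parser"     # illustrative crate name
    version = "0.3.0"

    # Cargo would translate this into the corresponding rustc flag, and would
    # refuse to let a crate *without* such a key depend on this one.
    allow-unstable = ["specialization"]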

1 Like

If this also included a rough estimate of when the feature might be stabilized, that could help developers decide when to start using it on their master branch, with the plan that it would likely be in stable Rust by the time their own next release ships.

1 Like

The problem is management of momentum.

For unstable features the rust team would like some exposure to get some feedback.

But if it gets out of hand, a large fraction of the community could go all-in and take unstable features for granted. That would add a penalty to changing unstable features, which would slow down the development of Rust itself.

So the primary goal IMO is to guarantee that the rust team can always iterate quickly and without much hesitation on unstable features. This is what’s better for the community too, in the long run.

We only want to give unstable features enough exposure that the rust team gets sufficient feedback, and not a bit more. Otherwise we will all pay a very high price for locking down the rust team’s options so as not to step on anyone’s toes.

Approaches to language experimentation that IMHO worked:

  • JS experiments with new features via source-to-source translators (Babel, previously CoffeeScript). It’s fine because everything compiles down to the lowest common denominator and can interoperate more or less seamlessly.

  • Swift offers auto-conversion to the newer syntax. Everyone is basically forced to upgrade, so there are no laggards. It’s a bit annoying, but the fragmentation is temporary.

  • PHP has a very long deprecation period, during which the feature is gradually made harder to use. Misfeatures are removed after all software using them is obsolete.

3 Likes

The problem with using beta for this is that beta’s job is to be a release candidate for the next stable. This would change its raison d’etre significantly.

Of course, we could always have four release channels…

2 Likes

True. And I’m sceptical myself about loosening feature gates even on beta. My only argument is that beta would be more appropriate than stable, because:

  • the current meaning and promises of stable are preserved (I see this as a holy cow ^^) - similar to your objection to changing the ‘reason of existence’ of beta, but I think losing the stability guarantees on stable would be an even more radical and problematic shift of meaning. Also, ‘beta’ already has a connotation of ‘may break’ and ‘may change’.
  • you don’t have to wait as long to get early feedback on new features, since they should hit beta ~6 weeks before stable, right? That’s quite a latency difference in terms of iterating on a new feature and waiting for feedback. IMO this alone is a knock-out criterion against doing it on stable.

If you don’t want to muddy the purpose of beta, but introduce a fourth channel instead, its logical place should be somewhere in between beta and nightly.

I am only speaking hypothetically here - I still believe it’s not worth it to loosen feature gates at all. Less radical options like the suggestions from @tshepang should be explored first, before potentially opening some kind of Pandora’s box. :slight_smile:

The root of the issue is avoiding this problem:

it's still possible for you to upgrade to a new stable version of Rust, and find that one of your transitive dependencies no longer compiles.

So as long as rustc updates do not break crates using older versions of unstable features, everything is fine. For example, one could allow specialization on stable like this:

#![feature(specialization, 1.16.0)]

which says: enable the specialization feature with the semantics it had in Rust 1.16.0. All future compilers would need to support this forever in order to keep crates.io usable. If the compiler team is willing to commit to this, I think it would be ok.
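
For reference, a minimal sketch of what the specialization gate enables (the trait and impls are made up for illustration; today this requires a nightly compiler, and under the proposal above the versioned gate would pin down exactly these semantics):

    #![feature(specialization)]

    trait Describe {
        fn describe(&self) -> String;
    }

    // Blanket impl for every type, with a `default` method that more
    // specific impls are allowed to override.
    impl<T> Describe for T {
        default fn describe(&self) -> String {
            "some value".to_string()
        }
    }

    // Specialized impl for String, overriding the blanket one.
    impl Describe for String {
        fn describe(&self) -> String {
            format!("the string {:?}", self)
        }
    }

    fn main() {
        println!("{}", 42.describe());               // "some value"
        println!("{}", "hi".to_string().describe()); // "the string \"hi\""
    }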

Two notes:

  • future Rust compilers might only support Rust version >= x.y.z, so they might be able to start with a clean slate.

  • most features close to stabilization don't change that much across releases (or at all), e.g., specialization, impl trait, function traits,... so at least for "stable unstable features" the burden of maintaining backwards compatible implementations in the compiler might not be that high as long as the compiler is well refactored for this purpose.

And a remark:

  • This brings up again the topic of the relationship between the rustc version and whether some crates compile or not (e.g. the specialization example above wouldn't work with a 1.14.0 compiler), and in particular, that crates.io doesn't know which version of rustc is required to compile which crates... Anyhow, I could imagine a user passing a special flag to rustc via cargo that says "do not enable unstable features", or "only provide the feature behavior for rustc version >= x.y.z", to speed up compilation times.

That sounds to me like a feature :)! Using a new feature after it's stable won't be a breaking change, but using it while it's unstable will be. So libraries can freely adopt new stable features (necessary for the continued evolution of the language), while regular semver bounds keep out extra instability (necessary for avoiding dependency hell).


As an aside, I've been musing on an idiom for "provisional/experimental releases" [I suppose "release candidate" is the normal term for this] on crates.io. Imagine this wonderful scenario:

  1. Library author sees/designs cool new feature
  2. Library author makes release candidate using new feature
  3. Downstream tries out new RC, and reports back to library author yay/nay
  4. Library author reports back to Rust yay/nay on new feature
  5. Rust team, with this evidence, stabilizes the feature with confidence
  6. RC becomes actual release of library
  7. Experimental branches downstream can be merged; more cautious users can try out new library version + Rust version with confidence.

This story leads me to refine the propagation rules a bit: library authors should merely assert that there exists a build plan for their library (e.g. a non-normative lockfile) in which only whitelisted features are used, whereas final consumers should assert that in all build plans, only features on their whitelist are used.

For example, if a library author has their own project using the library, and they add a new feature to their library (internal branch or RC), they shouldn't need to fork other libraries in their project just to add the new feature to the whitelist. A (demo app -> 3rd-party library -> my library) dependency chain would necessitate this with my original plan.

@comex Also thanks, your point about separating implementation instability from design instability is the linchpin to why the status quo is too painful; I tried and failed to grasp any phrasing for it, let alone such a succinct one, when writing my first response.

I think there are several potential use cases that need to be considered:

  • An unstable feature allows for more efficient code, but it can be emulated with a performance cost in stable code. In this case, a library might want to offer both versions and let users choose between the implementations depending on their stability requirements (a sketch of this pattern follows at the end of this post). Likewise, this needs to be propagated through transitive dependencies (i.e. a library depends on a library depends on a library).
  • A feature is stable on Rust >= x, but a library offers an implementation based on an unstable feature to support compiling on Rust < x. Of course, this won't be too useful without Cargo learning about rustc versioning.

On a side note, the first case reminds me a lot of the safe/unsafe split. There are a lot of bits of unsafe code (like Vec::get_unchecked) that can be emulated in safe code with a performance cost.
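
The first case can be sketched with the pattern some crates already use: an off-by-default Cargo feature (here called "nightly", declared as nightly = [] in the library's Cargo.toml) that selects between a gated implementation and a stable emulation. slice::array_chunks is used purely as a stand-in for whichever unstable API is relevant, and all names are illustrative:

    // The unstable compiler feature is only enabled when the consumer turns
    // on the library's "nightly" Cargo feature.
    #![cfg_attr(feature = "nightly", feature(array_chunks))]

    /// Sums adjacent pairs. The nightly path iterates over fixed-size
    /// `&[u64; 2]` chunks; the stable path does the same job via
    /// `chunks_exact`.
    #[cfg(feature = "nightly")]
    pub fn sum_pairs(data: &[u64]) -> Vec<u64> {
        data.array_chunks::<2>().map(|&[a, b]| a + b).collect()
    }

    #[cfg(not(feature = "nightly"))]
    pub fn sum_pairs(data: &[u64]) -> Vec<u64> {
        data.chunks_exact(2).map(|pair| pair[0] + pair[1]).collect()
    }

Propagation through transitive dependencies then looks like ordinary Cargo feature forwarding: a middle layer would typically re-export the flag as its own feature, e.g. nightly = ["lower-lib/nightly"].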

When you say that sounds like a feature, do you mean that library authors would be expected to issue a major version bump when introducing new unstable feature dependencies? If so, that would be an extremely strong disincentive to do so, thereby defeating the main goal of the effort.

To be clear, my main point is that whatever we do, we want there to be a clear notion of "fully stable Rust" which allows you to update the compiler and semver-compatible versions of libraries without fear of breakage.

This seems to suffer from the same problem as @Ericson2314's proposal, namely that introducing an unstable feature to a previously stable crate is a breaking change for all of its clients, which is a very strong disincentive.