I seem to be relatively alone with this view, but from a user perspective, I’m really against any kind of semi-stabilization.
The nightly system has its issues, but I like the way it works because:
- it draws a very clear line between ‘this is stable and definitely won’t change’ and ‘this is experimental and under development’
- it is comparatively easy to evaluate and understand code snippets and crates: if something uses any feature gates, I know it requires the nightly compiler and relies on experimental functionality; otherwise I know it’s stable and can be relied upon long term (see the snippet after this list)
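As a quick illustration of that “clear line”, the nightly requirement is visible right at the crate root. This is only a minimal sketch; the specific gate used here (`never_type`) is just an example of any unstable feature:

```rust
// A crate-level feature attribute is the giveaway: on a stable or beta
// compiler this line is a hard error (E0554), so the crate is nightly-only.
#![feature(never_type)] // `never_type` is just one example of an unstable gate

// `!` in an ordinary type position is what the gate above unlocks;
// without it, this signature would not compile even on nightly.
fn parse_len(input: &str) -> Result<usize, !> {
    Ok(input.len())
}

fn main() {
    // `unwrap_or` never needs its fallback here, since the error type is `!`
    let n = parse_len("hello").unwrap_or(0);
    println!("parsed: {}", n);
}
```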
Even with this distinction, the feature system is already quite opaque and confusing for users less familiar with the language and its development process. It’s also quite hard to get an overview of the existing features and what they are used for. (There is the Unstable Book, but I don’t think many people know about it; it makes no distinction between minor changes and major RFCs, and many features lack a decent description.)
Introducing another layer of stabilization (with new attributes) risks making this quite a bit more confusing. A “this is pretty stable and probably won’t change much, but there is no guarantee” tier seems like a suboptimal system to me.
It also risks features being stuck in a “semi-stable” status for extended periods of time, becoming de facto stable due to the amount of code written against them, and reducing the pressure to get things finished because they already work on stable anyway.
A cautionary tale here could be Haskell, where the numerous (sometimes incompatible) language extensions make checking which ones are enabled a prerequisite for understanding a lot of code; for me they are one of the worst aspects of an otherwise great language.
It’s already comparatively trivial to use the nightly compiler in Rust (if you pin the compiler to a known good version, as sketched below), and many have used it for extended periods of time.
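For example, pinning a project to a known-good nightly is a one-time rustup setup (the nightly date below is only a placeholder):

```sh
# install a known-good nightly and pin this project's directory to it
rustup toolchain install nightly-2019-06-01
rustup override set nightly-2019-06-01

# alternatively, commit a `rust-toolchain` file containing just the line
# "nightly-2019-06-01" so everyone building the project gets the same compiler
```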
Specifically with regard to async/await, I’m quoting my Reddit post on the topic below:
I think an “early adopter stabilization” would be the worst path to take right now.
To me it would feel like punting on the (major) question of the syntax, which will be final for years at least, and like rushing something out the door just to have anything stable.
Stabilizing early with a stop-gap syntax and usability issues won’t help anyone, and will only create problems:
- documentation issues with posts/docs/SO answers describing the initial syntax
- frustration due to usability gaps
- a “let’s hold off until things are actually stable stable” attitude from the ecosystem (tokio, for example)
I understand the urge to get things stabilized. But this is a hard engineering and language-design problem. In other ecosystems it has taken years of discussion, implementation, and tuning to reach a solution (e.g. the recently accepted C++ coroutines).
I always thought that the rush to stabilization was a bit ill-advised. Let the feature bake on nightly as long as it needs to. Iron out the usability issues, and gather practical experience to uncover API/design issues. Try to get error messages as helpful as they can be without additional aids like existential types.
I also understand that this is a technically challenging thing to work on, hence the lack of contributors. The back and forth with the Pin and Future APIs, the syntax discussions, etc. are probably also very exhausting and demotivating for contributors.
But early adopters have lived with nightly usage for years and can stick to nightly for now. After all, that’s exactly what nightly features are for. It’s already really easy to use experimental features in Rust compared to most other language ecosystems.
Slow and steady wins the race for me on this one.
- let things bake on nightly until involved parties (lang team, compiler devs, major ecosystem stakeholders) are sufficiently happy with it, at least for the medium term
- ensure good documentation in the book and in std, prepare blog posts, etc.
Then stabilize.
If this only happens in 2020, so be it.