(This is not a new topic; at the bottom, I link to the past threads I found on this.)
With Rust, we strive for Stability without Stagnation.
- The Project has only ever supported the latest version of the toolchain.
- We use the Edition system so language and tooling evolution is opt-in, avoiding the need for people to stay on old versions because of breaking changes.
- We keep a short development and release cycle, which lets us quickly get feedback on changes and turn around fixes, so if there is an upgrade blocker, it doesn't have to last for long.
However, there still might be situations where an LTS makes sense:
- Costs for alternative Vendors to validate the toolchain, like Ferrocene
- Costs for companies to re-validate the new toolchain against their specific application
  - Think mission-critical applications where sending bad data over the wire can maim or kill someone
- Help alternative Vendors coordinate on versions and likely consolidate their backporting work
- Help the community coordinate with each other and with alternative Vendors on meaningful MSRVs to support
  - This doesn't mean everyone has to have an MSRV
  - This doesn't mean everyone has to support the MSRV in their main branch; they can also do LTSs
- I suspect some unstable users, like Linux, would benefit from extended support so they can control when they pay the upgrade cost (yes, that means we'd be giving some tacit nod to RUSTC_BOOTSTRAP if this were a high motivation / priority)
If we moved forward with an LTS, what problems could we run into?
To start the conversation, some I came up with include:
- Process for even deciding on this, as it affects nearly all teams
- Needing contributors across affected teams to backport and review changes
  - This can come up at any time, disrupting the flow
  - This isn't a company where we can direct people's work
  - Have a Foundation pool of money that companies can pay into to fund people, separate from project work, to support these efforts?
- Increased load on the release team
  - Similarly, leverage the "LTS fund"?
- What category of problems gets backported (bug fixes, security fixes, etc)?
  - What is our cost and risk profile for selecting what can be backported?
  - Backporting a trait solver to resolve a soundness issue would likely be costly and increase the risk of regressions
- Aligning with Editions helps with our overall "consolidation" messaging (documentation, etc), but Edition releases can be a bit chaotic, things slip past, and biting off 3-year cycles from the beginning may be a bit much
- Annual is easy to predict and align on, but our point-release cycle doesn't line up with calendar years
  - As an aside, an annual LTS could be helped by stretching our releases from 6 weeks to 2 months. This would also make releases more predictable, not requiring a lookup in a table or becoming an expert in calendar math.
  - We'd also need to answer the question of which release in the year would be the LTS
  - A "Rust year" of 10 releases is approximately a year and makes it easy to predict from a numbering perspective (every 1.X0 release)
- Support N versions back, or only an LTS?
  - Limiting it to just an LTS is a smaller commitment for us while we see how this works
- Should we support overlapping LTSes like in RFC #2483?
  - Limiting it to just one LTS at a time is a smaller commitment for us while we see how this works
- As rust-analyzer has features coupled to one version (the latest), will we need to make adjustments there, putting more burden on that team?
- How do we minimize people over-indexing on this as a signal for how stable and safe it is to upgrade Rust?
  - Maintaining an MSRV is hard enough, and this pushes people towards having an MSRV
    - Support for an LTS depends on your package's maturity and intended audience
- How do we minimize the stagnation costs this may cause?
  - One angle of this: Stable Rust has become powerful enough that it's hard to find people willing to test nightly features; if people overly rely on an LTS, who will test stable?
  - With RFC #3537, it would be easier to maintain an LTS of your package and have people use either the LTS or latest
  - People can use polyfills, whether via cfg(version) and cfg(accessible) once they are stabilized, or by leveraging the MSRV-aware resolver being on by default from RFC #3537 (see the sketch just below)
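To make the polyfill option concrete, here is a minimal sketch, assuming cfg(version) from RFC 2523 were available (today it is unstable and needs nightly plus `#![feature(cfg_version)]`); the 1.85 cutoff and the use of `u32::midpoint` are purely illustrative:

```rust
// Hedged sketch of a version-gated polyfill: expose one function, backed
// either by a newer std API or by a local fallback, depending on which
// compiler is building the crate.

// On new-enough toolchains, delegate to the std implementation.
#[cfg(version("1.85"))]
pub fn midpoint(a: u32, b: u32) -> u32 {
    u32::midpoint(a, b)
}

// On older toolchains, fall back to an equivalent local implementation
// (this formula computes the midpoint without overflowing).
#[cfg(not(version("1.85")))]
pub fn midpoint(a: u32, b: u32) -> u32 {
    (a & b) + ((a ^ b) >> 1)
}
```

Downstream code just calls `midpoint(a, b)` and keeps compiling on both old and new toolchains; only the implementation behind it changes.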
I think the major past discussion missing here is the "Add new channels for long term support (LTS) releases" RFC, from six years ago. The RFC itself is a bit light on details, but there is a lot of discussion there. Perhaps some things have changed since then.
This will also have a wider ecosystem cost outside of Rust and Cargo themselves: people's expectations around MSRVs etc. will change, especially for central crates. This may not be a burden the maintainers of those crates want to take on.
I know libc has been discussing its MSRV recently, for example, and one thing they were talking about was -2 (i.e. two releases behind current stable), though there seemed to be a fair bit of opposition to it being that tight. This would effectively block that from being chosen.
But other widely used crates have that tight an MSRV. Clap, for example, is at -2. That would probably not be viable if Rust started doing LTSes.
Another effect is that many users who don't need to be on an LTS would likely default to it (because they are used to doing that for things like Ubuntu, and don't know that Rust is far more stable). This would again slow down the overall ecosystem.
In summary, I don't think it is worth the downsides.
Doing an LTS would require pretty major changes to Cargo. Currently it has no concept of "greatest supported version", and always chooses the latest crate versions, even if they are for whatever reason incompatible with old Rust toolchains (e.g. they use new language features). This means that trying to stay on an old Rust release and to use cargo-based dependency management is often an exercise in frustration. You need to have all dependencies pinned, and upgrade them very rarely and very carefully.
Besides the changes to Cargo, it would require major changes from the ecosystem as well, since generally most people expect end users to be able to easily use the latest Rust toolchain (or at least latest-N). I find it unlikely that the ecosystem at large would be willing, or even able, to make it possible to stay on an old Rust toolchain for a long time; imho the only scenario where that happens is if most foundational crates explicitly pin their supported Rust version to an LTS, ignoring all new language development.
That's mostly the same issue that LTS Linux distributions face: the packages are all stuck on some ancient outdated version, and the only way to upgrade is usually to do it for an entire distribution. But at least in that case there is established process and infrastructure to test & pin old packages, and there are people whose job is to provide compatible snapshots of the ecosystem.
This is an accurate prediction. Now it may sound a little harsh (that is not my intention though), but as a crate maintainer myself I can tell you all that even if this RFC is merged, I'll be 100% ignoring it.
The reason is that it would incur costs for me (e.g. ensuring that my libs work with such LTSes, and even if others ended up doing that, I'd still incur the cost of having to merge patches that honestly have no value for me), but wouldn't give me anything in return that is remotely worthwhile.
And I likely wouldn't be alone in that assessment.
Other crates still wouldn't even be able to comply, e.g. because they need features that are more recent than the latest LTS.
Bottom line: many crates within the ecosystem would not work with such an LTS release, therefore kind of defeating the point I think.
How would you handle bug fixes? A fair number of bug fixes require non-trivial refactorings to become feasible, but doing those refactorings would defeat the whole purpose of an LTS release. And even for bug fixes that don't need non-trivial refactorings, doing a backport may actually decrease stability, as the original fix could have made an assumption that would not result in a compilation error, yet would still manifest as an ICE at runtime. AFAIK we already regularly run into this with beta backports. Doing LTS backports would make this problem so much worse. If on the other hand you don't handle those bug fixes, you could just take a snapshot of an arbitrary release, freeze it, and call it a day. See also
Considering that I'm the maintainer of clap, and that the person who proposed the libc MSRV change "liked" this post within minutes of it going up, the situation is a lot more nuanced than looking at the landscape today would suggest.
A big problem with the MSRV conversation is that most MSRVs are based on vibes and hearsay. That makes it difficult to make a decision around an MSRV.
If instead the conversation was about:
- What fixed point do you support (stable, LTS, Debian stable)?
- What upgrade grace period do you offer by delaying the update to your fixed point by N releases?

then the conversation becomes dramatically simpler.
There will still be a cost. The MSRV RFC is actually making me consider extending my MSRV support policy, the caveat being that it would mean offering more LTSes of clap and other packages. So long as I leave space for backporting changes by bumping the minor version on an MSRV change, the overall support load of an LTS for clap is relatively low. I have been offering LTS support for clap v2 and v3, and I think it involved maybe 2 PRs or so.
Not everyone will have to pay that cost. The fact that we offer an LTS doesn't mean people have to support it. Whether a package supports the LTS is more indicative of its maturity and target audience.
Cargo does have a "greatest supported version" (you can put an upper bound in a version requirement), but you should almost never use it, especially for supporting an MSRV / LTS.
The good news, though, is that we are wrapping up RFC #3537, which defines what the behavior should be for an MSRV-aware resolver. In the meantime, Cargo has had unstable support for an MSRV-aware resolver since August last year, to unblock people who need something now.
For applications and new users, yes. But a lot of established packages, like tokio or clap, do not assume people will be on the latest.
Rust-analyzer is a big case where the assumption is that people are on stable, and that is called out.
An LTS doesn't have to mean stagnation:
- People can choose not to support the LTS. That's fine. I'm expecting my applications won't.
- People can maintain their own LTS. Clap maintains v3 (and maybe still v2?) as an LTS, and I would likely extend that to "the last release that supported an LTS" while keeping main moving forward.
- Polyfills. While it's unclear when cfg(version) / cfg(accessible) will be stabilized in rustc and then supported in Cargo, we realized yesterday that having an MSRV-aware resolver enabled by default allows you to get that behavior: users on older toolchains simply resolve to an older release of the crate that still carries the fallback (one stable-compatible way to do the version detection itself is sketched below).
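For illustration, here is a hedged sketch of how crates can already do this kind of version detection on today's stable toolchains with a small build script, without waiting for cfg(version); the cfg name `has_stable_midpoint` and the 1.85 cutoff are made up for the example:

```rust
// build.rs -- probe the compiler version and set a custom cfg so the
// library can gate between a std API and a local fallback, e.g. with
// `#[cfg(has_stable_midpoint)]` in lib.rs.

use std::env;
use std::process::Command;

fn main() {
    // Cargo tells build scripts which rustc it will use via $RUSTC.
    let rustc = env::var("RUSTC").unwrap_or_else(|_| "rustc".into());
    let output = Command::new(rustc)
        .arg("--version")
        .output()
        .expect("failed to run `rustc --version`");
    let version = String::from_utf8_lossy(&output.stdout);

    // `rustc --version` prints e.g. "rustc 1.85.0 (abc123 2025-01-01)";
    // grab the minor version number between the first two dots.
    let minor: u32 = version
        .split('.')
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(0);

    if minor >= 85 {
        // Compile the crate with `--cfg has_stable_midpoint`.
        // (On newer toolchains you may also want to declare the cfg to
        // silence the `unexpected_cfgs` lint.)
        println!("cargo:rustc-cfg=has_stable_midpoint");
    }
}
```

This is essentially what existing helper crates (e.g. autocfg or version_check) automate; whether the extra build-time probing is worth it is exactly the kind of per-package maturity and audience judgment call described above.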
I'm incredibly pessimistic about this. There will be expectations from application users (who are not clued into how the Rust ecosystem works) which create unnecessary conflict when they aren't met.
This will create pressure on libraries from those application developers, and the libraries that give in will in turn see their APIs stagnate. I'd much rather see libraries update and use the latest and greatest features.
Would an MSRV-aware resolver help? Absolutely. But it still increases the maintenance burden, due to pressure to backport bug fixes and maintain multiple releases of a library.
The fact that you are okay with this as a clap maintainer doesn't mean that everyone is. I have no idea what the majority of maintainers would think about this (I'd suggest a poll if this idea gets that far).
Also, as @bjorn3 pointed out, LTS backports tend to be less than stable. Anecdotally, I have had far fewer bugs using Arch Linux than with Ubuntu LTS. And the few bugs I have had were very easy to resolve (roll back a single package for a week or so until it gets fixed), while on Ubuntu things often haven't been fixed until the next LTS.
It is not just that a Rust LTS will likely be buggier; the whole LTS crate ecosystem likely will be. (EDIT: if it is based on multiple branches and backports, that is.)
Another aspect is that it will take resources away from new development, which actually matters more to most people. If a developer is spending time on a complex backport, that is potentially time not spent on getting some highly wanted feature (coroutines, min-specialisation, pick your favourite) ready for stabilisation.
As you yourself pointed out, how do you motivate people to work on backports in an open source community? I don't think you even should. Rather, leave it to Ferrocene (if this is something they actually want to do) to backport and maintain an LTS. The community at large shouldn't have to pay for something very few of us actually want or use.
Language stability is a virtue in itself. It means that people working in the language don't have to continuously re-learn what's possible, and don't have to worry too much about whether everyone trying to compile their code has a suitably new compiler. I have been known to say that the 10-year release cycle for the C standard is the one and only reason why C might still be an appropriate choice for new programs.
MSRV would immediately become a lot less important if new language and library features came out at the same cadence as editions, rather than continuously. Compiler-only updates -- bug fixes, code quality improvements, compiler speed improvements, better error messages, etc. -- could continue to happen at the current pace.
"What is the oldest version of each of my dependencies that my program can use, including the core language, taking into account cross-compatibility issues?" is only half of the question.
If you care about minimizing the amount of time anyone in the future (including yourself) will have to waste on updating your program for dependency churn, then you also care about making sure that your program is not artificially constrained on the high end of the dependency range. In particular, Rust's default of not allowing updates across major version boundaries (e.g. foo = "1.0" doesn't allow an automatic update to v2.0 of foo) may be undesirable. Yes, v2.0 may break your code, but an awful lot of the time API breaks only affect some uses of the dep.
Version-range dependencies aren't even really the right way to model the problem. What would it take to model one's requirements in the style of a caniuse feature matrix?
That would definitely mean stagnation, in the worst possible form, especially since Rust is such a young language with many great features still in the works. Maybe it will make sense in 10 years.
Besides, even for C/C++ that isn't how it works. GCC/Clang will have support for c2x, which enables draft (or mostly stabilised) features of the next version. When the new standard comes out, not everything will be implemented, but you can go look at cppreference and see whether a specific compiler version supports a specific part of the standard you want.
So for actual implementations it is more like Rust, except you have to opt in to the upcoming standard. But Rust is not a standards-committee-driven language, so it wouldn't work the same.
Gradual stabilisation has more advantages: learnings from stabilising previous features can go into future features more easily than if you stabilise in big chunks. It is more likely that you will find unsound interactions between features too (most people don't use nightly, and those who do use as few nightly features as they can get away with).
I don't think this is a problem in practice. Dependabot / Renovate takes care of non-breaking changes, and breaking ones aren't that common. As an application developer, I estimate that I spend less than half an hour a month on average on upgrades. Not a big problem. I don't think I've had any semver-major changes that were actually breaking for me yet this year.
It is indeed a problem that I can't somehow specify that a certain major version bump is not a breaking change for my use case (possibly transitively). That would become even more of an issue if you try to support LTSes, though.
Having short development windows means we generally have to assume master is releasable at any point, which puts natural pressure on people not to merge half-implemented features as stable. This avoids the burnout of having to go back and "fix everything". If you look at the links in Prior Art, most projects actively staff alpha, beta, and release candidates.
This also allows people to test features and provide feedback quickly, which makes it easier to fix things in a timely manner. If we had a 3-year window, then a fix might not be available to you for 3 years. We already have a problem that "stable" has gotten so good that we don't have enough people testing nightlies to give feedback on unstable features.
This also removes the pressure of deadlines, which helps avoid burnout. When you have longer gaps between releases, it becomes more critical that you hit a given one, or else you have to wait a significant amount of time. If I miss one 6-week window, the cost of waiting 6 more weeks is trivial.
I think there was a discussion about this before (but I don't know what happened to it): what if we had semi-stabilised features? Things could change, but there would be a deprecation period for removals, or migration paths, so there wouldn't be an overnight rug pull. While I don't have any interest in using nightly, I wouldn't mind such a semi-stable approach based on regular releases (maybe a specific flag on stable to opt in to this sort of thing, or maybe I'd have to be on beta).
Going in that direction seems far more useful than an LTS.
(In my experience, what actually happens is that people opt in to specific C99 or C11 features that they want, when available, with polyfills, but the code continues to work with older compilers. IMO this is a desirable state of affairs.)
I am completely fine with waiting three years (or more!) for a new language or std feature and I don't think that makes me weird. What @Vorpal keeps calling "stagnation" I call "sensible engineering conservativeness."
I recognize the need for feedback on new features but I think we need to find a way to do that that doesn't involve new features being added to the core language on a six-week cadence. Lean harder on prototyping in crates, maybe?
Then don't use them. One of the goals of the Edition system is to serve as a sync point for marketing / documentation and act as a meta or virtual release that encompasses the last 3 years.
Thank you so much for this, very interesting read! I think this article (and the others you linked) is pretty damning for the idea of a Rust LTS.
The poor stability of LTSes in practice, and the wasted effort that could have been better spent on the up-to-date version, is something I haven't seen @epage really address here in his replies, but maybe I missed it. I believe it is important to respond to all points of criticism, not just those you think you have good answers for. (I don't know if that is what happened, but that is what it looks like; I hope there is a better explanation.)
To be clear, I'm not trying to have all the answers but to uncover problems like this, so people know what the problems are if we do move forward with this. This is why I'm fine saying "we have a question for this" until someone does try to answer it.