The thread about possibly doing LTS releases of the toolchain seems to have reached a consensus that there is no particular appetite for doing this within the project, and it’s unclear whether an LTS would even be solving a problem that people actually have. At the same time, that discussion did turn up a bunch of problems that maybe should be solved, and it seems to me that solving them might also address the scenarios that led @epage to start the earlier thread.
Here’s my summary of the “bunch of problems”:
1. The language does not make it easy to support both old and new versions of either itself, or of dependency crates, at the same time. (That is, it can be difficult or even impossible to write “polyfills” for functionality added in new versions of one’s dependencies, particularly across breaking changes.)

2. People working on the core language are frustrated about not getting enough feedback on features in development, and are reluctant to do anything that will make the process of pushing new features out to stable any slower than it already is. To some extent this frustration is shared by all library crate authors as well.

3. It can be very difficult for downstream repackagers to stay up to date with the toolchain. As a consequence, crate authors face demand (in the economic sense) from repackagers and end users to keep their code working with old versions of the toolchain, even if they have no other reason to do so. Continuing to support old toolchains may mean continuing to support old versions of dependency crates as well.

4. It can also be very difficult for anyone doing software development in Rust to keep up with the pace at which the language itself is revised, and the pace at which their own dependencies are updated. This is particularly bad in contexts like “mission-critical and functional safety systems” (quoting Ferrocene’s mission statement), where any change at all may involve repeating expensive, time-consuming certification procedures.
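Problem #1 is the most concrete of the four. Today, the usual workaround when a dependency adds a method in a newer version is an extension trait. The sketch below (every name in it is invented for illustration; `Buffer` stands in for a type from an old version of some dependency) shows why this works for added methods, and also where the friction comes from:

```rust
// Pretend `Buffer` comes from an old version of a dependency that
// lacks `is_empty`.
struct Buffer {
    data: Vec<u8>,
}

// The polyfill: an extension trait supplying the missing method.
// Every downstream module that wants `is_empty` must import this
// trait, and must stop importing it once the dependency is updated --
// exactly the "hard to support old and new at once" friction.
trait BufferExt {
    fn is_empty(&self) -> bool;
}

impl BufferExt for Buffer {
    fn is_empty(&self) -> bool {
        self.data.is_empty()
    }
}

fn main() {
    let empty = Buffer { data: Vec::new() };
    let full = Buffer { data: vec![1, 2, 3] };
    assert!(empty.is_empty());
    assert!(!full.is_empty());
    println!("polyfill ok");
}
```

Note that this pattern only covers additions: a changed signature or a semver-breaking rename has no comparable shim, which is why the problem statement above says “difficult or even impossible.”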
In the earlier thread I said:

> MSRV would immediately become a lot less important if new language and library features came out at the same cadence as editions, rather than continuously.
I still think that the basic idea there has merit: if we could cut the frequency of “have to spend some time catching up” events from once every six weeks to once every three years, that would make problem #3 much less troublesome and would also partially address problem #4. However, it would obviously make problem #2 even worse. Since I wrote that, it’s occurred to me that we can have it both ways. Like this:
## Make toolchain updates trivial, using stability guarantees
Suppose we start guaranteeing that:
- All toolchain releases that have edition N (e.g. 2027) as their highest supported edition will be buildable using the first toolchain release that had edition N-1 (e.g. 2024) as its highest supported edition, plus a snapshot of crates.io and all non-Rust dependencies (LLVM most importantly) as they were on the date of that first release.

  This will mean repackagers only have to worry about updating the dependencies of the toolchain itself when the edition changes. Repackagers who maintain a chain of bootstrap compilers going all the way back to what mrustc can handle will no longer need to include every single release in that chain, only the first release from each edition. That should pretty much eliminate problem #3.

  The principal cost of this guarantee is that the compiler proper will need to maintain an MSRV that may be as much as six years old. However, this applies only to the compiler proper, not to `std` or `clippy` or any other toolchain component that gets built using the new compiler. I would like to think this is not too much to ask of the compiler team, and I suspect it could be made substantially easier by addressing problem #1 (see below).

- The set of language and runtime library features that are enabled by default will only ever change when you choose to change the edition your own code is compiled with.
  This means developers will no longer have to worry that doing development against the current toolchain will accidentally break builds using their MSRV, which is the big reason I see for why one might hesitate to update one’s own personal copy of the compiler.

  We already make strong guarantees that code that compiles using toolchain release 1.x will still compile using release 1.y for any y > x, unless you change the edition. Think of this as extending that guarantee in the opposite direction as well: anything that compiles using 1.y will compile using 1.x, back to the point where x is so old that it doesn’t support the selected edition, unless someone opted into a language feature early (see below).

  The cost of this guarantee is that we substantially slow down the rate at which new language features reach “production,” exacerbating problem #2. I think we can fix that by making it easy to opt into pieces of the next edition, as it were. We already have that mechanism for experimental features; we “just” need to extend it to stuff that’s stable but not yet enabled by default.
## Extend `#[feature]` to stable features not part of the current edition

This is the way we avoid exacerbating problem #2 while addressing #3 and #4. If you want to use something that’s stable, but not part of the current edition, you just ask for it with `#[feature]`, in the same way you would have while it was experimental.
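As a hypothetical sketch of what that might look like (the attribute spelling reuses today’s nightly syntax, and the feature name is a placeholder, not a real feature), a crate pinned to edition 2024 could opt into one later feature like this:

```rust
// Cargo.toml for this crate says: edition = "2024"

// Hypothetical: on a *stable* toolchain, enable one feature that the
// compiler has stabilized but that is off by default until edition 2027.
#![feature(some_2027_feature)]

// Everything else in the crate behaves exactly as edition 2024 code.
```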
This is a tiny bit of friction, but it pays for itself in that you no longer have to guess what your MSRV is. It’s the first version that supports your selected edition plus all of the `#[feature]` tags you enabled, or the maximum MSRV of all your dependency crates, whichever is newer, and Cargo can (should be able to) calculate it for you.
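To illustrate the computation (the automatic derivation is hypothetical, and the crate name and version numbers below are examples, not claims about real crates):

```toml
[package]
edition = "2024"   # baseline: 1.85 was the first release supporting edition 2024

[dependencies]
some-dep = "2"     # suppose its manifest declares rust-version = "1.70"

# If this crate also enabled one #[feature] that first shipped in, say,
# 1.90, the derived MSRV would be max(1.85, 1.90, 1.70) = 1.90 -- today
# the equivalent rust-version field has to be maintained by hand.
```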
This also takes a step in the direction of allowing people to use not-yet-stabilized features with the stable toolchain, which, on net, I think would be a good idea (for instance, it would reduce problem #2 even more).
The cost of doing this—which we are already paying today, by default, I’d like to point out—is that you may be forced to take an MSRV bump in order to update a dependency crate. But this is just a special case of problem #1! You only need to update a dependency crate when you need either a bug fix or a new feature from that crate, but the ecosystem pushes people toward updating eagerly because it’s difficult to support old and new versions of dependencies at the same time. So the solution is to address problem #1 and then encourage people to keep their dependency ranges as wide as possible.
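For what “as wide as possible” can look like in a manifest (crate names invented; Cargo already accepts multi-major ranges, they are just rarely written):

```toml
[dependencies]
# The common style: a caret requirement, shorthand for ">=1.4.0, <2.0.0".
# Anything in the 1.x line satisfies it, but nothing older or newer.
narrow-dep = "1.4"

# A deliberately wide range spanning a breaking release, appropriate when
# this crate only uses APIs that survived the 1.x -> 2.x transition.
wide-dep = { version = ">=1.4, <3" }
```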
## Invest in language features that facilitate polyfilling and wide dependency ranges
I know that polyfilling is not easy today, but I don’t know what would help, except in the simplest cases. One thing that I could have personally used not so long ago was the ability to inject a polyfill definition into someone else’s impl:
```rust
#[polyfill(feature(int_roundings))]
impl usize {
    pub const fn checked_next_multiple_of(self, rhs: Self) -> Option<Self> {
        match try_opt!(self.checked_rem(rhs)) {
            0 => Some(self),
            // rhs - r cannot overflow because r is smaller than rhs
            r => self.checked_add(rhs - r),
        }
    }
}
```
Because this isn’t currently possible, I had to make the polyfill a free function, and that means I’ll have to change the caller if and when that particular crate’s MSRV rises to 1.73 or above, instead of just deleting the compatibility definition. It also means I don’t get `std`’s definition of `checked_next_multiple_of` even if it’s available.
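For comparison, the free-function workaround described above looks roughly like this (a sketch: the body mirrors std’s, but is rewritten without the `try_opt!` macro, which is internal to std; `checked_next_multiple_of` itself has been stable on the integer types since 1.73):

```rust
// Sketch of the free-function workaround. Since user code cannot add
// inherent methods to `usize`, the polyfill is a plain function that
// every caller must name explicitly.
pub const fn checked_next_multiple_of(lhs: usize, rhs: usize) -> Option<usize> {
    match lhs.checked_rem(rhs) {
        None => None,          // rhs == 0
        Some(0) => Some(lhs),  // already a multiple
        // rhs - r cannot overflow because r is smaller than rhs
        Some(r) => lhs.checked_add(rhs - r),
    }
}

fn main() {
    assert_eq!(checked_next_multiple_of(7, 3), Some(9));
    assert_eq!(checked_next_multiple_of(9, 3), Some(9));
    assert_eq!(checked_next_multiple_of(10, 0), None);
    assert_eq!(checked_next_multiple_of(usize::MAX, 2), None);
    println!("ok");
}
```

Every caller writes `checked_next_multiple_of(n, m)` instead of `n.checked_next_multiple_of(m)`, and each of those call sites is what has to change once the MSRV reaches 1.73.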
I don't think we need to solve polyfilling in order to make any of the other changes I suggested, but I do think it needs solving. Probably the right thing at this stage is to start a small working group on the topic to hash out what features would ideally be added.