What if we did LTSs?

Apart from C, C++ and possibly Fortran and Ada (which I don't know) I don't know of any language that moves that slowly.

Go, Python, Julia and Ruby all have roughly yearly or twice-yearly releases, I believe. So more often than Rust editions, but not as often as Rust releases. I know Python recently (in the past couple of years) switched to a tighter release schedule, though I don't remember the details off the top of my head.

Haskell apparently doesn't have a fixed schedule (or didn't in 2017, couldn't find any newer info). Erlang is yearly.

JavaScript is in practice on the other extreme: you get new features in JS whenever Chrome releases (as much as I use and love Firefox, it sadly doesn't really matter any more). That is every 4 weeks!

Maybe you could point to something like the language defined by bash or zsh; they change rather slowly. I would argue they have either stagnated or are considered effectively done by their authors. The last major feature I can remember in bash was associative arrays, and that was probably a decade ago at this point.

Then there are of course languages that are in maintenance mode or otherwise past their prime. I don't know that Perl gets many new features, for example. It is also not a language I know of any new projects picking up at this point. It will be with us for a long time, though, because a lot of tools are still written in it.

PHP is in the same boat: it won't die until MediaWiki and WordPress switch away (and a bunch of other things), but it too is past its prime. It is not something people choose for new projects any longer either.

I could go on (Cobol, anyone? Probably pretty slow). And there are many languages I didn't check (Java, C#, Kotlin, Swift, ...). But I think we can draw some conclusions:

  • Rust is definitely on the faster end of the spectrum (though JavaScript beats it out).
  • C++ (and C even more so) are on the really slow end of the spectrum.
  • The majority of what I would consider popular active (significant new development on and in the language still) languages seems to release once or twice per year.

I personally think Rust is fine where it is, though maybe you could argue for adjusting it to something that evenly divides the year. (I don't think it really matters when exactly the release falls calendar-wise, but maybe it does to someone.)

5 Likes

I think there's a dimension to this that I haven't seen discussed yet: The age of the language.

As alluded to before, Rust is a young language, and has quite a few gaps in its functionality. Closing those gaps requires one of the following:

  1. Keep merging features as soon as they're ready, as is being done now. This implies maximum evolution velocity under the usual constraints of FOSS projects (e.g. the number of contributors, and the knowledge those contributors bring to the table)
  2. Wait with merging features until some Blessed Moment. This is stagnation, and changing what we call it doesn't change the nature of the beast.

Given the age of the language, option 1 makes a lot more sense to me. Comparing to C is a bit of a red herring, because C is an Ancient™ language, i.e. it's pretty much at the end of its evolutionary road. The changes being made at all are pretty minor and are things C authors could definitely do without, as evidenced by the fact that C has been a productive language without them. Rust is a lot younger and, as I said, has gaps in its functionality. Insofar as there's a mutually exclusive choice to be made, filling in those gaps (i.e. enabling new use cases for Rust) is more important in the short term than catering to a subset of the community, because the former serves the entire community while the latter decidedly does not.

This works for library features, not for language features. And for library features this is already being done.

7 Likes

I find it plausible that adding more features to the language may serve a larger portion of the current Rust userbase, but I don't think "the entire community" want or need new features (for many projects, Rust has been a useful, productive language for years already), and evidently some nonzero minority do want more stability instead.

(I, for one, am not interested in new language features, though I readily acknowledge that there are other people who do want or need them.)

Perhaps not initially. But that changes pretty fast when someone in that group wants to use a crate that couldn't have been written without such features.

Stability has never really been Rust's issue, AFAICT. The slowdown being asked for has nothing to do with stability. In fact, as mentioned before in this thread, such LTSs can actively lead to less stability overall, the analogous cases being LTS releases of the Linux kernel, as well as various Linux distros. In the case of Ubuntu I can attest to this personally. It's one of the two major reasons I want nothing more to do with software produced by Canonical.

Or is there a lot of code written in 2016-2017 that doesn't compile anymore? If so that would be interesting to explore in a different IRLO thread.

4 Likes

Rust Editions are not a version of the language. Edition 2015 has been getting new features added every 6 weeks for the last 9 years. You can write Rust code that is compatible with Edition 2015, but is too new for Rust from 12 weeks ago.
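To make that concrete, here is a small illustration of my own (not from the thread): this function is perfectly valid Edition 2015 code, yet it requires a recent compiler, because `let`-`else` (stabilized in Rust 1.65) is gated on the compiler version, not on the edition.

```rust
// Valid under `--edition 2015`, but requires rustc >= 1.65:
// `let ... else` is a language feature tied to the compiler
// version, not to the edition.
fn first_char(s: &str) -> char {
    let Some(c) = s.chars().next() else {
        return '?';
    };
    c
}

fn main() {
    assert_eq!(first_char("hi"), 'h');
    assert_eq!(first_char(""), '?');
    println!("ok");
}
```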

4 Likes

I think the current Rust stable is a more complete and usable baseline than it was a few years ago, when LTS was discussed previously. Upgrades now are less often must-haves that patch big holes in the language, and more often nice-to-haves that improve its usability. I think a 1-year LTS would be tolerable, although I still strongly prefer rolling releases.

A lot of changes in Rust now are just standard library changes, many of which are useful, but technically trivial (like .inspect_err()).

JavaScript solves this with "polyfills" that backport such trivial functions to older browsers. In Rust that is not as seamless: Rust version checks need a build.rs, traits need to be imported, and polyfills need special care not to cause type-inference ambiguities or unused-import warnings on newer Rust. It's easy to add a method, but hard to update the behavior of an old one. It's not possible to backport new enum variants.
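To make the "traits need to be imported" point concrete, here is a minimal sketch (the trait and method names are mine, not from any real crate) of how `.inspect_err()` could be polyfilled for older toolchains via an extension trait:

```rust
// Hypothetical polyfill for `Result::inspect_err` (stabilized in Rust 1.76).
// The extension trait must be imported at every use site, and the method
// is deliberately renamed to avoid ambiguity with the inherent method
// on toolchains where it already exists.
pub trait InspectErrPolyfill<T, E> {
    fn inspect_err_compat<F: FnOnce(&E)>(self, f: F) -> Self;
}

impl<T, E> InspectErrPolyfill<T, E> for Result<T, E> {
    fn inspect_err_compat<F: FnOnce(&E)>(self, f: F) -> Self {
        // Peek at the error by reference, then hand the Result back unchanged.
        if let Err(ref e) = self {
            f(e);
        }
        self
    }
}

fn main() {
    let r: Result<i32, &str> = Err("boom");
    let mut seen = String::new();
    let _ = r.inspect_err_compat(|e| seen = e.to_string());
    assert_eq!(seen, "boom");
    println!("ok");
}
```

Even this small example shows the friction: callers must `use` the trait, and naming the method the same as the eventual std method would risk inference ambiguity on newer compilers.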

I wonder if it would be possible to decouple std from the compiler more, and make it possible to patch or backport it to old compilers. Maybe support running proc macros on entire modules or crates, which would be able to rewrite some new syntax to old Rust? (e.g. #[default] on enums is neat, but it doesn't need newer Rust, only a newer proc macro).
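For reference, the `#[default]` variant attribute mentioned above (stable since Rust 1.62) looks like this; as the post notes, a proc macro could in principle desugar the same derive into a plain `impl Default` for older compilers:

```rust
// `#[default]` on a unit variant, handled by `#[derive(Default)]`.
// On old toolchains a proc macro could expand this to a manual
// `impl Default for Mode { fn default() -> Self { Mode::Fast } }`.
#[derive(Debug, Default, PartialEq)]
enum Mode {
    #[default]
    Fast,
    Slow,
}

fn main() {
    assert_eq!(Mode::default(), Mode::Fast);
    println!("ok");
}
```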

edit: std immediately adopting cutting-edge/unstable rustc features may be a source of problems too. It makes bootstrapping and backporting harder. It also forces alternative Rust implementations to implement more features than needed for Rust stable, if they want to reuse std and pass its test suite. So if you want to make an LTS, I dare you to start with libstd first :slight_smile:

4 Likes

While it'd be great to have `cfg(version)` and `cfg(accessible)` in the language and cargo, an MSRV-aware resolver can be used to make polyfills.
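For context, the MSRV that such a resolver works from is the `rust-version` field in the manifest; a sketch of the relevant `Cargo.toml` fragment (the package name is made up):

```toml
[package]
name = "polyfill-consumer"  # hypothetical package
version = "0.1.0"
edition = "2021"
# Declared MSRV: an MSRV-aware resolver can use this to pick
# dependency versions that still compile on this toolchain,
# letting a polyfill crate ship different code per Rust version.
rust-version = "1.70"
```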

1 Like

FWIW, I'd expect a rustc/sysroot LTS to be more a communication thing than anything else. Only fixes for security issues in the tooling or incorrect behavior of correct code would potentially get backports. Any changes that accept new code are a no go, because then old-conservative-dev might be forced to update their tooling. This class of consumer really does want "stagnation;" or rather, they want to never be forced to stop and think about their tooling, and for it just to exist as-is through the scope of their project.

If people want more improvement out of LTS than that bare minimum, they don't actually want LTS, imo, they want stable to be called LTS so that people looking for "long term support" for their tools (rather than long term compatibility) are satisfied.

Perhaps strangely enough, I think aligning releases to the calendar month could improve the image there. The current cadence means that any cargo add or cargo update can surprise you with "your toolchain is outdated; you need to update right now!" Aligning rustc to the calendar doesn't change libraries staggering their adoption of new API surface, but it does make it a bit easier to plan around. MSRV-aware dependency resolution may also help, or it might actually amplify the annoyance.

To reiterate, I think a good portion of LTS desires is fed by a desire to avoid unexpectedly being "forced" to update the toolchain to keep their project compiling. However false that notion may be (relying on wanting latest and greatest from the entire ecosystem except for the sysroot), I firmly believe that this is in no small part a motivating factor. Nobody likes being told that in order to X (update dependencies) they need to first Y (update toolchain) irregularly. If Y were always a prerequisite to X, it'd be less of an issue; it's the random presentation of a roadblock that wasn't there last time that's the irritant.

6 Likes

I used to maintain standback, which is based on JavaScript's ideas of polyfills. I stopped updating it because I was the only one using it.

1 Like

I hadn't known of it before. It looks like it requires manually managing what level of the standard library you need, which is less than ideal. I'm also unsure how much I'd want an all-in-one polyfill versus small ones, due to build times (seeing as build times were the reason I quickly adopted 1.70 in clap for IsTerminal).

Do they need help from rust upstream for that? Rust already ships known soundness bugs. LLVM has known miscompilations. Rust also does not aim to be resilient against malicious inputs. Most of the time those issues are not urgent, sometimes they persist for years. Fixes arrive when they're ready.

So what would an LTS be patching here? If known bugs can be tolerated for years why would they suddenly have to be backported? ISTM that it would mostly involve security bugs in the standard library or severe miscompilations or compiler crashes. The former usually don't require changes to the compiler so they should be backportable. The latter are usually identified early in the release cycle and get a point release.

Basically, rustc doesn't need the endless patch stream that the linux kernel or a web server might need. So if someone wants to keep using an older version they can probably just do that. With all the bugs it already has.

12 Likes

Agreed, and further points in favor of this: I don't think I remember ever seeing a full-on security bug in rustc itself (CVE etc.). I do believe there was one in cargo, though, about file permissions on downloaded crates (some time last year). Since part of cargo's job is downloading things, that makes it more security-sensitive. The same goes for rustup. And if you stuck to an old version, you might need those backports.

But the actual known bugs in rustc? Soundness, compiler crashes and ICEs yes, but security issues, not really. And usually there is a simple workaround for these: "don't do that then".

It might be interesting to observe why rustc doesn't have security bugs. I believe the answer is simple: rustc does not form or maintain a security boundary between different levels of trust (proc macros make that impossible, even). This means all input is trusted. As such there is very little room for any actual security bugs.

(That is (kind of) an argument against proc macro sandboxing that has sometimes been suggested: it would be a security boundary. Which would mean dealing with CVEs and threat models and all that. Nobody wants that churn. :wink:)

1 Like

JS polyfills and transpilers are most useful ime because they cover ~everything in a standard/target, so you ~never need to think about what you're using. So I think the equivalent for Rust std would want to include ~everything.

For the comptime conscious, (default on) features (likely at the same granularity they were on nightly) would buy back some, and using the latest compiler (given MSRV resolver based polyfills) would buy back all, modulo the recomp for crate configuration changes. (Single package multiple lib and incremental even further mitigating that; it's a far off dream that encapsulated, generic-unseen transitive dependency updates wouldn't require recomp, only relink. We pay for the static linking defaults here.)

But honestly, the "solution" for std is obvious, if still quite difficult: std-aware cargo, and a std which is (at least somewhat) independent of the specific rustc version. That's basically what standback and other polyfills are doing: taking the parts of std which aren't tied to the compiler and trivially "porting" them to work on older toolchains.

There was one — original plugin functionality would indiscriminately load dylib plugins from a directory.

2 Likes

With my "security response" hat on, I agree -- we have absolutely made that argument, that we don't want to consider proc macros a security boundary.

2 Likes

That's an unfortunate incentive misalignment; one of the ways to get better security is to build (sandboxes|sanitizers|validation layers) which, even if they are not a perfect protection against arbitrary malicious code, make it less likely that “this code has been perturbed by an attacker into producing weird output” can be manipulated into causing arbitrary side effects on the other side of the boundary.

I don't mind adding such protections as best effort, but I think we need to be really careful about what we promise or imply about that. It would be a high bar to go from untrusted execution and output into the trusting environment of rustc and LLVM.

1 Like

So after I wrote the below I realised it is getting way off topic, but these are still things I want answers to, so if you are not interested, feel free to skip.

That sounds interesting. Is that something that has an RFC? Or just vaguely being discussed? It is hard (impossible for me) to keep up with everything going on in Rust nightly, especially since I stick to stable anyway.

Off the top of my head I can already think of many issues, including that std relies on nightly features even in stable Rust. It also relies on undocumented behaviour. That would be difficult to keep doing.

But it would be good if std didn't have to do those things, as it would open the door for other crates to do things that only std can currently do.

Then should alloc also be made independent? It would be great for Rust in the Linux kernel if it were; they already fork it.

Core can probably never be independent, but that seems OK.

Another interesting consequence is that it would make it easier to replace std with an alternative if you wanted. Maybe I want a rustix-based or otherwise non-libc-based std replacement. That would need some cargo feature to say "replace crate X with Y, transitively", which would be useful for other reasons anyway.

1 Like

The main reason for offering an LTS is so that people can get some (supposedly) especially important bug fixes while not having to do a proper update. But with Rust this just doesn't apply, due to the stability policy; and the breaking changes we do make are often exactly those important bug fixes!

I don't think LTS is the solution to the uncertain-MSRV problem. I don't have a solution either, but LTS seems like a pretty weird one that just adds a bunch of essentially useless work to the already overworked project.

If a vendor wants to support old versions and can't update, they can try to spend the time for backports themselves.

8 Likes

I think these are a great argument for someone like Ferrous to offer a paid LTS that they maintain, just like Ferrocene is a paid product to cover the extra work.

I think "we expect you to move reasonably-promptly to stable, and if you can't do that for some reason you pay to support the extra work of backports" is an entirely reasonable position for rust-the-open-source-project. We shouldn't expect unpaid volunteers to do work that mostly helps lumbering enterprises.


Really, though, I look forward to MSRV-aware resolving, so that the majority case of "I'm not upgraded just yet so don't want cargo update to break me" will be covered.

Let's see how that impacts things before going any further.

27 Likes

Agreed. Sandboxing of proc macros is useful for other reasons, such as reliably determining what the macro depends on for accurate rebuilds. But it isn't a security boundary, not least of which because it can generate code that will become part of the binary.

3 Likes