Rustup makes little sense

Maybe I am not really authorized to use the internals forum, but it might be good to hear an opinion on the setup process from a complete newcomer to Rust (someone who never cared about Rust at all).

First of all, there is too much to read just to understand the ways of installing Rust. You literally get linked to 4 different docs/sites:

https://rust-lang.org/learn/get-started/
https://rust-lang.org/tools/install/
https://rust-lang.github.io/rustup/installation/index.html
https://forge.rust-lang.org/infra/other-installation-methods.html

It makes you think: "holy shit, another overcomplicated dev environment to set up", because even the official way wants you to use an extra tool or install script. Only after you read the whole rustup documentation and see what you actually get with it do you realize how simple it really is.

Actually, rustup is a mostly useless tool.
Why do we have package managers with all those features if every new project comes up with a fancy new install method, reinventing the wheel and breaking standards? We hate commercial monopolies for this kind of thing, yet we make the same mistakes. Imagine every little tool coming with its own little install tool: we would need a full script just to run the update command for every tool. It makes no sense.

Most people will hate me for this, but even Microsoft's package manager on Windows handles this process better and more cleanly. I can simply install the latest Rust toolset (nothing more than a zip download behind the scenes) with winget install Rustlang.Rust.GNU (there are different package options); it automatically sets up the PATH and gets updated through winget. Or, if I need a specific stable version: winget install Rustlang.Rust.GNU --version X.

So please reconsider your "rustup" tool. Nearly everything it does is possible with the common package managers.

It's not. My work project is pinned to a specific toolchain version, while my own work needs the latest nightly. That's already two versions, and not a single Linux distro I know of packages more than one at a time.
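
For what it's worth, rustup covers exactly this split with a per-project `rust-toolchain.toml` checked into each repo; a minimal sketch (the channel and component names here are just examples):

```toml
# rust-toolchain.toml at the project root; rustup selects this toolchain
# automatically whenever you run cargo/rustc inside the project.
[toolchain]
channel = "1.85.0"
components = ["clippy", "rustfmt"]
```

A sibling project can pin `channel = "nightly"` the same way, and both coexist on one machine.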

8 Likes

Yes, but you could simply automate the process of publishing Rust to the 4-6 common package managers. I can't find a single command in the rustup documentation that couldn't be a package manager feature. Different versions: no problem; different targets: no problem (multiple packages or sub-packages); and so on.

Most package manager repositories don't allow (or can't handle) a single project publishing >36,000 packages (one nightly × 10+ years × 10+ components), or even just 1,000 (nearly 100 stable releases × 10+ components). The only one I know of that actually supports most of what rustup does is Nix, via the third-party overlay https://github.com/oxalica/rust-overlay, which is derived from rustup's manifests, and it is only really feasible because Nix uses an actual lazy programming language to define the packages on demand. Even then it's limited to the last two years of nightlies.

11 Likes

And then you have Linux distros like Debian that will refuse to package anything recent and instead distribute ancient versions of Rust that are not usable for development.

12 Likes

Speak for yourself; the ecosystem doesn't make it easy, but I try to write code that will work with any Rust compiler that supports my target edition, which will continue to be 2021 until I see a really compelling reason to change that. For reference, that's MSRV 1.56.

Actually I think you're on to something. For a long time I've been talking about what I call the Highlander Principle of Package Management: There should be only one package manager on any given computer, and it should manage all the software installed on that computer.

This is almost impossible to achieve nowadays, and even in an ideal world it would be in tension with a bunch of other common requirements, but it's still, I think, something to strive for. You've hit on one of the big reasons for it: user experience is vastly improved if you only have to learn one set of package management tools.

The "store" OSes - Nix and Guix - come closest to this ideal as a side effect of their build reproducibility goals. You might wanna give one of them a try.


Trying to bring this back on-topic for IRLO, I think it would be useful for us to think hard about why Linux distributors think our release cycle is way too fast, why we think we need to bypass system package management, and why it's such a pain to write code with a low MSRV and minimal version constraints on the dependencies. All these things are connected, and there are language changes that could improve matters.

1 Like

Let chains were a killer feature for me. They made the code much more readable than 3 or 4 nested levels of if let.

The Linux kernel is on a ~9-week release cycle, which is only 50% longer than Rust's. And yet people don't complain there. And I don't think distributors think this, apart maybe from Debian. I haven't seen complaints from other distros that move at a more reasonable pace, such as Fedora or Arch, but if you have actual citations for this, by all means provide them.

LTS is a sham, as has been pointed out multiple times on this forum. It doesn't create less buggy software. On my laptops I have used Ubuntu, Debian and Arch Linux over the years (I still have to use Ubuntu LTS for work). The only time suspend/resume works perfectly is when I use Arch (a rolling release distro). The only time hibernate works? Arch, of course. And docking stations and external monitors? You guessed it: unless I run Arch I get random issues, such as having to connect two or three times before the output is recognised.

LTS distros aren't less buggy. But maybe you want the same old bugs you already know, and that is what you mean by "stable"? Odd definition, but sure. They don't provide that either, though. On my work laptop I have to boot an old kernel or I don't see the HDMI output at all (which I need; one of my external monitors is HDMI only). Ubuntu screwed up their backporting, and reporting the bug is like shouting into the void. Of course, if I boot the same laptop with the latest Arch, everything just works.

So what is left in favour of LTS? Nothing. And thus there is no reason to bend over backwards for that niche use case.

There is also another argument for a recent MSRV: shouldn't you just keep using old crates with your old compiler, or upgrade both? Who said it was reasonable to expect to eat your cake and have it too? And we have now had an MSRV-aware resolver for long enough that that argument no longer holds.

9 Likes

They are nice, but my threshold for "sufficiently compelling feature to warrant raising the required version of the language" is quite a bit higher than just syntactic convenience. It'd have to make possible something completely new. Trait specialization, for instance, that might be enough. Or giving binary crates the ability to override the orphan rule.

LTS is a sham, as has been pointed out multiple times on this forum. It doesn't create less buggy software.

"A six-week release cycle is too fast" is not about bugginess for me. It's about cognitive churn. With my application programmer hat on, I simply do not want to have to update my understanding of what Rust is more often than once every edition -- and even the edition cycle is a bit too fast for my taste. It might help you understand where I'm coming from if I say that in my opinion (a) the strongest remaining reason to keep using plain old C is the ten-year release cycle for the standard, and (b) on balance, C23 shouldn't have happened; the changes weren't worth the churn.

You may say "nothing stops you from continuing to write old-style Rust with a newer compiler" but that is simply not true. There are enough minor compatibility issues between the old compilers and the new that I wind up having to know about all the changes in each rustc since my MSRV, so I don't actually get to be like "yeah I'll maybe think about reading up on the new features when the new edition comes out". I have to pay attention to the updates continuously and that takes away from the time I have to spend working on my projects!

Similarly for people who aren't programmers. Look at Windows. People stuck with Windows XP for decades longer than Microsoft wanted, and then they did it again with Windows 7 and again with Windows 10. Not because they cared about the bugs, but because they were accustomed to how their computer worked and they didn't want that to change. To first order it was entirely about the user interface. Microsoft could probably have gotten significantly faster uptake of 7 in particular if they had made it "look and feel" exactly like Windows XP.

Yes! That's another thing. Rust is currently missing a bunch of features that would facilitate writing code that works with both old and new versions of crates and/or the stdlib. #cfg expressions for whether or not a path exists, or whether or not a trait provides a method. Mechanisms for "polyfilling" -- supplying your own implementation of a trait method if it isn't there yet. Etc.
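
For illustration, the closest thing available today is an unconditional extension trait; the missing language feature is being able to compile it only when std lacks the method. A hedged sketch (the trait and method names are invented; the semantics mirror `u64::is_multiple_of`, which older toolchains don't have):

```rust
// A manual "polyfill" as it must be written today: an extension trait that
// always supplies the method, because there is no cfg() for "does std
// already provide this?". Trait and method names are invented.
trait IsMultipleOfCompat {
    fn is_multiple_of_compat(self, rhs: u64) -> bool;
}

impl IsMultipleOfCompat for u64 {
    fn is_multiple_of_compat(self, rhs: u64) -> bool {
        match rhs {
            // Zero is a multiple only of zero; this also avoids division by zero.
            0 => self == 0,
            _ => self % rhs == 0,
        }
    }
}
```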

I think one thing to keep in mind is that not everyone writes the same kind of code as others, so anecdotes really don't go far here.

A key difference here is that there are alternate release series to follow rather than "the latest", so I don't think this is really applicable. Firefox or Chrome would be more analogous, but even Firefox has an LTS series (and Chrome gets a lot of flak from package maintainers).

I don't have those kinds of patterns in my code; I've yet to use let chains at all.

Not everyone is running Linux on laptops where the latest typically is best. It's more about "we vetted this for our use case and redeployment means X pages of paperwork and Y weeks of calendar time to manage"

In CI, I test mindeps by using nightly to generate a -Zminimal-versions lockfile and then build with the intended version. cargo audit on this lockfile does tend to creep the version requirement up though. I'm not yet using MSRV-based resolutions because it has not yet been populated on all of the old versions that are supported. Said another way: MSRV can help, but only once your minimum versions are at least that which provides MSRV support and metadata in the first place.
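
As a rough sketch of that job (GitHub Actions syntax assumed; the runner, action version, and MSRV value are all placeholders, not taken from a real project):

```yaml
mindeps:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # Nightly is only used to generate the minimal-versions lockfile...
    - run: cargo +nightly -Zminimal-versions generate-lockfile
    # ...the check itself runs on the intended (minimum supported) toolchain.
    - run: cargo +1.70.0 check --locked
    # Audit the pinned minimal versions; advisories tend to force bumps.
    - run: cargo audit
```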

1 Like

So does that also apply to the 9-week cycle of the Linux kernel then? Or where exactly is the limit?

I disagree that this is a problem; I don't like waiting for nice things that are ready. It also reduces the stress on contributors. Fail to get something into C++26? You have to wait 3 years. Fail to get something into Rust 1.85? You can have another go in just 6 weeks. Rust should have yearly (but smaller) editions instead; that would reduce the stress for contributors there too.

Reading a typical release blog post for a Rust release takes 2-5 minutes, once every 6 weeks (typically on the lower end). The editions take a bit more, sure. But that is just every 3 years.

Vista was slow on contemporary hardware, and 7 was quite close to a polished Vista but the hardware had caught up. That is why Vista failed. 7 was a success, but cost money, and most didn't want to pay that (unless they were getting it anyway with a new computer). 8 was a UI disaster (and cost money) and 10 started showing ads and spying on you (11 made that worse). 10 and 11 were free to upgrade, which helped a bit (but again old hardware and ads...). So I disagree with your diagnosis.

Lock files make this a non-issue. You write your application against the old versions of the crates you depend on, and are unaffected by the fact that they have moved on. If you are not happy with those versions, upgrade the whole thing. As I said: you can't eat your cake and have it too.

1 Like

In my experience it is far easier to stay on top with small, frequent releases than to upgrade every few years. Perhaps you don't need to go to rolling release, but upgrading twice per year should be reasonable.

Do you mean in the MSRV resolver in cargo? That was stabilised in 1.84 which is almost exactly a year ago (minus a few days). In crates the rust-version field has been around a lot longer. Even Debian stable is at 1.85 at this point.
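
For reference, the field lives in each crate's manifest; a minimal example (the name and version numbers are placeholders):

```toml
[package]
name = "example-crate"    # placeholder
version = "0.1.0"
edition = "2021"
# The declared MSRV; the MSRV-aware resolver (stable since cargo 1.84)
# prefers dependency versions whose own rust-version fits under this.
rust-version = "1.70"
```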

2 Likes

My recommendation, if you want to deploy to legacy Linux systems, is to build a static binary using the musl target in CI and just deploy the binary to wherever. This is what I do for my open source projects as well; that way anyone can run the binary if they don't want to build it themselves.

2 Likes

It applies to everything, but only to the extent that updating might cause me to have to drop what I wanted to do with the computer and instead learn about something that changed.

The Linux kernel, seen from the outside, is generally quite reliable about not breaking things that used to work. I can almost always count on slotting in a new kernel with no human-visible effect. If there are new features exposed to user space, I can learn about them when I choose to.

Another good example is Emacs. Each new version of Emacs has tons of new features and under-the-hood changes, but I can generally count on an update not breaking my init file or shoving anything in my face when I wanted to get some work done. I can learn about the new features on my schedule, if I care.

Web browsers are an interesting middle ground. I use Firefox ESR, not Firefox rapid release, specifically because I want the UI to change as infrequently as possible. If there was a Firefox "web platform only" rapid release, with a pledge that UI changes would happen very rarely and would always default off, I'd probably use that instead.

Well, that's a pretty fundamental disagreement we have here. I'm not sure this conversation can go any further as long as you aren't willing to consider that you might be wrong, which is the impression I'm getting from the rest of your message. For example...

Reading a typical release blog post for a Rust release takes 2-5 minutes, once every 6 weeks (typically on the lower end)

That's nice for you but I need you to acknowledge that you are an outlier here. I can read it that fast, sure. I can't absorb the implications of the changes and adapt my code to them anywhere near as fast. My experience has been that even when I update the toolchain as quickly as possible, it takes me at least a day to validate each of my projects against the new toolchain. That can easily consume all the time I have for non-paid programming work[1] for an entire week, or more - and that's a hefty chunk of the release cycle!

And when something does break, it can be extremely demoralizing. Consider the issue I reported here: No longer possible to exercise system call failure due to EBADF. A minor change, in the grand scheme of things. Abstractly, a good change. Yet it completely destroyed my motivation to work on that project. I haven't touched it since.

I respect the issue of contributor stress and frustration from delays shipping new features, but you have to weigh that against the much larger population of, er, end developers that a rapid release cycle pushes these costs onto.

You must be talking to a completely different population than I am. The only things I have ever heard from people about why they don't want to move to a newer version of Windows are variations on these three themes:

  • I tried it and it broke application XYZ
  • I tried it in the store but they changed everything around for no reason
  • I'd have to buy a new computer and the one I have works fine

For Vista I recall this being about a 60/30/10 ratio, but Vista was 20 years ago so don't quote me on that. Going from 10 to 11, it's more like a 75/25 split of "I read they added a bunch of stupid shit I don't want" and "I'd have to buy a new computer" and I have the impression that the "have to buy a new computer" part is >90% about the TPM requirement.

I really think you are grossly underestimating most people's -- including most developers' -- aversion to change.

Lock files should not exist (as part of the source code). They are a misfeature. One of the reasons they are a misfeature is that they give you, specifically, an excuse to say that these missing language features are unnecessary. :stuck_out_tongue:


  1. as of this writing, nobody is paying me to work on anything in or related to Rust ↩︎

This is true of rustc too. You don't have to use the new features, and old code rarely breaks. You might need to add an explicit type sometimes if inference becomes ambiguous, but that can happen from updating normal dependencies too.
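
A synthetic example of the kind of one-line fix meant here (the helper name is invented):

```rust
// Without the explicit `f64`, `parse` has no way to pick a target type
// ("type annotations needed"); new upstream trait impls can push a
// previously inferable call into the same ambiguity. The fix is one
// annotation:
fn parse_ratio(s: &str) -> f64 {
    let x: f64 = s.parse().unwrap();
    x
}
```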

Hm, my end-to-end tests take far less time than that to run, and if you don't have good automated end-to-end tests, changing anything is a game of chance.

That is an interesting case! I would argue that constructing a File pointing to an invalid FD already broke an invariant in std before; it just wasn't detected. I'm not sure whether that invariant was documented, though; if not, it's a case of Hyrum's Law. It sucks to be on the receiving end of it, but never being able to change upstream also sucks, so a balance must be struck. This is one reason Rust runs crater, and had your code been public, this would have been caught. (There is obviously no way crater can see non-public code, unfortunately.)

Probably! But who knows what is representative. The "can't upgrade, it breaks my workflow" crowd seems like a minority to me (old CNC machine controllers in industry, for example). Most people I know who aren't computer-savvy wouldn't even know the upgrade exists until they need to buy a new computer. And at work the upgrade schedule is handled by the IT department.

They are necessary for reproducible builds. And having them checked in makes going back and testing old versions (for git bisect for example) reproducible. So they absolutely need to exist.

5 Likes

I think this is a great example of churn not being universally bad. I can understand the frustration when this happens from the perspective of a developer, but from the perspective of an end user I am more trusting of software and its dependencies if I know that technical debt is not accruing.

I'm not sure what the ideal tradeoff is between churn-for-me and churn-for-thee, and Rust may not be Pareto-optimal here, but the decision to use a 6-week release cadence and a 3-year edition cadence was not made in a vacuum. It was a consensus between prominent early developers in an industry setting.

3 Likes

I think the Rust project should be publishing official packages for major Linux distros, simply because that's what users expect. curl | sh has an awful reputation, and Rust can't change that (after a decade of trying, it's still contentious).

However, rustup is fine. It's helpful to have a way to install additional targets, components, and updates in the same way across operating systems, including Windows and macOS, which don't have first-party package managers.

4 Likes

If I’m reading this page correctly, then rustup is already packaged for many popular Linux distros. Maybe the main thing is just changing the docs to recommend getting rustup via your package manager if possible?

7 Likes

The HPC systems I'm familiar with cost multiple-digit millions of dollars, have full-time staff to keep them running, and run verification suites that are expensive (in power consumption) to confirm they're working properly. Shutting them down every 6 months for upgrades (which also means doing any upgrade testing "in production", because you don't have two of them) is not worth it: you also have to schedule the downtime against tasks that take days to weeks to run, meaning you're actually poisoning utilization for a significant window beforehand as well. They pay $$$ to vendors to make sure that updates are as incremental as possible, with SLA terms guaranteeing technicians are on-site (potentially with security clearances!) pronto if things do go sideways.

You really try not to rock the boat more than necessary in such situations and "riding the upstream wave" is way too risky.

But I totally get it for personal machines; I ran Fedora Rawhide as my main machine for years.

I would say the same applies to Rust. It is very rare that existing code breaks after updating Rust, so you don't need to learn or do anything on a toolchain update. You can start using the new features at your own pace.

3 Likes

Speak for yourself

I am speaking for the broader ecosystem. If you write code that's compatible with these old compilers, you're the exception. The latest versions of many popular crates do not support them.

4 Likes