Aren't we conflating binary compatibility and source compatibility here?
Most Linux package management schemes are binary distributions, Emacs is an interpreter, and the Linux kernel holds itself to strict compatibility in its system call interface.
Rust has never guaranteed a stable ABI across compiler versions, so bringing Rust to a Linux binary package management scheme would require a recompile-the-world step when upgrading the compiler.
I don't feel like either Emacs or the kernel suffers from the same binary compatibility problems. Though I don't know much about Emacs; maybe people are distributing the bytecode formats? Anyhow, I feel like we might not be making an exact comparison here...
Rust doesn't do dynamic linking with the Rust ABI (technically you can, but then you need everything built by the exact same compiler, from the exact same build), so that is not a real problem. Even Debian isn't crazy enough to try to dynamically link Rust code at a system level.
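For context, the "technically you can" part is just a crate-type setting; a sketch of what that looks like in a Cargo.toml (and why distros avoid it):

```toml
[lib]
# "dylib" produces a Rust-ABI shared library. The resulting .so is only
# usable by binaries built with the exact same rustc (symbol mangling and
# layout can change between compiler versions), which is why no distro
# dynamically links Rust code this way at a system level.
crate-type = ["dylib"]

# "cdylib" instead exposes only a C-ABI surface, which IS stable -- this is
# the usual escape hatch for plugins and FFI.
# crate-type = ["cdylib"]
```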
While it might be a feature people want, it is not a feature people use today, and as such it can't lead to compatibility issues.
Hence the fact that Rust packages stick out like a sore thumb? Linux packaging has been basically predicated on dynamic linking and binary distribution since the ELF transition.
In practice all major distros already recompile all libraries when building a Rust executable, so updating rustc is no more of a recompile-the-world step than, say, updating the libc crate.
Haskell also works that way, so no. And with C++ it is a crapshoot: half the functions will be inlined into your application since they are templated, the other half won't. And many C++ libraries are header-only these days.
So no, Rust is not the exception. C is. (Scripting languages work differently, they basically "link" every file at runtime, so those aren't applicable here.)
I don't get where we disagree, really... I'm not arguing that other languages don't also suck with Linux packaging, just that it is historically designed around C, and everything else requires an exercise in contortion to make fit.
I don't feel like "compatibility" is in any way why people use Linux packaging schemes. It's because they don't want to support a build environment that isn't the exact same as the one they use themselves, and want to be able to replicate remote problems locally without having to recreate the remote build environment. That is largely a social problem, not a technical one.
I really don't understand what your side of the argument is. Are you saying that Rust packaging on Linux distributions is fine and doesn't need to change at all, while I'm saying that to make any kind of remarkable improvement you need to follow C's rules?
Maybe what I should ask is: what attainable measures would make any improvement to the situation, or do we just argue for the sake of it?
I'm most familiar with Arch Linux packaging for Rust, having made AUR packages for several of my own Rust programs. From that point of view I would say it works fine. The packaging guidelines for Rust lean into using the upstream lockfile and pulling sources from crates.io. This makes the build reproducible (at least in theory) thanks to the lock file.
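To make that concrete, here is a sketch of the relevant parts of an Arch-style PKGBUILD for a Rust crate (package names and paths hypothetical); the key point is that cargo is told to honor the upstream Cargo.lock exactly:

```shell
# Sketch of the Rust-relevant PKGBUILD functions, per the Arch Rust
# packaging guidelines. $pkgname/$pkgver are standard PKGBUILD variables.
prepare() {
  cd "$pkgname-$pkgver"
  # Download exactly the dependency versions pinned in Cargo.lock
  cargo fetch --locked --target "$(rustc -vV | sed -n 's/host: //p')"
}

build() {
  cd "$pkgname-$pkgver"
  # --frozen: fail the build rather than update Cargo.lock or hit the network
  cargo build --frozen --release
}
```

The `--locked`/`--frozen` flags are what make the lockfile the source of truth, which is where the (theoretical) reproducibility comes from.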
Debian's approach seems to be to create more work for themselves for no apparent gain, packaging rust library sources individually, and trying to only have one version of each library (so messing with the lockfiles).
You could reasonably argue that Debian's policy of mirroring all the sources themselves wouldn't work with the Arch approach. However, it wouldn't be too hard to just mirror all the relevant sources, similar to cargo vendor, but still follow the lockfile and not package each library separately.
So yeah, I think the problem is wildly overblown. There are other, better reasons why dynamic linking would be nice to have (such as faster incremental builds during the development cycle, not having to drop to a C API for plugins, etc.).
There are also extremely good reasons for not having a stable ABI by default: it allows gradual improvements, both in the ABI itself and for libraries. Because generics are instantiated where they are used, it is easy for a new version of a library to be API compatible but not ABI compatible. This has been a major issue in improving performance and functionality in C++ standard library implementations. For example, this is why std regex in C++ is a joke when it comes to performance in all common implementations: it can't be fixed without breaking ABI. And this is why the calling convention of unique_ptr is suboptimal.
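A minimal sketch of the "instantiated where they are used" point (the function here is made up for illustration):

```rust
// Imagine this generic function lives in a library crate. Because it is
// monomorphized, the body below is compiled directly into every downstream
// crate that calls it, not shipped as code in the library's own binary.
pub fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut best = *items.first()?;
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    Some(best)
}

fn main() {
    // This call stamps out largest::<i32> inside *this* crate's object code.
    // If the library later rewrites the body (same signature, faster code),
    // already-compiled callers keep the old instantiation: API compatible,
    // but not something a stable ABI could paper over without a rebuild.
    assert_eq!(largest(&[3, 7, 2]), Some(7));
    println!("largest of [3, 7, 2] is {:?}", largest(&[3, 7, 2]));
}
```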
On the whole I prefer not having a stable ABI, giving more freedom to improve rustc and std, and more freedom to library authors. A stable ABI as an opt-in (as a repr, perhaps) for plugin APIs could be useful though, perhaps versioned, so you could have stable-ABI-2026 and then a new one if needed a couple of years later.
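The closest thing to that opt-in today is the C ABI; a sketch of a plugin boundary built on it (the names `PluginVTable` and `plugin_entry` are hypothetical, not a real framework):

```rust
// repr(C) pins the struct layout and extern "C" pins the calling
// convention, so this boundary stays stable across rustc versions --
// unlike the default Rust ABI.
#[repr(C)]
pub struct PluginVTable {
    pub abi_version: u32,
    pub add: extern "C" fn(i32, i32) -> i32,
}

extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// In a real plugin this would be exported with #[no_mangle] from a cdylib
// and looked up by the host via dlopen/dlsym.
pub extern "C" fn plugin_entry() -> PluginVTable {
    PluginVTable { abi_version: 1, add }
}

fn main() {
    let plugin = plugin_entry();
    // The host checks the version tag before trusting the layout.
    assert_eq!(plugin.abi_version, 1);
    assert_eq!((plugin.add)(2, 3), 5);
    println!("plugin call ok");
}
```

A hypothetical `#[repr(stable_abi_2026)]` would essentially be this, but without giving up Rust-native types at the boundary.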
One difference is that you are presenting the idea that Rust "doesn't work like C" as "Rust sucks with Linux packaging". Another point of view is that expecting things to work like C is a bug in Linux packaging.
Fortunately, distributions have worked extensively to introduce the infrastructure to rebuild packages when their dependencies change, and that infrastructure works well for Rust.
While there are use cases for dynamic linking, and by all means we should continue working on stable ABIs for subsets of Rust for other reasons, I think for the purposes of Linux distributions the best solution is largely what's already in place.
Sorry if I'm generally not very diplomatic with my words.
My point is largely that (assuming we aren't just going to pull a stable ABI out of a hat) both of these situations are pretty much out of Rust's control.
Fixing the Linux packaging ecosystem to be better for non-C languages entails no changes to Rust's packaging. So if Rust isn't adopting C ABI semantics anytime soon, I don't see what changes are actually being proposed by all this disagreement.
What I'm searching for here is the non-extreme measure (the extreme measure being ABI stability) that improves the situation. I don't really care which of the Rust or Linux packaging ecosystems is at fault if the end result is largely unchanged.
This is exactly the kind of point I was trying to make: one side of this argument is arguing for the status quo, and from my perspective the only improvement the Rust project could make requires extreme changes like ABI stability.
From my perspective, given the unlikeliness of ABI stability, this is like arguing between doing nothing and doing nothing. So I don't see why we should continue bleeding words over it. I hope that clarifies my perspective.
Edit:
Maybe a better way to explain my perspective: I see people saying that there is a problem, and others arguing for the status quo. I just wanted to ask, "Assuming they are right and there is a problem, what can we as a project actually do about it?", but as usual I didn't frame it very well.
You might still get a rather old version of rustup depending on your distro.
You must consider the difficulty of packaging rustup in addition to the complexity of rustup itself. That page demonstrates this quite well: we don't end up with a unified UX because different packagers have their own interpretation of good UX (the Debian package skips the installer completely, along with its instructions, and some users have complained about poor guidance in our issue tracker), and packages may even include problematic patches that emit error messages unknown to us (the NixOS build).
Either way, recommending the use of package managers would definitely fracture the ecosystem, which is not what we'd want to see. The same is true for Rust itself, and that's why rustup exists, right?
Leaving the unusual patched versions aside, for versions that are reasonably normal, like the Debian one, it's really nice to have a root of trust through your distribution, and to not have an unusual installation procedure. Installing rustup via apt, and then installing rust via rustup, seems to Just Work every time I've tried it.
I think we should be cautious about recommending distributions, but still willing to do so as a recommended procedure for obtaining rustup. If there are distributions that are breaking rustup for users, we should try to get them to fix that.
NixOS has to patch rustup because rustup then needs to patch rustc on install; CentOS-targeted binaries don't just happen to work on a NixOS system like they do on other distributions. That's one of the reasons I personally don't recommend using rustup on NixOS; it has other ways to obtain the officially distributed channels that integrate the necessary patching in a better fashion (including rust-toolchain.toml support).
Sure, that's fine. I'm more saying that we could tell people on the most common distributions that they could apt install rustup or yum install rustup or similar.
Oh, that is nice, I didn't know that would work. I must admit that the official Rust installation instructions (execute a script found on the internet) were close to a deal breaker when I first tested Rust a few years ago, due to the security implications.
The difference is that when you install a package from the OS package manager, there are more security guarantees: it has been curated by the maintainers of the distribution and is signed with the official package repository key.
When you download a script or a binary from the internet, the risk of executing something malicious is higher. Once you already know the Rust ecosystem, you know which sources to trust (websites or GitHub repositories), but by definition, when you want to try Rust for the first time, you don't. It takes some research to confirm that the script you are about to run does not contain anything malicious.
So in the end, if there is a solution, even an imperfect one, by which it is possible to use a native package manager to keep Rust up to date, it is good to know about it, and it would also be a good idea to promote that solution as a first-class way to install Rust.
Leaving the unusual patched versions aside, for versions that are reasonably normal, like the Debian one, it's really nice to have a root of trust through your distribution
I agree with your "single root of trust" argument; I also use brew-hosted rustup on my own machine for dogfooding the bleeding-edge builds.
Still, considering rustup's current status as the "main gate", I'm voicing this opinion especially for newcomers who don't necessarily know how rustup is supposed to work, or why the version installed from a possibly outdated apt repository doesn't work exactly as shown in the online docs (I've seen quite a few issues about that!).
To me, this distro-specific situation is a lot like the Java case on Arch Linux, where you have a pretty good tool to handle it (archlinux-java) but have to look for other solutions outside of it:
The /usr/lib/jvm/default and /usr/lib/jvm/default-runtime symbolic links should always be edited with archlinux-java [..]
For people like you and me who are already past this period and are aware of the possible nuances, then sure, why not. Maybe we can put that alternative method in the sidenotes of the official website page or whatnot.
This is really a Debian (and other LTS distros) issue though. pacman -S rustup on Arch is rarely more than a week behind if even that, and that is in fact what I use for Rust on my own computers.
But we have already been through the failings of LTS distros in this thread. No need for yet another go-around.