Just came across Russ Cox's article, which goes into great detail about the rationale for creating Go modules. It describes many scenarios in which the former "de-facto" package manager, Dep, does not work well. I wonder whether Cargo will face the same issues as Dep; quoting:
I’ve been using Dep in these examples because it is the immediate predecessor of Go modules, but I don’t mean to single out Dep. In this respect, it works the same way as nearly every other package manager in every other language. They all have this problem.
The answer is that we learned from Dep that the general Bundler/Cargo/Dep approach includes some decisions that make software engineering more complex and more challenging.
The developers of Dep (and Bundler and Cargo and ...)
The solution offered by package managers like Bundler, Cargo, and Dep...
Note that both the scheme in this article and Cargo are addressing the same problem; the article is simply arguing that its solution is better. However, it doesn't remove the problem, it shifts it around. My first impression is that it makes the life of the package manager easier, but mostly by shifting the problem onto module (i.e. library) authors.
Perhaps this is better, perhaps it's not. Either way it doesn't eliminate the problem. Somebody still has to deal with it.
Part of what I do professionally is a lot of builds of both Go and Rust applications. So I was quite surprised to see Russ suggest the following:
Software engineering is what happens to programming
when you add time and other programmers.
Programming means getting a program working. You have a problem to solve, you write some Go code, you run it, you get your answer, you’re done. That’s programming, and that’s difficult enough by itself.
But what if that code has to keep working, day after day? What if five other programmers need to work on the code too? What if the code must adapt gracefully as requirements change?
While I consider Go modules a rather big improvement on earlier solutions (including dep, the older godep, and, for lack of a better term, "nothing"), I think the following still hold true:
The recipe for a build host/container/image for Go is still much more complex than for Rust
Spanning the gap in capabilities between something simplistic like Go modules and what Cargo can provide usually still involves an additional step for large/complex Go programs, e.g. a Makefile and fetching prerequisites
Once configured, Go build recipes are much more likely to fail in the future
Cargo got a number of decisions right, which makes it much more convenient to use and ensures that once a build is working, it will not break in the future; reproducing that build more or less from scratch is also significantly easier, IMO.
Here's one such decision, quoting Russ:
As another example, Go import paths are URLs. If code imported "uuid", you’d have to ask which uuid package. Searching for uuid on pkg.go.dev turns up dozens of packages with that name. If instead the code imports "github.com/google/uuid", now it’s clear which package we mean. Using URLs avoids ambiguity and also reuses an existing mechanism for giving out names, making it simpler and easier to coordinate with other programmers.
...but this decision has the downside that if Google were ever to delete their UUID library, your code would break. This can't happen with Cargo: once published, packages on crates.io are effectively immutable.
I admire Russ's goal of not wasting other developers' time, and having empirically experienced quite a bit of what he was trying to avoid, I do consider modules an improvement over what Go had before. Still, I think they have a long way to go, and that the impact on wasted time/effort of the current set of tradeoffs hasn't been fully evaluated. Like many things in Go, they pride themselves on going in a different direction, but it isn't necessarily a better one.
Most of the time, at least for me, Cargo "just works", and Go modules are nowhere close to that level of user experience.
The only major difference to me is that Go picks minimal versions, and Cargo/npm/etc. pick maximal versions. I don't think either strategy is clearly better. They have their own trade-offs (needing locks to prevent unwanted minor updates vs needing to bump versions to get minor updates).
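The gap between the two strategies is small enough to sketch. Below is a toy version of minimal version selection, with versions reduced to plain integers and a hypothetical module name; it is not the real `go` tool algorithm, just the core idea:

```go
package main

import "fmt"

// minimalVersionSelection sketches Go's MVS: every requirement states the
// *minimum* version a dependent needs, and the build picks, per module, the
// highest stated minimum -- never the newest release on the registry.
func minimalVersionSelection(requirements map[string][]int) map[string]int {
	selected := make(map[string]int)
	for module, mins := range requirements {
		for _, v := range mins {
			if v > selected[module] {
				selected[module] = v
			}
		}
	}
	return selected
}

func main() {
	// Hypothetical graph: two dependents require uuid >= 2 and uuid >= 5.
	// Even if uuid 9 has been published, MVS settles on 5, with no lockfile.
	picked := minimalVersionSelection(map[string][]int{"uuid": {2, 5}})
	fmt.Println(picked["uuid"]) // prints 5
}
```

A maximal-selection resolver would instead pick 9 here, which is exactly why it needs a lockfile to keep builds repeatable.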
In Cargo's case minimal versions don't work for a rather banal reason: crates.io allowed `*` version requirements for a while, older versions of some popular crates still use them, and those end up resolving to crates that are older than Rust 1.0.
Other than that I find that article weird. It talks a lot about breakage, diamond dependency problems, and Go's solutions to these. It implies that these solutions are somehow unique and superior, but I don't see it. To me almost everything Go does is functionally similar to what other package managers do. Apart from minimal versions, Go's solutions don't seem very different; they're just the same thing adapted to Go's situation.
For example, Go proudly avoids semver and having incompatible versions of packages… by putting the semver-major version in the import path. Instead of uuid v2.1.6, Go uses uuid/v2 1.6. Having both uuid and uuid/v2 in Go behaves the same as having uuid 1.x and uuid 2.x in Cargo.
I am also under the impression this is the only significant design difference.
On that subject, aturon had a great blog post last year about why there's no clearly decisive argument for choosing either approach, and why Cargo also opts for maximal versions:
I basically agree with kornel that every other alleged design difference in that post is either not an actual difference or not significant in practice. But I would like to add that many of the objections raised and rebutted in the post come off as strawmen, not representative of "nearly every other package manager". The rest of this is just me expanding on that point-by-point.
I do think the argument that major version numbers should go in the package name rather than the package version (what Russ's post calls "semantic import versioning") is actually quite strong, because a new major version is effectively a completely new package in some ways, and you shouldn't be changing major versions often. But that's a social engineering decision, not a technical design decision. Since Cargo and npm and vgo let you have multiple versions of a single package coexist in a single build's dependency tree, their behavior is pretty much identical in practice, and everyone seems to agree that robust, modern package managers simply need to support multiple versions of the same package in a build regardless. Russ's post talks about semver making it "too easy" to increment major versions, but this seems purely speculative, and certainly in practice it hasn't been an issue in the Rust ecosystem (in fact, we mostly see complaints about crates never publishing a v1!).
The import compatibility rule seems more situational to me. For a language as proudly simple as Go, that rule is almost obviously correct. For a language like Rust whose goals effectively demanded complex generics and trait systems, although it's obviously still not something you should do frivolously, in certain cases the benefits far outweigh the downsides and things like the dtolnay semver trick are strongly encouraged.
And now that I'm actively thinking about it again, it seems weird to me that Go opts for minimal version selection when Google has advocated for a "Live at Head" strategy to dependency management (which is basically taking "maximum version selection" to the point of maybe not even bothering with explicit version numbers at all), and even links to that exact cppcon talk in the vgo blog post arguing for minimum version selection.
The argument from lockfiles feels downright confused to me. Yes, minimum version selection gets you repeatable builds without lockfiles. Yes, lockfiles have non-zero complexity. But that doesn't mean the universality of lockfiles is a tacit admission that everyone really wanted minimum version selection all along. It seems far more natural to say that the universality of lockfiles shows that everyone wants both repeatable builds and up-to-date dependencies, but there's a fundamental tension between those two goals (as aturon's post goes into).
And finally the "cooperation" argument. I agree that in this hypothetical (and often very real) scenario, the authors of C and D simply need to work together to figure out the fix. I don't think anyone disagrees with that. But the post strongly implies that other package managers somehow make this scenario worse, without ever really spelling out why. It seems to be implying that minimal version selection prevents the incompatibility from affecting as many users, and therefore reduces pressure on C's and D's authors to rush a fix. While that certainly could happen, it's also entirely possible that C's and D's authors would never notice the incompatibility at all, or may dramatically underestimate its severity, if none of their users were regularly upgrading to maximal versions. Plus, if the package manager has a lockfile, all of the users that notice the incompatibility can simply keep using the lockfile until the issue is fixed. It's not a simple "vgo helps, other package managers hurt". It's a complicated tradeoff, which (once again) aturon's post covers much more fairly.
At the most basic level, a package management system is just a way to help people download the libraries they need to build their program. But a higher goal of many package managers, including Cargo, is to help people receive compatible updates automatically, which can include important performance and security improvements, without breaking their build. These two aims are inherently in tension: sometimes updates will break builds by mistake.
Cargo's attempt to find the highest point on the curve is maximal version selection with lockfiles, plus the convention of not committing the lockfile for libraries, so their CI resolves fresh versions. That way libraries are rapidly tested for compatibility with new versions of their dependencies (so potential incompatibilities between libraries are found quickly), while end users only receive upgrades to their dependencies when they ask for them (so the built binaries they deploy don't break). This point may not actually be the maximum utility, but I think it's a good effort.
In contrast, Go resolves the tension by declaring that getting people the most recent compatible versions of their dependencies possible is just not one of their goals. That's what the claim that "we all must work together" is all about: getting new updates is the users' problem.
This solution resolves the trade-off completely by just choosing one side of it, but I don't think it's the point of maximal utility for users. I'm also dubious of the way vague aphorisms are used to anchor the justification of this position, as if high-minded language were a cover for the reality that they resolved the tension by reducing the scope of their goals relative to other package managers.