Rust's complexity + Rust's stability policy = ?

I would like to express my concerns that Rust's stability policy regarding the standard library might have a really bad long-term impact. Why is that?

  • Even if the standard library is quite small, Rust itself is a complex language and the standard library is also pretty complex.
  • ⟹ Errors will happen.
  • ⟹ Errors will not easily or cannot be fixed.
  • ⟹ Rust (that is Rust + std) is doomed to stay flawed in the mid or long term.

This happened previously to Haskell (see Functor Applicative Monad proposal) and it's also an issue in Unicode (see NormalizationCorrections).

I (personally) don't like to use a flawed language, even if some people may see stability as the (from a financial/economical p.o.v.?) more important goal.

To name a particular issue regarding Rust, let's look at #98905.

We have an implementation in std that is:

impl<T: ?Sized + ToOwned> AsRef<T> for Cow<'_, T> {
    fn as_ref(&self) -> &T {
        self
    }
}

This seems natural, handy, and nobody would suspect an error here. But unfortunately this is (arguably) semantically wrong (as we also do not have impl<T: ?Sized> AsRef<T> for T or impl<T: ?Sized> AsRef<T> for &T). Moreover, it prohibits adding:

impl<T, U> AsRef<U> for Cow<'_, T>
where
    T: ?Sized + ToOwned + AsRef<U>,
    U: ?Sized,
{
    fn as_ref(&self) -> &U {
        T::as_ref(self)
    }
}

(edit: fixed proposed implementation)

Which would be akin to:

// As lifts over &
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_convert", issue = "88674")]
impl<T: ?Sized, U: ?Sized> const AsRef<U> for &T
where
    T: ~const AsRef<U>,
{
    fn as_ref(&self) -> &U {
        <T as AsRef<U>>::as_ref(*self)
    }
}

But we cannot add impl<T: ?Sized + ToOwned + AsRef<U>, U: ?Sized> AsRef<U> for Cow<'_, T> (which might be already breaking by itself) without removing impl<T: ?Sized + ToOwned> AsRef<T> for Cow<'_, T> (which is definitely breaking).

I would like to bring up this issue just as an example. Maybe this particular example can be solved somehow (e.g. with specialization or other future extensions of the language). But the underlying problem persists:

Rust's complexity + Rust's stability policy = ?

What if the fear of migration is a bogeyman (Schreckgespenst) that might lead to more collateral damage in the end than if errors in std were simply fixed? I guess I have too little experience to judge, but I love Rust enough :heart: to hope for a language that isn't flawed in the end.

Edit: I would like to apologize for my wording here. It maybe was unnecessarily provoking (deliberately taking a strong counterposition to what I often heard when I mentioned flaws in std), and it could have been expressed more factually. While I still believe there is a structural problem, I did not want to imply that backwards compatibility has no or little value and should just be given up, or that people with a different stance wouldn't love Rust.

Note that for breaking changes in the language itself, we have Editions, but apparently this doesn't help us with std easily.


We've been working to find ways to apply Rust's edition mechanism to std as well, for exactly this reason. There are things we'd like to fix, and while we have to preserve compatibility with old versions, we could deprecate functions, add new functions, and (in an edition) make deprecated functions unavailable, give new functions better names, and give functions more general types.


The important point to consider is how bad the problem is. One option we have is to simply not fix minor issues, because their total impact on the ecosystem is small.

There are a bunch of warts in std (binary heap being max, ranges not being Copy, former AsciiExt methods taking a needless &, the poisoning/unwind safety story, retain taking &T, Context being Send, etc.). The vast majority of those strike me as inconsequential in terms of actual impact.


But this is a function of time. The total damage caused by a bug is (arguably) an integral over time. Bugs that are never fixed and have a non-zero impact per time (formally) have a damage of ∞ (for time → ∞).[1]

Of course, that's just a model and I admit it doesn't reflect reality. Rust won't live forever, and we might find ways to fix unfixable issues in the future (e.g. by extending the Edition concept to std), but my point is: when judging "how bad is the problem", we should also consider how long the problem will have a bad impact on people and affect their efficiency.

  1. Not always true if the impact per time isn't constant: for example, the integral ∫₀^∞ 1/(x+1)^(1+1/c) dx converges to c if the real part of c is positive. But that's not the point here. :sweat_smile: ↩︎
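Worked out, for concreteness (assuming real c > 0):

```latex
\int_0^{\infty} \frac{dx}{(x+1)^{1+1/c}}
  = \Bigl[\, -c\,(x+1)^{-1/c} \,\Bigr]_0^{\infty}
  = 0 - (-c) = c \qquad (c > 0)
```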

That is irrelevant. Backwards compatibility issues can kill the ecosystem. If I come back to the project in 3 years and nothing builds due to libstd breakage, I'll very likely just move on to some more stable language, rather than churn the fixes that will need to be reapplied in another 3 years.

Python lost a lot of mindshare to Go during its 2-to-3 transition fiasco. A major reason why more people don't use Haskell is its frequent backwards-compatibility breakages, at the language, prelude, and package levels. On the other hand, Java, JavaScript, and C++ are still going strong, despite a baggage of past mistakes. C++, in particular, is riddled with old bad decisions, but sticks with them despite all the pain.

A major selling point for Rust is that I can confidently say that whatever built on 1.0, will still build today (with a few quite insignificant exceptions).


I feel #98905 is a molehill-into-a-mountain topic, and I hesitated to add to that by posting this, but the topic was presented as concern over the wider issue of stability, which I do care about quite a bit.

So even though it was brought up as an example, I would love to hear your take as a library team member on proposals like that in #98905 more generally. Namely,

  • Is it the case that an implementation of a trait will never be removed from std?
  • If not,
    • What is the bar for making such a breaking change?
    • How does that gel with the guarantee of RFC 3085 to not split the ecosystem, and with Rust's backwards compatibility guarantees more generally?

I wish I didn't even have to ask this, but I've seen too many instances of [1] breaking changes [2] and RFC reversals at this point not to. I don't think the example in particular is likely to go anywhere [3], but would love some more clarity on what direction Rust is heading with regards to stability more generally.

As is readily apparent from the above, I am one of those people who values backwards compatibility -- or more generally stability -- more than achieving perfection in the language. It is true that every break and churn presents a new challenge when advocating for Rust in a professional (practical) setting. However, economics is not why I value stability so highly. I feel the way I do because I too :heart: Rust and want it to continue on its path of success; to not only be a language I'm still using decades from now, but to be a common one which has supplanted others, and is considered a good and reliable choice. And in my opinion, stability is a vital quality necessary for this to happen, not a bogeyman.

Moreover, I don't believe in an unflawed language (or standard library) [4] any more than I believe in a complete and consistent mathematical system [5]. We will never reach perfection. [6] I do not like the existing warts [7], but I will take them over breakage. There are even "features" and warts that bug me enough that I sometimes daydream of a Rust 2.0 [8], but as I cannot imagine Rust surviving such a split today any more than it could survive instability, a dream is all it is. My desire for Rust's success far outweighs the annoyances, and thus we live with the warts. [9] I would rather the minor problems, and Rust itself, live forever [10] than for both to go away [11].

Being unable to rely on Rust being stable was also a problem -- one that was supposed to end with 1.0.0.

  1. or advocates for ↩︎

  2. as per RFC 1105 ↩︎

  3. or at least nowhere as extreme as proposed so far ↩︎

  4. that is also powerful enough to be useful ↩︎

  5. that can perform arithmetic or more ↩︎

  6. Higher-ranked inference is apparently undecidable, as an example for those who prefer concrete reasoning. ↩︎

  7. and have my own laundry-list of things I wish were different ↩︎

  8. or time-travel fixing them ↩︎

  9. And thus it's vital to advocate for the language you want at the RFC, FCP, and similar pre-stabilization stages. ↩︎

  10. from my mortal point of view ↩︎

  11. or into obscurity ↩︎


I've said it before and I'll say it again: I do think that at a certain point, we've just got to become okay with, and learn to live with, some of the warts we have. Some warts may be fixable, but others aren't. And sometimes, even if we can fix a wart, that doesn't necessarily mean we should, based on the churn alone.


How's this for an estimate:

(Wild guesswork, really, but at least this gives us a framework we can talk about:)

Rust is going to last 40 years; after that, a new language will take over, a language which has learned the lessons of Rust and is much better in ways that Rust cannot fix in itself, much like how C++ cannot become Rust.

During those forty years, the annoying gotchas in Rust will cause 10 units of trouble per project per year. Meanwhile, the annoying gotchas in C++ will cause 1000 units of trouble per project per year.

Fixing all of the annoying gotchas in Rust via breaking changes would cost 50 units of trouble per project immediately, giving the fixes a theoretical ROI of 5 years. However, because the trouble is frontloaded, it would also discourage people from adopting Rust in the short term, slowing Rust's adoption and resulting in a 20% reduction in the total number of projects written in Rust rather than C++ over Rust's 40-year lifespan. That comes out to around 200 extra units of trouble per potential Rust project per year (20% of the roughly 1000-unit difference), much greater than what we save by fixing the gotchas.

On the other hand, if we find ways to fix them that minimize the immediate trouble – such as being able to apply editions to std – then the balance changes and the fixes would increase rather than reduce adoption.


Admittedly, I don't see the issue entirely from an economic perspective.

I would like to add that it wasn't long ago when I was criticized for wrongly using Deref instead of Borrow. Maybe this is a "molehill into mountain topic", but I would like to see errors at least being acknowledged or communicated such that learners don't trip over the flaws of std, even if they are never going to be fixed.

Maybe a "std errata document" could be helpful? Or maybe it already exists?


Don't forget that this new language will make mistakes of its own as well. I don't think there will ever be a successful language without some warts from its past. The alternative is like a deflationary currency: it's always better to wait to use it because it'll be better tomorrow. You have to decide to ship at some point.

FWIW, even with how stable Rust is, there are still folks decrying how often things break and you can't trust updates (granted, this is because of crates breaking their API, but I don't think anyone will ever get an ecosystem to have any long-term stability).


There must always be a tradeoff between publishing a breaking change to an API and keeping an old API for the sake of interoperability.

However, I see a structural problem here because of the combination of:

  • the goal of Rust's stability (which is important and legit),
  • (allegedly? yet?) missing concepts for migration,
  • the long lifecycle of a programming language, and
  • the complexity of Rust.

I believe that this structural problem should at least be recognized as such. Not to say that I have a solution; but it is not "irrelevant".

Another aspect that should not be ignored is that the success of the language does not only depend on the stability and usability of the ecosystem but also (to some degree) on

  • consistency,
  • ease of learning, and
  • elegance.

Introducing non-fixable flaws in std will affect these qualities as well. While an ecosystem can (arguably) be repaired, a promise to never fix errors will (by definition) lead to issues that last forever (or as long as Rust exists).

It may be short-sighted to only look at reusability of old code (which, depending on the code, may be a bad idea anyway), while there are other aspects that might have a long(er) term impact.

P.S.: For the record: I do not propose to fix #98905 by removing or changing trait implementations at this stage. [1] All I want(ed) to do is raising some questions and bringing to attention a structural problem.

  1. Though my bug report explains how the problem "should" be solved (ideally). But I'm well aware that std likely won't be fixed; or if it is fixed, that it will be a very difficult process instead of a simple PR. ↩︎

In the short term those reasons mostly don't apply, Rust is in a good place with regards to all of them, and there are not that many ingrained pains to deal with.

In the long term, those are entirely irrelevant. There will be new hot 'n tasty languages competing for the same space regardless of how well Rust does its job. People will get tired of the same old gnarly codebases and want to move to greener pastures (and old codebases are always full of warts, for the same reasons that Rust will accumulate issues, except most code doesn't have the quality control and engineering to mitigate those problems). The actual long-term use of the language will be decided by being entrenched in the industry. Old huge codebases won't move to the new language; it's simply economically infeasible. New projects will still have to interoperate with old projects, so they will be limited in their language choices, and engineers will still have to learn the old languages.

Elegance? In the large, elegance is irrelevant. Every feature will be misused, every footgun will be fired in creative ways. Elegance is nice to have as a quality-of-life feature for practitioners, and it helps to drive early adoption, but it is very, very far from being important to a language's success, as Go will tell you.

Ease of learning? People will learn it if it pays their bills. People learn C++, and Javascript, and Solidity, and Verilog, with variable success. Ease of learning is important to keep in mind during development, but it will certainly decrease long-term, and it's not the thing that will keep the language alive. It also won't matter how easy the language is to learn if it's considered a dead-end in the industry. COBOL and FORTRAN are both very simple languages, with a very thorny ecosystem surrounding them.


Does anyone acknowledge there is a problem?

With my library team hat on: I think it's critically important to avoid breaking changes.

Without commenting on the value of that particular change, in general I do think it's reasonable to migrate from an old interface to a new interface in an edition. But only in an edition; existing code for existing editions must continue to work.

That's leaving aside "if a tree falls in the forest and nobody is around" changes, where we use crater and similar to verify that we don't cause any actual breakage in the ecosystem, but even then we need to be cautious and not assume crater catches everything; it's just a useful canary. But these days, frankly, we are careful even making changes that we are allowed to make (e.g. new lints, or deprecations), if those changes would produce huge amounts of noise in the ecosystem.


Of course. I think everyone does? The issue is not that backcompat is a problem, the issue is that it's a trade off. That trade off is informed by many things, but one of them is undoubtedly a values thing. The Rust Project as a whole cares a lot about not breaking your code. It cares enough that it will give up some things in order to achieve that objective. But we do not care about it so much that we make literally zero breaking changes. It's a little dated at this point, but this is why the API Evolution RFC was written. It was written because almost any change you can think of can be cast as a breaking change in some situation or another. So that RFC was written to carve out which subset of breaking changes we empower ourselves to do.


This is a generally unsolved problem. There's also a tension the other way: a lot of people would like to have a larger std with more features, which means having even bigger API surface that may end up with mistakes or bitrot.

I hope that eventually std could become a versioned crate. Perhaps even broken down into smaller crates for fs, collections, etc.


And because of the stability guarantees, features cannot (and should not) be incorporated into std, given the current way of dealing with things (whether that's good or bad).

Not sure if that's the best approach, but it is at least an idea for solving the structural problem. Not sure how feasible it is once you look at the details.


I certainly believe there are times where this would be a permissible and correct decision, but I think it's very important to recognize that the edition mechanism does not give carte blanche to do a forced idiom migration. These can have real costs for users, even if the code can be mechanically upgraded.

There are several issues: Is it likely that the mechanically upgraded code needs manual rewriting to be considered readable and maintainable? Is the idiom shift large enough that users of the deprecated idiom will have to reconsider how they currently write their code? Is the benefit people get from switching away from the deprecated idiom tangible, or is the preference chiefly aesthetic or normative?

These and probably other considerations influence whether or not users will find it to their benefit to switch away from an idiom they're currently using, and you can safely assume that there are users currently using every idiom you might want to get rid of. If they don't find it to their benefit, they will chafe at being forced to switch in order to stay on the most recent edition.

If upgrading editions is too costly for users for too little benefit, they won't do it. This will create Rust dialects with some actively maintained codebases sticking to old editions of Rust out of preference. This will also damage the trust between users and maintainers, as users in this situation may feel the project maintainers are not acting in their best interest.

Again, this isn't to say that it can't be correct to eliminate a disfavored API or syntax at an edition boundary, but that it is not zero cost. Code on old editions not changing and mechanical upgrades are necessary but not sufficient requirements.


Regarding the example case I brought up, users could, for example, decide which revision of Cow (or which version of std::borrow) they want to use. But this requires using generics where we currently simply write Cow<'_, str>, Vec<i32> or the like.

I'm not sure if the current syntax (and visibility rules) are really suitable for such a (perhaps?) structurally superior approach. Code readability might suffer too (e.g. "what did .as_ref() do here again? :face_with_monocle:").

Another issue may be compile time and binary size (monomorphization/polymorphization).