Pre-RFC: Adjust default object bounds

I reluctantly feel like we can pull off this change as long as we’re proactive about communicating what’s happening.

If we do this we should tell people how to write code that works with both revisions, and ideally have the new compilers also tell people when they are writing incompatible code.
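To illustrate the kind of advice we could give, here is a minimal sketch (hypothetical trait and names, and using the modern `dyn` syntax, which postdates this discussion) of the style that is compatible with both revisions: spell the object bound out instead of relying on the default.

```rust
// Hypothetical trait purely for illustration.
trait Draw {
    fn draw(&self) -> String;
}

struct Circle;

impl Draw for Circle {
    fn draw(&self) -> String {
        "circle".to_string()
    }
}

// Ambiguous across revisions when the box sits behind a reference:
//     fn render(shape: &Box<Draw>) -> String { ... }
// Old default: Draw + 'a (the reference's lifetime); proposed: Draw + 'static.

// With the bound written explicitly, the signature means the same thing
// under both rule sets.
fn render(shape: &Box<dyn Draw + 'static>) -> String {
    shape.draw()
}

fn main() {
    let shape: Box<dyn Draw + 'static> = Box::new(Circle);
    println!("{}", render(&shape));
}
```

A compiler lint could then point at elided object bounds whose meaning is about to change and suggest exactly this kind of annotation.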

I’d also like to know what other potential breakage @nikomatsakis anticipates on the horizon so we might have some confidence that we won’t be doing another one soon.


I don't have any plans for further "optional" breaking changes like this one. I have been working on possible approaches to correct some soundness issues (e.g., #25860, #24622) which will be breaking changes. These are approaching "proposal" state but I'm not yet ready to dive into the details.

I like this change in isolation, and I think the version-based opt-in originally described in the language RFC would be the perfect way to handle it, with the added bonus of solving the deprecation-warning problem (only warn if it was deprecated in or before the specified version).

We could probably do it without that, assuming sufficient communication, but I think it would be less ideal.

EDIT: Also, is there enough time for this to be backported to 1.1? I’d be more comfortable with that than waiting until 1.2.


Here’s the crater results from @nikomatsakis’s rebased branch:

Suspiciously, there are no reported regressions. Note though that several of the 8 root regressions reported in the previous run were false positives.

If there’s no regressions then I’d say ship it.

I feel I need to reiterate that crater may not be representative of all Rust code in the wild.

However, in this specific case, I think that any resulting breakage won’t be too bad if the error message is helpful and you folks communicate the change ‘loud’ enough.

I’m willing to consider this, but only because it satisfies our established criteria for acceptable breaking minor changes: “it must always be possible to locally fix the problem through some kind of disambiguation that could have been done in advance”.

@brson, can you elaborate on how false positives are possible with Crater? Are false negatives just as possible?

Would this change be backported to 1.1? If we have to wait an additional six weeks to get this into 1.2 then more things could very well break before then.


The more I think about this, the more I think we need to be exceedingly careful here. The manner in which we handle this will affect the perception of the language for a very long time. We need to have something more thoughtful than just pushing out an update where code can potentially be broken, even if we can’t find any broken code in the wild.

For the people in here talking about “opt-in” attributes for behavior, I’m curious what you’re specifically referring to. For me, given the low risk of breakage from this change, I’m thinking that we’d have more of an opt-out for this particular change, where adding the attribute will get you the old behavior back. It would be the height of silliness to force every Rust crate for the rest of eternity to include a boilerplate attribute up top just for this.

@bstrie If I’m not mistaken, “opt-in” is referring to using a target version attribute to specify which version the code is targeting (the discussion has some overlap to RFC PRs #1122 and #1147).

As I’ve stated before, I’d prefer us to handle this carefully: by effectively “deprecating” the old rule, adding a warning to advise implementors how to make their intent explicit, and then in version 1.3 (or even later) adding the new elision rule. If we have a target version attribute, we could even add the new elision rule in advance (like Python’s from __future__ import ...).

Let me try to lay out my thinking here for what the policy should be.

To start with, I still think we should make the change proposed in this RFC. I think this change falls into a category that I have been calling (in my head) course corrections. That is, this change is not really fixing a bug in the code: the code follows the RFC. It is not a soundness concern. It’s just that the RFC’s design seems suboptimal in retrospect, basically, and it would be great if we could fix it. In particular, it’d be great if we could fix it before there is a lot of code “in the wild” that depends on the current setup.
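To make the course correction concrete, here is my reading of the adjusted rule, sketched with a hypothetical trait (the `dyn` syntax is modern, but the semantics are what matter):

```rust
// Hypothetical trait to illustrate the adjusted default object bounds.
trait Event {
    fn name(&self) -> &'static str;
}

struct Tick;

impl Event for Tick {
    fn name(&self) -> &'static str {
        "tick"
    }
}

// A bare box defaults to `Box<dyn Event + 'static>` under both rule sets.
fn take(e: Box<dyn Event>) -> &'static str {
    e.name()
}

// Behind a reference, the old default was `dyn Event + 'a` (borrowed from
// the reference's lifetime); the adjusted rule makes it `dyn Event + 'static`
// here too, so the two signatures agree on what the box may contain.
fn peek<'a>(e: &'a Box<dyn Event>) -> &'static str {
    e.name()
}

fn main() {
    let e: Box<dyn Event> = Box::new(Tick);
    println!("{} {}", peek(&e), take(e));
}
```

The point is that `Box<Trait>` means the same thing wherever it appears, instead of changing meaning when it happens to sit behind a reference.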

Initially, I thought it would be best to address these kinds of course corrections using a version-based opt-in (as RFC #1122 specified). I’ve changed my mind, and I’ll explain why later. I now think it would be best to say that course corrections are permitted in a minor release, but only if the effect is judged to be negligible. I am sure we will evolve better ways to estimate the impact of a change over time, but for the moment I would propose the following criteria:

  1. The change must have minimal impact on existing code, as measured by a crater run.
  2. Because crater doesn’t represent the entirety of Rust code, the feature must be newly stabilized in the previous release, so there is not a lot of (stable) code out there depending on the current behavior.
  3. There must be no other way to fix the problem.

I think that this proposal meets these criteria. In an ideal world, it would be the only case that ever meets those criteria, because in the future I hope that we refine the process such that we catch problems like this during nightly or alpha builds. But I wouldn’t rule out that a comparable scenario will arise in the future (and our decision in that case will clearly be informed by the eventual fate of this RFC).

Why not opt-in?

Some of you are probably wondering why I think it is better to make a breaking change here. After all, I initially advocated using a fine-grained opt-in mechanism. In that case, we can change the language, but existing code is unaffected. Huzzah, everybody wins! (Right?) But I am now growing suspicious that this “free lunch” is in fact something of a mirage. Even if no existing code stops compiling, backwards incompatible changes still carry a cost, and that cost grows over time as the amount of Rust code grows:

  • When we authorize an opt-in change, even if all existing code continues to compile, it still causes bitrot in tutorials, stack overflow answers, etc. People’s memory of how the language works will also have become inaccurate, which can be confusing if you’re not following Rust developments closely.
  • It also means that people’s notion of how “Rust 1.x” behaves will be fragmented: 1.22 might be different from 1.21 in relatively minor ways, which seems confusing (it seems natural that 1.22 will have new APIs and new features, but small changes to otherwise stable designs seems less intuitive).
  • We are going to have a lot of releases. I predict that maintaining this version number on crates will be a constant annoyance. Quick: Do you want to tag your crate with Rust 1.16 or 1.17? If you choose wrong, somebody will probably complain to you because they are using Rust 1.16 and it works just fine, so you should lower your version number.
    • The caveat here is that there are good reasons to want to avoid newer APIs and maintain compatibility. I think we can handle a lot of that sort of thing via cargo and automated analysis.

Put another way, I am concerned that having the ability to “opt in” to course corrections is a problem because it encourages us to make those corrections on a regular basis, rather than just in the most dire of cases. Those corrections will seem painless at first, but in fact the solution will have hidden costs that come up later. The cure is worse than the disease.

By making course corrections the exception, and not the rule, I think we will be better off overall. Moreover, it will minimize tutorial rot and other things, because any feature that is heavily or widely used will not be eligible for being changed (whereas with opt-in, we can make changes in a much wider range of circumstances).

Major versions

Now, of course, this proposal has to fit into a larger strategy on versioning. In particular, we’ve not really addressed how to make backwards incompatible extensions. Nor have we addressed when it makes sense to issue a new “major version” of Rust (e.g., Rust 2.0), and what kinds of changes we might do as part of Rust 2.0. Here too, I’ve been evolving my thinking. Initially, I hoped to avoid a new major version as long as possible, so as to ensure that code kept compiling for as long as possible. But I now think that this is leaving an important tool in the toolbox.

Major version numbers should be our way to signal “chapters” in Rust’s history. These chapters might include backwards incompatible changes, but they do not have to, and in fact ideally they would not (because we want code to continue compiling whenever possible). Rather, Rust 2.0 can serve as a signal that a lot of great stuff has happened since Rust 1.0, so if you haven’t been paying attention, it’s time to take a look. Put another way, even if code from the early Rust 1.x days still compiles, it should feel dated when Rust 2.0 is released, because it is not taking advantage of new features and new idioms.

I think, in practice, releasing a major version of Rust will feel analogous to the infrequent releases of other programming languages, such as the change from C++11 to C++14 or Java 1.4 to Java 1.5. The main difference from how other languages do things stems from the train model. This means that we can introduce those new changes more gradually, in the 1.x line, and only declare 2.0 when the full set is available. If some of the 2.0 features are backwards incompatible, this implies some sort of feature-based opt-in during the 1.x line (analogous to Python’s from __future__ import).

The advantages I see in this model are that it:

  1. promotes regular major version releases, which give us a chance to highlight the exciting work that we’ve done (we can’t really expect Rust users in the future to be closely following the new features in every minor release, much less prospective Rust users);
  2. gives a simple mental model for versions of the language, where "Rust 2.x" code indicates major shifts;
  3. doesn’t require people to tag their code with fine-grained version numbers like 1.17, which I foresee being a constant annoyance. It might be necessary (or useful) to tag with “Rust 2.x”, but that seems much easier to understand.

Let’s also remove deprecated things from major releases as a bonus.


@brson are the binaries from that crater run available?

I decided to advance this to a full RFC:

These are the new binaries:

before the patch (ef72938a8b9171abc5c4b463d3e8345dc0e603a6):

after the patch (8e70eb1bfc83cdc776ab0ac1cc67f75fbbe14f62):

Warning: Anecdotal evidence.

From IRC yesterday, touching upon this issue.

In short: a user had code that would have compiled under the new rules, but as things stand the user was stuck due to inconsistent lifetimes when using Box<T> in different contexts.

IRC log:

Old code:
Fixed to:
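The snippets themselves did not survive the quote, so here is a hypothetical reconstruction of that kind of mismatch, not the actual IRC code. Under the old rules a bare `Box<Trait>` meant `Box<Trait + 'static>` while `&'a Box<Trait>` meant `&'a Box<Trait + 'a>`, so the same field could carry different bounds in different contexts; writing the bound explicitly makes every context agree and compiles under either rule set (modern `dyn` syntax used here):

```rust
// Hypothetical names throughout; not the code from the IRC log.
trait Handler {
    fn handle(&self) -> i32;
}

struct Echo;

impl Handler for Echo {
    fn handle(&self) -> i32 {
        42
    }
}

// The explicit `+ 'static` keeps the object bound consistent whether the
// box is used bare or accessed through a reference.
struct Registry {
    handler: Box<dyn Handler + 'static>,
}

fn run(registry: &Registry) -> i32 {
    // `registry.handler` is `Box<dyn Handler + 'static>` in every context,
    // because we spelled the bound out rather than relying on the default.
    registry.handler.handle()
}

fn main() {
    let registry = Registry { handler: Box::new(Echo) };
    println!("{}", run(&registry));
}
```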

Regarding your ‘Why not opt-in?’ argument, I’ll try to respond point by point:

  • If we want to avoid bit-rot in tutorials, Stack Overflow answers and people’s memories, we only have two options: either edit the tutorials and Stack Overflow answers, and write articles to jog people’s memories, or avoid changing the language at all (which isn’t a useful option in my book). I severely doubt that breaking how something works, even in trivial ways, will reduce the confusion as opposed to making the change a conscious decision. Just calling opt-in “something of a mirage” doesn’t make for a compelling argument.

  • As for fragmentation, let’s have a look at how other languages handled it. Java, currently at the top of many indexes is certainly evolving – they have just added closures and are in the process of modularizing their stdlib; I believe Value Objects are on the horizon for 2017ish. Yes, they are moving slowly and mindfully, and they keep full backwards compatibility. They enjoy their lunch, even if it isn’t free at all – after all, someone has to maintain all those APIs, even if they are deprecated.

    Now let’s have a look at Python, which before 3.0 also committed to minimal breakage. This is an interesting case, because we see both strategies (future opt-in via from __future__ import ... vs. breaking just about everything with a new release). Python 2 is still going strong in many places, while Python 3 is only recently gaining traction. I think it is safe to say that the Python world is seriously fragmented currently, and it is only due to the strength of their community that they can keep moving forward over the chasm they created.

  • Yes, some people will prefer to run older versions (especially in large enterprises, where every update must be vetted). This is also a problem for Java, where Android is still stuck on Java 6 while the rest of the world already enjoys Java 8 (and I’m currently testing an early access build of Java 9).

    We can somewhat alleviate that by having rustc try to build future (in relation to its own version) code on a best-effort basis, emitting a warning. Also, cargo could “hold back” versions (which may work if the library follows rustc development, so older releases will probably target an older rustc). Finally, we may want to give people a story on how long we are going to support older versions, to give them an incentive to update their code. Otherwise, I suspect library authors will probably be the driving force here.

So short story short, I’m not convinced that the lunch is a mirage, even if I agree that it’s not exactly free. But please let’s not throw out the baby with the bathwater here, OK?

Please note that I’m pretty relaxed about this particular change. I just want it to be motivated better. Rust is now in the open. We have a public image to maintain.


This makes me scared. If we allow ourselves to make breaking changes primarily on the grounds that the “RFC design seems suboptimal in retrospect”, then that is also a signal that the language is still unstable, and that will in turn reduce the amount of code people write in it. As such, even if the change would improve the language, the net result is still detrimental.

That said, the Rust promise is, IIRC, "upgrade with a minimum of hassle", rather than "won't ever break". I'm not sure whether this change is a minimum of hassle or not, but "the error messages produced as a result of this change are not especially clear" seems to point in the direction that it isn't.

Hence, my vote goes for improving the compiler error for the current situation, so it becomes easier to understand that you need to add +'static in one place but not the other, rather than doing a breaking change.
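For illustration (a hypothetical trait, and modern `dyn` syntax), this is the kind of asymmetry such an error message would have to explain: behind a plain reference the object bound defaults to the reference’s lifetime, so `+ 'static` must be written out there, while in a bare box it is already the default.

```rust
// Hypothetical trait purely for illustration.
trait Job {
    fn id(&self) -> u32;
}

struct Build;

impl Job for Build {
    fn id(&self) -> u32 {
        7
    }
}

// Needs the annotation: a bare `&dyn Job` would default its object
// bound to the reference's lifetime, not 'static.
fn stash(job: &(dyn Job + 'static)) -> u32 {
    job.id()
}

// Doesn't need it: `Box<dyn Job>` already means `Box<dyn Job + 'static>`.
fn own(job: Box<dyn Job>) -> u32 {
    job.id()
}

fn main() {
    let build = Build;
    println!("{} {}", stash(&build), own(Box::new(Build)));
}
```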


As nikomatsakis is apparently busy making the compiler faster :smile:, I’m going to step in here and flesh out the motivation for the breaking change:

Please let us all remember that the goal of stability is to keep breakage minimal. Breakage in this context is defined as some build (or worse, program) failing that would otherwise have passed. Breakage as a risk has basically two dimensions: 1. cardinality, as in how many builds break, and 2. severity, as in how much work will have to be expended to find and solve the problem. (That’s why a failing program is worse: it is usually harder to find the problem.)

@diwic Please don’t let your fear of breakage cloud your judgement. There probably will be some breakage sooner or later. Even a language like Java (which leans quite heavily on the conservative side) has to live with it. Asking the team to stop changing the language out of loss aversion is just bad policy.

Thus, our question becomes: is the breakage that would result from keeping the current setup plus an improved compiler error smaller than the breakage resulting from changing the language, along with a suitable compiler error and documentation (perhaps referring to the old setup and how it was changed)?

Based on the analysis from @brson and @DanielKeep, we can deduce that the cardinality of breakage for this change is probably low. I think the severity for those few who rely on the old rule can be minimized by suitable compiler diagnostics: if I understand it correctly, relying on the old rule would result in a compiler error under the new rule. Thus the cost of finding the error is negligible. If the error message is as useful as I’ve come to expect from rustc, I believe the cost of fixing it will be reasonably low.

On the other hand, there have been recent reports of people tripping over the current setup, which is, as nikomatsakis points out, unintuitive. Thus there already is some breakage, which will only increase in cardinality as more and more people discover the feature. Even if we improved the compiler diagnostics, this setup would at best reduce the severity of the resulting breakage, not the cardinality.

Thus, changing the rule and adding sane diagnostics (plus a googleable blog post / documentation of the change) will reduce breakage even more than just adding better diagnostics. Therefore, my vote goes to implementing the change. :+1:


Well, what we don’t know is how many will trip in the other direction if the RFC is implemented. I.e., for every lifetime that defaults to 'a we have some people tripping because they wanted 'static, and for every lifetime that defaults to 'static, we have some people tripping because they wanted 'a.

I’m hoping that in the future, we’ll be able to infer more of the 'a lifetimes, so that you don’t have to write them at all. Therefore I prefer defaulting to 'a rather than 'static, as a rule of thumb.

Also, I think breakage should be defined as is in “the code compiled on Rust 1.0 but no longer does on 1.2”, rather than “people need to google to find out how to do it”. The latter is not breakage, it’s just an inconvenience.