Regression report stable-2015-05-15 vs. nightly-2015-05-28


There is only one unique regression here, related to the #[packed] attribute. Two of the failures are bogus network errors.
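For context, here is a minimal sketch of the kind of code such an attribute touches (the regressing crate isn't named in the report, and this struct is hypothetical; #[repr(packed)] is the stable spelling of the attribute):

```rust
use std::mem;

// With #[repr(packed)] the compiler drops inter-field padding,
// so this struct occupies 1 + 4 = 5 bytes instead of the
// naturally aligned 8.
#[repr(packed)]
struct Header {
    tag: u8,
    len: u32,
}

fn main() {
    assert_eq!(mem::size_of::<Header>(), 5);
    println!("packed size: {}", mem::size_of::<Header>());
}
```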

I was quite surprised that there aren’t more regressions yet. Perhaps they were covered up by network failures, but there were a lot of successes.


Just curious: how is rank computed now? It’s not # of downloads?


I believe it’s the number of reverse dependencies.


@bluss It’s the # of transitive reverse dependencies currently.


I noticed that the cpal crate is in the “Broken” category, and the error is suspicious:

Could not find Cargo.toml in /home/crate/alsa-sys

alsa-sys is a path dependency of cpal (see its Cargo.toml). alsa-sys alone is in the “Working” category.
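A sketch of the likely shape of the problem (hypothetical manifest; the real Cargo.toml may differ): a path dependency resolves against the local checkout, so the path in the published manifest does not exist on the test machine unless the dependency also carries a registry version to fall back on.

```toml
# cpal's Cargo.toml (sketch): the path points into the cpal
# repository, e.g. a subdirectory of the crate root. It works in a
# local checkout but not where only the packaged crate is present.
[dependencies.alsa-sys]
path = "alsa-sys"
version = "0.1"   # hypothetical; the registry fallback for consumers
```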


I have lots of regressions, just not in published crates. At the moment, none of the code I have based on timely dataflow (including some data-parallel join code, and differential dataflow) builds.

For some reason, the

Global is external, but doesn't have external or weak linkage!
i32 (%closure.152*, { i32, i32 }*)* @_ZN12reachability12closure.7093E
invalid linkage type for function declaration

error is cropping up again with nested closures. This was an issue a while back, with an excellently short reproduction which still causes LLVM to abort:
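The reproduction referred to (and quoted later in this thread as (||||42)()()) has this shape: an outer closure that returns an inner closure, invoked twice in a row.

```rust
fn main() {
    // A nested closure: the outer || returns the inner || 42.
    // Calling the result twice yields 42. This is the shape of
    // the (||||42)()() reproduction discussed in the thread.
    let v = (|| || 42)()();
    assert_eq!(v, 42);
    println!("{}", v);
}
```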

Without sounding like an ingrate, it would be neat if some of these things could be fixed up, or put in the test suite, or whatever needs to be done to make some forward progress. I’d help out if I thought I could (I have been sending regular ICE reports).

I had been doing Rust evangelism instead, but it is hard to do when the code I put up in blog posts doesn’t run any more (and I can’t fix it). It’s currently just embarrassing to show people. I’m about to fly to Cambridge, where I was going to show the folks there how to do data-parallel compute in Rust, but I worry we’re just going to go for beers for two weeks instead.

It feels a little silly to put the code up on crates.io just to get the attention, but I think having some sort of broader test coverage would be helpful for the core team. This stuff used to work, and I’d love to be able to fix it, but it is rotting for now.

/vent off


Not sure it’s entirely silly to put code on crates.io. I’m a pragmatist, and I’ve thought about writing some kind of post which says “Being on crates.io is how you vote”.

It encourages two things: A) publishing, and B) being stable-compatible. In return you get a “vote” in matters of what Rust code looks like and what it needs in order to work.

This is the pragmatic view; the strict view would say there should be no regressions at all. But we do need help to find them in the first place. This particular bug you linked is not a regression from Rust 1.0.

Pre-RFC: Adjust default object bounds

This may have been the wrong topic to post in; you are totally right that nothing I experienced is a stable-nightly regression. It did, however, seem apropos of brson’s “I was quite surprised that there aren’t more regressions yet”.

It was instead a nightly-nightly regression; it worked for several months, and now doesn’t. I’d love to get all the code to a point where it actually works on stable (I’ve been waiting on drain to land), at which point up on crates.io it goes. I don’t know what changed in nightly, but if the current nightly goes out as the next beta, it won’t work in the next stable either, and so still won’t go up on crates.io.

It’s just a bit frustrating, is all. I guess the lesson learned is something like “if you need to use nightly, at least push test cases that work on stable to some crate on crates.io”? That was what I meant about being silly; the only thing that needs to see them is brson’s tool.


I do have plans to expand testing to code that isn’t on crates.io. I’d like for authors to be able to register GitHub repositories and manage their own regression-testing configuration, but that’s a ways off. Also interested in a program to test proprietary code.


Nested closures not working is a sad bug, because it’s syntactically correct Rust that won’t compile, but regressions from stable have a special priority because the Rust team tries to make a guarantee that they will not happen.

There’s not really such a thing as a nightly-nightly regression, because there’s no such commitment for code that doesn’t compile on stable. A valid solution to this bug would be to decide that nested closures aren’t syntactically correct; on merit, this solution would probably be a rather bad one. But there’s not a guarantee that (||||42)()() will ever compile the way there’s a guarantee about code that compiles under 1.0.0.


I understand why stable-nightly regressions get special notice, because they aren’t supposed to happen. At the same time, it is a bit of a dodge to say “a thing that should work in stable, because all of its components are marked stable, need not have regressions tracked unless it was confirmed working in stable”.

From my point of view, which is not especially privileged, there is a thing that is supposed to work in stable, and indeed has been working now for a while in nightly (my thinking: “yay, it has been fixed”). Then it stopped working and it only now occurs to me that whether it works or not may not actually be on anyone’s radar.

Since you are right that whether a thing ever works in nightly or not isn’t particularly germane, maybe what would be better (edit: “than not”, vs. “than stable-nightly regression tests”) is a set of tests of “things that should work in 1.0 stable, but don’t”. I don’t think the 2k issues on GitHub are doing this.

I totally get that the regression report is just one signal, and one that should get some priority (regressions are embarrassing, and give explicit information about what broke where). At the same time, it seems that other things are still broken, and I wanted to provide that input. I’ve since cut my code down to the “subset of 1.0 stable that actually works”, as otherwise (as you say) no promises.


Nobody said that.

But this isn’t counted as a “regression”, it’s just an ICE or something. We track those. We care about those. We fix those.

But we don’t give them the same importance as regressions between two stable builds, because that breaks a guarantee that was provided, and in some cases convoluted workarounds may be needed to make things work again.


Like Manishearth said, it’s not at all that ICEs don’t matter; it’s that regressions from stable have a particular importance because preventing them is the minimum baseline. Regressions between stable releases do a special kind of damage: they fragment the community between different versions of Rust and disrupt people’s ability to trust the guarantees that have been made about the Rust language as a tool. It’s about avoiding problems that have plagued other projects, like Python 2/3 and Perl 5/6.