Enabling MIR by default


Eventually yes, but there’s no reason to take this step in a rush, when we can first work to mine data across a large corpus of Rust code like crates.io. Passing that bar first reduces potential pain/churn in the nightly ecosystem, and has already been serving well to spotlight issues with the transition.


Fair. Performance-wise, I assume Crater mainly gives information about compile-time performance, and perhaps some run-time performance information from benchmarks and tests?


I will try to get something running for correctness here. I agree it’d be good to cover Windows better.


crater gives no information about timing of any kind.


I’ve been wondering how best to gather timing numbers. I’ve adapted a script from @jntrnr that can run and scrape timing information across all of crates.io, but I feel a bit nervous about building arbitrary code from crates.io on my own machine – build scripts can execute anything, so it seems like a real security risk.

I can use an EC2 instance, but I’m not sure whether the numbers that would result are reliable in any way. Seems unlikely. The same might apply to a VM – though I’d expect the figures from a VM to be relatively ok, just a higher margin of error.

Perhaps a new user on my laptop would be a good compromise.
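For concreteness, here is the kind of containment I have in mind – a rough sketch assuming Docker is available; the `sandbox_build` name and the image tag are made up:

```shell
# Rough sketch (assumed names): build an untrusted crate inside a throwaway
# Docker container with networking disabled, so a malicious build script
# can't reach the network or touch the host outside the mounted directory.
sandbox_build() {
  crate_dir=$1
  docker run --rm --network=none \
    -v "$crate_dir":/crate -w /crate \
    rust:latest cargo build
}
```

Usage would be something like `sandbox_build "$HOME/untrusted/some-crate"`; dependencies would need to be fetched or vendored before cutting off the network.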



You could run the tool while booted from a live usb environment?


This seems like a good bet, and will work for at least Linux; you would want to do it while not connected to the Mozilla network. For Macs I can sacrifice one of the decommissioned bots, but we’d want to be careful about its network connection. Alternately, we may be able to temporarily re-image a MacStadium machine (which is off the Mozilla network), or commission a new one of those. Not sure if Windows has live images for this.


I prefer to use a VM manager. You could install Qubes (qubes-os.org) for Windows/Linux – a dedicated Xen VM provides fairly reliable timings.


Yeah, of course. A VM is secure enough.


Not the most scientific benchmark, but Diesel’s test suite seems to be compiling at around the same speed with MIR: 77.01s baseline, 76.81s with `-Z orbit`, averaged over 3 runs; the difference between fastest and slowest was ~1s in both cases. Anyway, enough to say there appear to be no regressions there. (Full command was `cargo clean && time cargo rustc --no-default-features --features="sqlite unstable" -- -Z orbit`.) Nice work!
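In case anyone wants to reproduce this sort of averaged timing, a small helper along these lines does the trick (a sketch – `repeat_time` is a made-up name, and it only has whole-second resolution, which is fine for multi-minute builds):

```shell
# repeat_time: run a command N times, printing wall-clock seconds per run,
# e.g. repeat_time 3 sh -c 'cargo clean && cargo rustc -- -Z orbit'
repeat_time() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo "run $i: $((end - start)) s"
    i=$((i + 1))
  done
}
repeat_time 2 true   # prints one "run N: ... s" line per run
```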


Compiling my personal project. Without MIR, 9m2.537s. With MIR, 7m52.177s. :thumbsup:


A similar report compiling hyper: no perceivable difference – the same number of seconds.


So, a status update. Before leaving for vacation, I did manage to get an EC2 instance up and running and to collect data running tests across all of crates.io – but I haven’t managed to collate that data yet. In case anyone wants to experiment, my test runner is adapted from @jntrnr’s repo, which in turn is a bastardized version of cargo-apply (by @alexcrichton), I think:

It’s not well documented yet but you can do things like run tests/benches across selected packages. Unfortunately the benchmark results aren’t really usable yet as the timings are not accurately reported. :frowning:


To clarify, I do plan to put in some time to collate those results over next week or two :slight_smile: – but I haven’t quite scheduled it in yet.


I have a silly question: How do you enable MIR for just one project?

I wrote a `.cargo/config` file with the following content (the `rustflags` key needs to live under the `[build]` section):

```toml
[build]
rustflags = ["-Zorbit"]
```

Is there a more direct flag or option in the Cargo.toml?



You could put the .cargo/config in your project’s directory.
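Or, for a one-off build, the flag can go through the environment instead of a config file (sketch; `RUSTFLAGS` is the same mechanism cargo uses to forward flags to rustc):

```shell
# RUSTFLAGS is passed by cargo to every rustc invocation, so exporting it
# has the same effect as the rustflags entry in .cargo/config:
RUSTFLAGS="-Zorbit"
export RUSTFLAGS
```

i.e. `RUSTFLAGS=-Zorbit cargo build` from the shell.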



I did do that; I was not sure if it was “the way to go” or if there was an option directly in Cargo.

As I keep thinking about it: since the long-term plan is to switch to MIR completely, it may not be such a great idea to introduce a new option to Cargo.


Just a note that I added this milestone to track blocking issues:


Over the weekend I tested rustc 1.12.0-nightly (7ad125c4e 2016-07-11) for target x86_64-pc-windows-msvc against all of crates.io using cargo-apply, without and with -Z orbit. The results were interesting, but good! The only case I can’t explain is the following:

  • The failure cases were different! Pre-orbit, many of the failures were building the openssl crate; post-orbit they failed building openssl-sys-extras. Why would this be?

Before/after diffs of success cases. Before/after diffs of failure cases. These are hard to read because the tool isn’t very sophisticated. The success diff is much easier to read than the failure diff because of all the openssl differences.

In the data were several weird false positives:

  • codify, encoding_rs, quine, static_assert_macro - false positive network timeout
  • serde/codegen/macros - weird ‘unable to update files’ errors before, ok post-orbit
  • oak_runtime - failed to compile before, compiles with orbit! This was a result of the before run choosing the broken 0.3.7 and after 0.4.0.
  • aho-corasick, memchr, regex-syntax, utf8-ranges - cargo panics post-orbit! Need to investigate. cargo-apply runs cargo as a library so there may be some unexpected behavior due to that, but this is quite suspicious. And indeed, a cursory run of RUSTFLAGS=-Zorbit cargo test --release on the affected utf8-ranges and regex-syntax repos seems to work just fine. cc @alexcrichton weird panics in cargo-apply.

With all the openssl failures we would get significantly more coverage if openssl was installed.


Cargo is nondeterministic in the order in which it compiles crates, and it only prints the first thing that fails to compile. I believe the build scripts of openssl and openssl-sys-extras may be getting compiled and/or run in parallel, meaning either one could fail first, causing the different failures there.
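If it would help to pin down which failure comes first, one option is to cap Cargo’s parallelism so crates compile one at a time (a sketch; `serial_build` is just an illustrative wrapper around the real `-j`/`--jobs` flag):

```shell
# With -j 1 cargo compiles a single crate at a time, so the first failure
# reported should be stable from run to run. (Wrapper name is made up.)
serial_build() {
  cargo build -j 1 "$@"
}
```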

Unusual! Locally though it looks like cargo test also works for me on -Zorbit