Similar reports compiling hyper: no perceivable difference. Same number of seconds.
So a status update. Before leaving for vacation, I did manage to get an EC2 instance up and running and to collect data running tests across all of crates.io – but I haven’t managed to collate that data yet. In case anyone wants to experiment, my test runner is adapted from @jntrnr’s repo, which in turn is a bastardized version of cargo-apply (by @alexcrichton), I think:
It’s not well documented yet but you can do things like run tests/benches across selected packages. Unfortunately the benchmark results aren’t really usable yet as the timings are not accurately reported.
To clarify, I do plan to put in some time to collate those results over the next week or two – but I haven’t quite scheduled it in yet.
I have a silly question: How do you enable MIR for just one project?
I wrote a .cargo/config file with the following content:

```toml
[build]
rustflags = ["-Zorbit"]
```
Is there a more direct flag or option in Cargo?
You could put the .cargo/config in your project’s directory:

```
project/
    Cargo.toml
    .cargo/
        config
```
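That layout can be set up with a couple of shell commands; a minimal sketch, where `project` is just a placeholder for your actual project directory:

```shell
# Create a per-project .cargo/config that enables the MIR trans flag (-Zorbit).
# "project" is a placeholder for the real project directory.
mkdir -p project/.cargo
cat > project/.cargo/config <<'EOF'
[build]
rustflags = ["-Zorbit"]
EOF
cat project/.cargo/config
```

Cargo reads .cargo/config from the current directory and its ancestors, so the flag applies only to builds run inside that project tree.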
I did do that; I was not sure whether it was “the way to go” or whether there was an option directly in Cargo.
As I keep thinking about it: since the long-term plan is to switch to MIR completely, it may not be such a great idea to introduce a new option to Cargo.
Just a note that I added this milestone to track blocking issues:
Over the weekend I tested rustc 1.12.0-nightly (7ad125c4e 2016-07-11) for target x86_64-pc-windows-msvc against all of crates.io using cargo-apply, without and with -Z orbit. The results were interesting, but good! The only case I can’t explain is the following:
- The failure cases were different! Pre-orbit, many of the failures were in building the openssl crate; post-orbit they failed building openssl-sys-extras. Why would this be?
Before/after diffs of success cases. Before/after diffs of failure cases. These are hard to read because the tool isn’t very sophisticated. The success diff is much easier to read than the failure diff because of all the openssl differences.
In the data were several weird false positives:
- codify, encoding_rs, quine, static_assert_macro - false positives from network timeouts
- serde/codegen/macros - weird ‘unable to update files’ errors before, ok post-orbit
- oak_runtime - failed to compile before, compiles with orbit! This was a result of the before run choosing the broken 0.3.7 and the after run choosing 0.4.0.
- aho-corasick, memchr, regex-syntax, utf8-ranges - cargo panics post-orbit! Need to investigate. cargo-apply runs cargo as a library, so there may be some unexpected behavior due to that, but this is quite suspicious. And indeed, a cursory run of RUSTFLAGS=-Zorbit cargo test --release on the affected utf8-ranges and regex-syntax repos seems to work just fine. cc @alexcrichton weird panics in cargo-apply.
With all the openssl failures, we would get significantly more coverage if openssl were installed.
Cargo compiles crates in a nondeterministic order and only officially prints the first thing that fails to compile. I believe the build scripts of openssl and openssl-sys-extras may be getting compiled and/or run in parallel, meaning either one could fail first, causing the different failures there.
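As a toy illustration of that race (this is not Cargo’s actual scheduler, just a sketch of the effect): when two failures happen concurrently and only the first is reported, scheduling decides which one you see.

```shell
# Toy model: two "build scripts" fail concurrently; only the first error
# to arrive gets reported, and OS scheduling decides which one that is.
( echo 'error: failed to build openssl' &
  echo 'error: failed to build openssl-sys-extras' &
  wait ) | head -n 1
```

Run it a few times and either crate’s error can come out first, which would explain the openssl vs openssl-sys-extras flip between the two runs.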
Unusual! Locally, though, it looks like cargo test also works for me on