Incremental Compilation Beta


The current approach does these things non-incrementally, so the effort is O(n), where n is the size of the dependency graph / code base. We don’t have any immediate plans to change this, but it’s still looking like it’s taking way too much time! We’ll have to profile to find out more and hopefully reduce it.


n is particularly large in my benchmarks above. The output of --pretty expanded for Servo’s script crate is 657k lines, 47 MB. -Zincremental-info counts 1252 modules. (A lot of this code is generated.)


That’s an interesting data point. The lower bound for the effort an incremental computation solution has to spend is O(size of change). Maybe we can move more towards that.


Of course something has to be O(size of everything) at least to find out what changed, but reducing the constant factor of that part would be nice, yes.


It has come to my attention that official Nightly builds enable LLVM assertions, which significantly affect compiler performance, while the default configuration when building rustc from source leaves them disabled.

This means that my previous benchmarks compare timings with more than one parameter varying (different compiler version with incremental-compilation-related performance fixes, and different configuration with and without LLVM assertions). These results are therefore worthless.


What is (roughly) the perf difference w/ LLVM assertions?

I’ve been struggling with some pathological compilation times for Imageflow, but so far most bugs can be traced back to the use of HashMaps in Cargo. Could we perhaps purge HashMaps as sources for Cargo fingerprints? Their randomized iteration order interferes with the creation of a meaningful hash (whenever more than one element exists, as can happen). I have pushed (my first) straw-man PR to sort values in Cargo:
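To illustrate the problem and the straw-man fix: a HashMap’s iteration order is randomized per process, so hashing its entries in iteration order yields a different fingerprint on every run. Sorting the entries first makes the hash deterministic. This is only a sketch of the idea, not Cargo’s actual fingerprinting code; the function name is hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hashing entries in HashMap iteration order would produce a different
// result on each run, because that order is randomized. Sorting the
// entries first gives a stable, order-independent fingerprint.
fn stable_hash(map: &HashMap<String, String>) -> u64 {
    let mut entries: Vec<_> = map.iter().collect();
    entries.sort(); // deterministic order, regardless of HashMap's random state

    // DefaultHasher::new() uses fixed keys, so the result is reproducible.
    let mut hasher = DefaultHasher::new();
    for (k, v) in entries {
        k.hash(&mut hasher);
        v.hash(&mut hasher);
    }
    hasher.finish()
}
```

Two maps with the same contents then hash identically no matter what order their entries were inserted or iterated in.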

I’d be willing to undertake such a refactoring with some guidance (perhaps comment on the PR above, as part 2 of this question is off-topic).


I’ve measured this today. An official Nightly build (which apparently has LLVM assertions) takes ~20% more time to do the same compilation than the same rustc commit built locally (with the default config where these assertions are not enabled). (This is a single data point: one set of crates on one machine, timed once with each compiler.)


New findings:

  • LLVM assertions are even more significant in release mode.
  • With a hot incremental compilation cache, compiling in release mode is faster than in debug mode.

I’ve changed my config for Servo development to use release mode, incremental compilation, and no LLVM assertions. (And made that last one the default.) All of these add up. Together with buying a faster CPU, I’ve reduced the cycle time in my typical workflow by an order of magnitude (almost down to a single minute, from over a dozen minutes). Many thanks to everyone who’s worked on this!


These assertions, how do you disable them? Do you have to build llvm yourself?


If you use official compiler builds (such as from rustup), they are disabled in the stable and beta channels and enabled on Nightly. If you build Rust yourself from sources (which includes building LLVM), they are disabled in the default configuration.

The Rust team recently added builders to their Travis-CI configuration to also make Nightly compilers with LLVM assertions disabled, for a few host platforms. These builds are published at a URL that includes SHA1 and HOST, where SHA1 is the hex hash of a merge commit into Rust’s master branch and HOST is a host “triple” like x86_64-unknown-linux-gnu.

However, rustup doesn’t know about these builds so you’d have to download them yourself. (You can use rustup toolchain link after that.)

If you build Servo, you don’t need to do anything, as I’ve made these new compiler builds the default. (Except for some of the CI builders, where we re-enable assertions in case they help catch a compiler bug.) Servo has had its own bootstrapping mechanism (written in Python) for downloading a Rust compiler since before rustup existed.


Why? A lot of systems just use file/directory modification times to detect which, if any, files were modified.


That still needs to look at the modification time of every file :slight_smile: Cargo already does this to skip recompiling entire crates. But it’s more difficult within a crate (or at least I assume it is) since the dependencies between source files can be complex in how they relate to compilation outputs.
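To make the point concrete: even mtime-based change detection has to stat every file, so it is O(number of files) per check. A minimal sketch, assuming a flat list of source paths (a real build system would walk nested directories and track per-output dependencies); the function name is made up for illustration.

```rust
use std::fs;
use std::path::{Path, PathBuf};
use std::time::SystemTime;

// Return the files modified after `last_build`. Note that we still have to
// stat *every* file to decide this, so the cost is O(number of files) even
// when nothing changed.
fn changed_since(files: &[&Path], last_build: SystemTime) -> Vec<PathBuf> {
    files
        .iter()
        .filter(|p| {
            fs::metadata(p)
                .and_then(|m| m.modified())
                .map(|mtime| mtime > last_build)
                .unwrap_or(true) // treat unreadable files as changed, to be safe
        })
        .map(|p| p.to_path_buf())
        .collect()
}
```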


In the Rust Language Server model, the IDE already knows what changed, so in theory the “find out what changed” step is unnecessary.


To put this more concretely: one can watch the filesystem for changes, which gives O(change), but it requires a long-lived process and O(N) setup.


Is it possible to apply incremental compilation only to crates within the workspace?

Travis often times out (at 45m) when running cargo. I’m wondering if incremental compilation + travis cache could help.

How good is automatic cleanup of the ‘incremental’ folder? Unbounded growth would be a problem.


I’m not sure this is exactly what you’re asking for, but you can add cache: cargo to your .travis.yml.
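For reference, a minimal .travis.yml fragment that enables this (only the two lines shown matter; the rest of your existing configuration stays as-is):

```yaml
language: rust
# Caches ~/.cargo and the target directory between builds,
# so dependencies aren't recompiled from scratch every time.
cache: cargo
```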

I’ve seen huge speedups on Travis before by not having to recompile all dependencies for every build.


It should be pretty good. It does a garbage collection every time the compiler is run in incremental mode.