Call for testing: Cargo sparse-registry

The Cargo nightly sparse-registry feature is ready for testing. The feature causes Cargo to access the crates.io index over HTTP, rather than git. It can provide a significant performance improvement, especially if the local copy of the git index is out-of-date or not yet cloned.

Overview

To try it out, add the -Z sparse-registry flag on a nightly-2022-06-20 or newer build of Cargo. For example, to update dependencies:

rustup update nightly
cargo +nightly -Z sparse-registry update

The feature can also be enabled by setting the environment variable CARGO_UNSTABLE_SPARSE_REGISTRY=true. Setting this variable will have no effect on stable Cargo, making it easy to opt-in for CI jobs.
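As an illustration, a CI configuration could set the variable unconditionally. This fragment uses GitHub Actions syntax purely as an example; any CI system's environment-variable mechanism works the same way:

```yaml
# Hypothetical CI fragment: harmless on stable toolchains, and it
# enables sparse-registry whenever the job runs nightly Cargo.
env:
  CARGO_UNSTABLE_SPARSE_REGISTRY: "true"
```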

You can leave feedback here on the internals thread.

If you see any issues, please report them on the Cargo repo. The output of Cargo with the environment variable CARGO_LOG=cargo::sources::registry::http_remote=trace set will be helpful in debugging.

Details

Accessing the index over HTTP allows crates.io to continue growing without hampering performance. The current git index continues to grow as new crates are published, and clients must download the entire index. The HTTP index only requires downloading metadata for crates in your dependency tree.
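To make the difference concrete, here is a sketch (not Cargo's actual code) of how a client maps a crate name to its per-crate file in the index; the sparse protocol fetches just these small files over HTTP, rather than cloning the repository that contains all of them:

```rust
// Sketch: the sparse HTTP index reuses the git index's path layout, so the
// metadata for "serde" lives at <index-root>/se/rd/serde. Directory prefixes
// are derived from the lowercased crate name.
fn index_path(name: &str) -> String {
    let prefix = name.to_lowercase();
    match prefix.len() {
        1 => format!("1/{name}"),
        2 => format!("2/{name}"),
        3 => format!("3/{}/{name}", &prefix[..1]),
        _ => format!("{}/{}/{name}", &prefix[..2], &prefix[2..4]),
    }
}

fn main() {
    assert_eq!(index_path("serde"), "se/rd/serde");
    assert_eq!(index_path("syn"), "3/s/syn");
    println!("ok");
}
```

A client resolving a dependency tree only needs to request the files for crates that actually appear in the tree, which is why the download is so much smaller than a full index clone.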

The performance improvement for clients should be especially noticeable in CI environments, particularly if no local cache of the index exists.

On the server side, the HTTP protocol is much simpler to cache on a CDN, which improves scalability and reduces server load.

The Cargo team plans to eventually make this the default way to access crates.io (though the git index will remain for compatibility with older versions of Cargo and external tools). Cargo.lock files will continue to reference the existing crates.io index on GitHub to avoid churn.

The -Z sparse-registry flag also enables alternative registries to be accessed over HTTP. For more details, see the tracking issue.
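As a sketch, configuring an alternative registry served over HTTP might look like this in .cargo/config.toml (the registry name and URL here are hypothetical; the sparse+ scheme prefix marks the index as a sparse HTTP index):

```toml
# Hypothetical alternative registry using the sparse protocol.
# Requires -Z sparse-registry on a current nightly Cargo.
[registries.my-registry]
index = "sparse+https://registry.example.com/index/"
```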


If you give this a try and see performance improvements, those of us who have been working on this for years would love to hear about them! If you see it not make a significant difference, we will appreciate the information. If you see significant degradation of performance, or incorrect behavior, that feedback would be critical. Incorrect behavior is probably a problem in Cargo. Bad performance is probably a problem with how crates.io is serving the index. Both projects have new code related to this initiative that could be buggy or misconfigured, so your feedback is required!

So far, all the anecdotes we've received say this makes an enormous practical improvement. Hopefully that continues to be true as user feedback starts coming in here.


Regrettably, it's hard to give any performance numbers with certainty, because our builds exhibit pretty large variation in run time (ah, the joys of shared computing infrastructure) plus caching effects. Anecdotally, though, we've historically seen runs spend several minutes updating the crates.io index, and now that step should take essentially zero time.

Implementation was trivial: Try using sparse registry for cargo by alex · Pull Request #7364 · pyca/cryptography · GitHub


It's a very noticeable improvement, especially over a high-latency network. Great job!

Maybe I'm just an idiot, but where can I find Cargo nightly-2022-06-20? In a fresh Docker container, running rustup toolchain install nightly followed by cargo +nightly -V gives me 8d42b0e87 2022-06-17, while rustc is at 2022-06-21. rustup update nightly informs me I'm already on the latest version.

Thanks!

Try rustup set profile minimal. If you have miri installed, or some other Rust tool that isn't available in every nightly, rustup will downgrade the nightly.

That's it, thanks! Edit: I lied, it is not. I still cannot get further than 2022-06-17 for Cargo:

root@d110ed58fa77:/# rustup set profile minimal
info: profile set to 'minimal'
root@d110ed58fa77:/# rustup update nightly
info: syncing channel updates for 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.63.0-nightly (dc80ca78b 2022-06-21)

info: checking for self-updates
root@d110ed58fa77:/# cargo +nightly -V        
cargo 1.63.0-nightly (8d42b0e87 2022-06-17)

So the status is that the 2022-06-21 toolchain ships the 2022-06-17 Cargo. I can confirm the same on x86_64-pc-windows-msvc:

❯ rustup update nightly; echo; rustc +nightly -Vv; echo; cargo +nightly -Vv
info: syncing channel updates for 'nightly-x86_64-pc-windows-msvc'

  nightly-x86_64-pc-windows-msvc unchanged - rustc 1.63.0-nightly (dc80ca78b 2022-06-21)

info: checking for self-updates

rustc 1.63.0-nightly (dc80ca78b 2022-06-21)
binary: rustc
commit-hash: dc80ca78b6ec2b6bba02560470347433bcd0bb3c
commit-date: 2022-06-21
host: x86_64-pc-windows-msvc
release: 1.63.0-nightly
LLVM version: 14.0.5

cargo 1.63.0-nightly (8d42b0e87 2022-06-17)
release: 1.63.0-nightly
commit-hash: 8d42b0e8794ce3787c9f7d6d88b02ae80ebe8d19
commit-date: 2022-06-17
host: x86_64-pc-windows-msvc
libgit2: 1.4.2 (sys:0.14.2 vendored)
libcurl: 7.83.1-DEV (sys:0.4.55+curl-7.83.1 vendored ssl:Schannel)
os: Windows 10.0.22000 (Windows 10 Education) [64-bit]

However, cargo 2022-06-17 from the 2022-06-21 toolchain does in fact include -Z sparse-registry support (I just tested that it works).

(The 2022-06-20 toolchain noted in the OP is notable for fixing miri, which was broken in 2022-06-17, which would block nightly miri users from upgrading yet... modulo off-by-one errors of course because IIRC the toolstate date is the build date but the tool version date is the date of the last commit (i.e. the previous day to the build).)

If you get

> cargo +nightly -Z sparse-registry update
error: unknown `-Z` flag specified: sparse-registry

then it is too old. If it runs, then you are new enough.

rustc 1.63.0-nightly (dc80ca78b 2022-06-21) 
> cargo +nightly -V 
cargo 1.63.0-nightly (8d42b0e87 2022-06-17)

seems to work for me.


Congrats to all those who worked on this! The git index for registries is really deeply baked into Cargo and implementing this in a way that works well was no small feat.

I've been playing around locally with this a bit, and so far I've got nothing much to report other than it's blazing fast; the few times I'd otherwise wait for registry git updates, I now use nightly with this flag instead.

Some concrete things I've tested are:

  • Running cargo update produced the same diff from a base lock file with and without a sparse registry (as expected)
  • Running cargo update on a fully updated git index took 0.3s and the sparse registry took 1.0s. (Of course, if I didn't already have the git index it would be massively slower.)
  • I poked around in debug logs to confirm that http/2, pipelining, and all the fancy bits are in use when downloading index descriptions
  • Running cargo fetch with everything already fetched (i.e. stressing how fast the "cache is full" path is) takes 0.24s for the git index vs 0.15s for the sparse index. When I switch between the two, though, the first run is consistently 0.4+ seconds (perhaps Cargo's internal caches are keyed on sparse-vs-git, causing this?). This isn't really an issue, I just wanted to point it out.

As a question I see that the index files are fronted by CloudFront with a max-age header of 10 minutes (if I'm reading that right). When a crate is published does that issue an invalidation or would we have to wait up to 10 minutes for the crate to show up in the index?


When a crate is published does that issue an invalidation or would we have to wait up to 10 minutes for the crate to show up in the index?

Currently you'd need to wait up to 10 minutes. I filed an issue on crates.io to track adding invalidations.

Edit: Wait is now 1 minute.

10 minutes is not acceptable. Thanks for bringing this to our attention! When we have invalidations the ttl should be way longer than 10 minutes, probably one day. Before we have invalidations the ttl should be like 1 min. PR to fix Edit, it is now fixed. CDN will refresh once per min.

This feature is wonderful and thanks to all that worked on it! Using Rust on some memory constrained arm64 devices was so painful due to the massive memory footprint of updating crates.io.

I've tried out this feature on my workflows and it works wonderfully!


A 1-minute latency would probably mean that publishing a set of interdependent crates from a workspace needs --no-verify when using the sparse registry. I'm not sure whether it'd be easier to crank that latency down sufficiently or to directly support batch publishing.

There's a plan to make crates-io purge CDN cache when a crate is published. Alternatively, Cargo could be taught to request freshly-published crates with a cache-buster that bypasses CDN cache.

Accurate cache invalidation on the server side is obviously ideal, since then a dumb client (e.g. curl+sh) works.

I think there's a surprising amount of room for client cleverness, though, for client-side cache invalidation (which can ideally be used both on a local cache and to cache-bust the server).

A client could theoretically attempt resolution using the aggressive cache and only cache-bust for resolution failures in the aggressive cache. This would lead to the use of potentially outdated dependencies higher in the tree, but cannot (citation needed) result in any overall resolution failures[1], since dependency edges are acyclic[2].
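As a toy illustration of that idea (every type and function here is hypothetical, not a real Cargo API): resolve against possibly-stale cached metadata, and re-fetch only the entries whose requirements the cache cannot satisfy.

```rust
use std::collections::HashMap;

// Hypothetical sketch: versions are modeled as bare integers and requirements
// as minimum versions, just to show the "cache-bust only on failure" shape.
fn resolve(
    requirements: &[(&str, u32)],            // (crate name, minimum version)
    cache: &mut HashMap<String, Vec<u32>>,   // possibly-stale index metadata
    fetch_fresh: impl Fn(&str) -> Vec<u32>,  // simulated network fetch
) -> Vec<(String, u32)> {
    requirements
        .iter()
        .map(|(name, min)| {
            let versions = cache.entry(name.to_string()).or_default();
            // Cache-bust only when the cached metadata cannot satisfy the
            // requirement; otherwise accept a potentially outdated answer.
            if !versions.iter().any(|v| v >= min) {
                *versions = fetch_fresh(name);
            }
            let best = *versions.iter().filter(|v| *v >= min).max().unwrap();
            (name.to_string(), best)
        })
        .collect()
}

fn main() {
    let mut cache = HashMap::from([
        ("serde".to_string(), vec![1]), // fresh enough: no fetch needed
        ("rand".to_string(), vec![0]),  // stale: requirement needs >= 1
    ]);
    let resolved = resolve(&[("serde", 1), ("rand", 1)], &mut cache, |_| vec![1, 2]);
    assert_eq!(
        resolved,
        vec![("serde".to_string(), 1), ("rand".to_string(), 2)]
    );
}
```

Note how "serde" resolves from the stale cache (potentially missing a newer version), while "rand" triggers a fresh fetch only because the cached data could not satisfy it.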

However, this also seems like effectively just a worse version of applying the existing proposed scheme for an incremental index, along with eager lookahead network requests assuming a no-(relevant)-change response. If network speed is low enough that getting a "no change" response for dependencies impacts the resolution time, you'll probably prefer using --locked anyway. (Disclaimer: I am not a web developer.)

Conclusion: caching is interesting and as fractally complex as you want it to be.


  1. That could not otherwise occur via the normal application of lockfiles and time due to the imperfection of real-world versioning schemes, anyway. ↩︎

  2. Ignoring development/testing dependencies, anyway. ↩︎

Optional dependencies allow for loops in normal dep edges too; try this set of crates and features:

> cargo add clap@2 textwrap@0.11 num-traits@0.2.11 libm@0.2 rand@0.6 packed_simd@0.3 --features 'textwrap@0.11/hyphenation num-traits@0.2.11/libm libm@0.2/rand rand@0.6/packed_simd packed_simd@0.3/sleef-sys'

(I ran into so much fun around these sorts of issues while trying to publish crates through IPFS which requires a true DAG; I want to try using sparse-registry with this too, but my old script is using deprecated ipfs functionality so I need to spend some time to redo it).


Seems to be working fine in our CI environment!

Cut the build time by about a minute on average, seems like a great improvement and I haven't found any issues in our conditions.


With the benefit of more runs, it looks to be saving about a minute per run -- so consistent with what other folks are seeing. I suspect the win would be even greater on Windows, which has worse filesystem performance.

One thing I noticed is that fetching now goes through several passes. Previously it fetched until 100% and was done, but now when it reaches 100% it starts fetching again, and the total number of items to fetch keeps increasing (1 -> 10 -> 50 -> 100 -> 130), so the progress percentage actually decreases, which feels weird.

I feel like the total should only be shown once there is a definite answer for the number of items to fetch, rather than being shown while it keeps increasing.