Thinking out loud here about the source for each of these metrics.
I’m pretty sure most of these could come from the GitHub, TravisCI, crates.io, Twitter, reddit, and buildbot APIs. I can definitely dig into the perf-rustc and crates.io repos to figure out those APIs. There are, however, a few for which I don’t see a clear path forward:
Release channel health. What’s the current release? How many days has nightly been broken?
Is there an API for releases? It doesn’t look like buildbot exposes that. Perhaps scraping the archive page?
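Scraping might not even be necessary if probing for the dated channel manifests is acceptable. A minimal sketch of the “days since last nightly” metric, assuming the archive keeps per-date directories under static.rust-lang.org/dist/ (that layout isn’t a documented API, so this could break):

```python
import datetime
import urllib.error
import urllib.request

# Assumed archive layout: one dated directory per published nightly.
MANIFEST = "https://static.rust-lang.org/dist/{date}/channel-rust-nightly.toml"

def nightly_exists(date):
    """True if a nightly manifest was published for this date."""
    req = urllib.request.Request(MANIFEST.format(date=date.isoformat()),
                                 method="HEAD")
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError:
        return False

def days_since_last_nightly(max_back=90):
    """Count consecutive days, walking backwards, with no nightly."""
    day = datetime.date.today()
    missed = 0
    while missed < max_back and not nightly_exists(day):
        missed += 1
        day -= datetime.timedelta(days=1)
    return missed

print("consecutive days without a nightly:", days_since_last_nightly())
```

A missing manifest only tells us a nightly wasn’t published, not *why*, but as a proxy for “nightly is broken” that may be good enough.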
Firefox has a “release health” dashboard that’s updated frequently. It would be great to have the same for Rust.
Are there GitHub Issues labels that would provide this info? It looks like the Firefox dashboard relies on per-bug affected-version metadata in Bugzilla. I only see generic regression labels in GitHub Issues, but I’m not very familiar with the labeling conventions there.
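If those labels are good enough, counts are easy to pull from the GitHub search API. A sketch; the label names below are my guess at what rust-lang/rust uses, so treat them as assumptions:

```python
import json
import urllib.request

# Assumed label names -- verify against the actual rust-lang/rust labels.
LABELS = [
    "regression-from-stable-to-nightly",
    "regression-from-stable-to-beta",
    "regression-from-stable-to-stable",
]

def open_count(label):
    """Number of open issues carrying the given label."""
    url = ("https://api.github.com/search/issues"
           "?q=repo:rust-lang/rust+state:open+label:" + label)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["total_count"]

for label in LABELS:
    print(label, open_count(label))
```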
Compile-time / runtime % change per day
I think perf-rustc is just measuring compile time. If that’s the case, I don’t think it’s a very good proxy for runtime performance, because the compiler’s implementation shifts day-to-day. I don’t know if this is the only discussion, but a separate runtime benchmark suite came up in issue 31265.
Also, it looks like perf-rustc is updated pretty consistently. I assume it’s automated? Is it running on bare metal, or is it virtualized? I suspect running runtime microbenchmarks on a VM could be problematic (noisy neighbors, inconsistent timing).
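Either way, once there’s a per-day series, the % change metric itself is trivial. A minimal sketch, assuming we can export a {date: seconds} series from perf-rustc (the data below and its format are made up):

```python
# Hypothetical {date: total compile time in seconds} series.
timings = {
    "2016-05-01": 142.0,
    "2016-05-02": 139.5,
    "2016-05-03": 140.1,
}

def pct_change_per_day(series):
    """Yield (date, percent change vs. previous day) pairs."""
    days = sorted(series)
    for prev, cur in zip(days, days[1:]):
        yield cur, 100.0 * (series[cur] - series[prev]) / series[prev]

for day, delta in pct_change_per_day(timings):
    print("%s %+.2f%%" % (day, delta))
```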
Performance improvements/losses per contributor
Are the per-PR builds running on infrastructure with predictable performance, and are perf-rustc-style numbers exposed for them? I don’t see any timing numbers on the buildbot pages, but I’m not really familiar with those interfaces. If such numbers existed, the per-contributor attribution would be simple; see the sketch below.
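A minimal sketch of that aggregation, where the input shape and field names are entirely hypothetical until a per-PR perf feed exists:

```python
from collections import defaultdict

# Hypothetical (author, percent_change) pairs from a future per-PR perf feed.
pr_perf = [
    ("alice", -1.8),  # negative = compile time improved
    ("bob", +0.4),
    ("alice", -0.3),
]

# Sum each contributor's deltas across their merged PRs.
by_author = defaultdict(float)
for author, delta in pr_perf:
    by_author[author] += delta

# Biggest cumulative improvement first.
for author, total in sorted(by_author.items(), key=lambda kv: kv[1]):
    print("%-8s %+.2f%% cumulative" % (author, total))
```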
- Downloads per day
- Downloads per day per artifact
Do you mean in the context of rustup and website downloads? Is this the backend support you mentioned needing?
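For crate downloads at least, per-day data already seems to be exposed by crates.io. A sketch; I’m assuming the shape of the /downloads response from watching the site’s own requests, since I couldn’t find it documented:

```python
import json
import urllib.request

def daily_downloads(crate):
    """Total downloads per day for one crate, summed across versions."""
    url = "https://crates.io/api/v1/crates/%s/downloads" % crate
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    per_day = {}
    # "version_downloads" is the observed (not documented) response key.
    for row in data["version_downloads"]:
        per_day[row["date"]] = per_day.get(row["date"], 0) + row["downloads"]
    return per_day

for date, count in sorted(daily_downloads("libc").items()):
    print(date, count)
```

Rustup and website downloads presumably have no equivalent endpoint yet, which is where the backend work would come in.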
Top error codes
Where will the rustup telemetry be stored, and will it be accessible over an API?
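Independent of where the telemetry lands, the tally itself could be computed from rustc’s JSON diagnostics (rustc --error-format=json emits one JSON object per diagnostic on stderr). A sketch:

```python
import json
import sys
from collections import Counter

def top_error_codes(lines):
    """Count error codes appearing in a stream of JSON diagnostics."""
    counts = Counter()
    for line in lines:
        try:
            diag = json.loads(line)
        except ValueError:
            continue  # skip anything that isn't a JSON diagnostic
        code = (diag.get("code") or {}).get("code")  # e.g. "E0308"; may be null
        if code:
            counts[code] += 1
    return counts

# Usage: rustc --error-format=json main.rs 2>&1 | python top_errors.py
for code, n in top_error_codes(sys.stdin).most_common(10):
    print(code, n)
```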