I'm very much in favor of a custom tool for the Rust project to track such metrics.
It fits a general notion I see all over the place: a generic view of "project tracking" doesn't exist - the tool should follow the focus of the organisation.
So, my very quick reaction to the tool is: great!
FWIW, I avoid the word "health" for such investigations: a project may very much be "healthy" even if some of its corners are dirty, while the common intuition around "health" draws from medicine and gives a lot less leeway.
My biggest piece of feedback, though, is that such tools are sharp and need to be used with care.
It may help if I give some context from previous Rust leadership experience so that it isn't lost. Early on in the Rust project, we had access to a tool called "Cauldron". Looking at it exposed a number of problems with using such a tool:
1. We were aware of most of the (non-)problems it showed.
2. We were unaware of crucial problems that the tool did not show.
3. In all the metrics of the tool (and other metrics we looked at), we found phenomena that were hard to explain - such as a sudden increase in crate growth rate.
4. The tool doesn't show things that don't exist yet (at least to our awareness) but are desirable.
This has led to the joke at one all-hands that we need a "crystal ball working group".
(1) is conceptually the easiest problem to deal with: in the case of non-problems, the metric should be silenced and only be shown again if there's a relevant change. The problem here, drawing from my background as a search and dashboard consultant: agreeing on what is "relevant" and defining it in software is a laborious process.
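To make that concrete, here is a minimal sketch of what "only show on a relevant change" might look like. Every name and number in it is made up for illustration, and each choice - the baseline window, the threshold, the definition of "change" itself - is exactly the kind of thing that has to be argued over per metric:

```rust
/// Toy "relevance" check for a single metric time series. The baseline
/// (mean of all prior observations), the relative threshold, and even
/// the definition of "change" are assumptions for illustration only.
fn is_relevant_change(history: &[f64], threshold: f64) -> bool {
    // Need at least one baseline observation plus the current value.
    let Some((&current, baseline)) = history.split_last() else {
        return false;
    };
    if baseline.is_empty() {
        return false;
    }
    // Baseline: mean of all previous observations.
    let mean = baseline.iter().sum::<f64>() / baseline.len() as f64;
    if mean == 0.0 {
        // Any movement away from a zero baseline counts as relevant.
        return current != 0.0;
    }
    // Relative deviation of the current value from the baseline mean.
    ((current - mean) / mean).abs() > threshold
}

fn main() {
    // Hypothetical weekly counts of newly published crates.
    let weekly_new_crates = [120.0, 118.0, 125.0, 122.0, 190.0];
    // At a 20% threshold the jump to 190 is flagged; at 60% it is not.
    assert!(is_relevant_change(&weekly_new_crates, 0.2));
    assert!(!is_relevant_change(&weekly_new_crates, 0.6));
    println!("relevance check passed");
}
```

Even this toy version quietly commits to a baseline model and a relative threshold; swap in a different team and both choices get re-litigated.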
(2) is similar: if the problem is tangible, a metric should be designed and inserted. The same problem as in (1) applies.
Both of these combined point to the problem that such a tool needs to be easily changeable and flexible (by the way, this is one of the reasons why many companies still rely on spreadsheets). A meta-metric for the use of such a tool is how often the gathered metrics change (or are even removed). This also maps to an effect called "dashboard fatigue": an ever-extended dashboard with all green lamps quickly loses relevance. The same goes for a dashboard where a metric is tracked that is currently not being improved. How to deal with this is very much bound to the organisation and its daily state.
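As a hedged illustration of that meta-metric: given some changelog of the dashboard's own metric definitions (the log shape here is entirely hypothetical), the churn per month is trivial to compute - the hard part is acting on a flat line:

```rust
use std::collections::BTreeMap;

/// One entry in a hypothetical changelog of the dashboard's own metric
/// definitions (added, modified, or removed). The shape of this log is
/// an assumption for illustration only.
struct MetricChange {
    month: String, // e.g. "2024-03"
    #[allow(dead_code)] // kept only to show the intended log shape
    metric: String, // e.g. "weekly_new_crates"
}

/// Meta-metric sketch: how many metric definitions changed per month.
/// Months that stay near zero for long stretches suggest the dashboard
/// has stopped following the organisation.
fn definition_churn(log: &[MetricChange]) -> BTreeMap<&str, usize> {
    let mut churn = BTreeMap::new();
    for change in log {
        *churn.entry(change.month.as_str()).or_insert(0) += 1;
    }
    churn
}

fn main() {
    let log = vec![
        MetricChange { month: "2024-02".into(), metric: "open_prs".into() },
        MetricChange { month: "2024-03".into(), metric: "weekly_new_crates".into() },
        MetricChange { month: "2024-03".into(), metric: "ci_queue_length".into() },
    ];
    for (month, count) in definition_churn(&log) {
        println!("{month}: {count} definition change(s)");
    }
}
```

A `BTreeMap` keeps the months sorted, which is all a report like this needs; anything fancier belongs to the per-organisation discussion above.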
(3) is also interesting: it needs a lot of experience and research, potentially talking to people. Solutions here are surprisingly simple, though: often a collection of notes on interesting things we have seen, plus a group of people interested in investigating them, is enough. The solution may literally be found over a drink in a bar.
(4) is something to be keenly aware of, as it shows the boundaries of such a tool. And those boundaries are large. That means that users of such a tool need to be strongly incentivised to turn away from it regularly and use other methods of research (surveying, user interviews, industry research).
The tl;dr may sound trivial, but usage of such tools needs a lot of education, and we need to find a way to provide that education to users. It also needs a lot of work and brain cycles. This is not at all an objection or a warning: if we can achieve that, we give users a valuable skill and boost the organisation.