I think the best way to approach this would be to try and model it in terms of probabilities. A rough outline of a design could look something like this:
On crates.io, every user account has a trust score in the range (0.0, 1.0) indicating how likely that person is to be non-malicious. By default, every new user account has a trust score of, say, 0.99. Rust team members would have a higher score of, say, 0.999. In cargo’s global config you’re able to override these trust scores and say that you trust (eg.) @Manishearth by 0.999.
Users are able to endorse each other. I can tell crates.io that I trust @bob by b, then anyone who trusts me by x and would otherwise trust @bob by y would now trust @bob by max(y, x * b). (Or maybe some other formula. We’d need to do the math / game theory to work out what the correct probability would be)
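The endorsement rule above can be sketched like this (the function name is mine, not a real crates.io API, and as noted the formula itself is up for debate):

```rust
/// My trust in @bob after an endorsement: if I trust the endorser by `x`
/// and the endorser trusts @bob by `b`, my prior trust `y` becomes
/// max(y, x * b). Illustrative sketch only.
fn endorsed_trust(prior: f64, trust_in_endorser: f64, endorsement: f64) -> f64 {
    prior.max(trust_in_endorser * endorsement)
}

fn main() {
    // Default trust in @bob is 0.99. I trust @alice by 0.999 and
    // @alice endorses @bob by 0.999, so my trust in @bob rises to
    // 0.999 * 0.999 = 0.998001.
    let trust = endorsed_trust(0.99, 0.999, 0.999);
    assert!((trust - 0.998001).abs() < 1e-12);

    // A weak endorsement can never lower trust below the prior:
    assert_eq!(endorsed_trust(0.99, 0.5, 0.9), 0.99);
}
```

Note that with `max`, an endorsement can only ever raise my trust in someone, never lower it.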
When you build a crate, cargo looks at all the authors in the dependency tree and multiplies all their scores together to get the probability of the crate not containing any malicious code.
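That multiplication step is just a product over the tree, assuming each author is independently non-malicious with their given trust score (again, a sketch, not actual cargo behaviour):

```rust
/// Probability that a build contains no malicious code, assuming each
/// author in the dependency tree is independently non-malicious with
/// probability equal to their trust score.
fn build_trust(author_scores: &[f64]) -> f64 {
    author_scores.iter().product()
}

fn main() {
    // Three default-trust (0.99) authors in the dependency tree:
    let p = build_trust(&[0.99, 0.99, 0.99]);
    assert!((p - 0.970299).abs() < 1e-9);
}
```

This also makes the scaling problem visible: with a default score of 0.99, a tree with a few dozen distinct authors drops below most plausible thresholds, which is exactly why endorsements need to pull common authors well above the default.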
Every user account on crates.io then also has two reliability scores in the range (0.0, 1.0) indicating how likely a given byte of safe/unsafe code is to not accidentally introduce an attack vector. New users get some default scores, same as with trustworthiness scores, and the default safe reliability score is significantly higher than the default unsafe reliability score.
When building, cargo looks at the amounts of (safe/unsafe) code introduced by each user and uses this to calculate the probability of there being no security flaws in the code.
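One way to read "per byte" here: treat each byte as independently flaw-free with probability equal to the author's reliability score, so the crate's overall probability is a product of powers. The scores and the independence assumption below are mine, purely for illustration:

```rust
/// Probability that a crate introduces no accidental security flaw,
/// reading the reliability scores as per-byte probabilities and
/// assuming bytes fail independently (one way to formalise the scheme).
fn no_flaw_probability(
    safe_bytes: u32,
    unsafe_bytes: u32,
    safe_score: f64,   // per-byte score for safe code, e.g. 0.9999995
    unsafe_score: f64, // lower default for unsafe code, e.g. 0.99999
) -> f64 {
    safe_score.powi(safe_bytes as i32) * unsafe_score.powi(unsafe_bytes as i32)
}

fn main() {
    // 10 kB of safe code plus 100 bytes of unsafe code:
    let p = no_flaw_probability(10_000, 100, 0.999_999_5, 0.999_99);
    assert!(p > 0.99 && p < 1.0);
}
```

Because the score is raised to the power of the byte count, the per-byte defaults would have to sit extremely close to 1.0 for any real crate to come out with a usable score.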
Users can also review crates and endorse them as being reliable. If the crate author has a reliability score of a, and the reviewer has a reliability score of r, then the code in the crate inherits a score of a + (1 - a) * r. This allows multiple reviews to “stack up”, with more reliable reviewers adding more to the reliability score of the crate.
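The stacking behaviour falls straight out of the formula: each reviewer covers a fraction of whatever risk remains. A quick sketch (names mine):

```rust
/// Reliability of a crate's code after one review: a crate scored `a`
/// reviewed by someone scored `r` becomes a + (1 - a) * r, i.e. the
/// reviewer removes a `r`-sized fraction of the remaining risk.
fn with_review(crate_score: f64, reviewer_score: f64) -> f64 {
    crate_score + (1.0 - crate_score) * reviewer_score
}

fn main() {
    // Reviews stack: each 0.9-reliability reviewer removes 90% of the
    // remaining risk, so 0.9 -> 0.99 -> 0.999.
    let once = with_review(0.9, 0.9);
    let twice = with_review(once, 0.9);
    assert!((once - 0.99).abs() < 1e-12);
    assert!((twice - 0.999).abs() < 1e-12);
}
```

One nice property: the result never exceeds 1.0 and never drops below the author's own score, so a review can only help.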
Cargo has global config settings of minimal trustworthiness and minimal reliability which restrict what you’re allowed to build/install. It also has the ability to give you a read-out of the scores of a crate and audit them to see how they were calculated.
In addition to all this, I’d kind of like to have a trusted subset of Rust which disallows unsafe code and disallows depending on crates which contain non-trusted code. The standard library would be non-trusted but there could maybe be an alternate, trusted standard library which doesn’t contain things like File. Trusted crates could then be depended on without affecting the trustworthiness of the depending crate.
Edit: Actually, since we’re making formal models of Rust, we could integrate cargo/crates.io with some formal verification tool and allow users to publish correctness proofs with their code. This could be a good alternative to "trusted" Rust.