The development of Rust as a modern language is outstanding. Many developers love it because they can get things done in Rust, and a lot of focus goes into performance and (memory) safety. Still, it's hard for Rust to become a first-choice technology, especially on the web.
Companies are afraid of security issues, and not only because of undefined behavior and memory bugs, but because of what I would call a lack of ecosystem maturity. For them, a safe(r) memory model offers little security benefit on its own: so-called "safe Rust code" that is free of memory issues may still open a backdoor through a simple logic bug.
I thought I had read something about a working group for crate security, but I can't find it anymore. All in all, I'm interested in plans for the next steps: how Rust could become more widely used at companies, so that they start contributing, improving crates, or getting involved in the Rust community at all.
Right now it's a catch-22: how should something become mature if it isn't commonly used? It's hard to argue why sticking with Rust is a better option than other technologies if performance is not a big deal. As a developer I would always vote for Rust, but I understand why companies stay with, say, Java and Spring rather than Rust and Rocket.
How can we fix this, and do we even want to fix this? I, at least, would like to.
I've found them as well, but to me it looks like they are more focused on lower-level unsafe Rust security topics, while I would like to see some kind of foundation that reviews and monitors the "well-known" crates. I'm not quite sure whether that is in scope for this WG.
Indeed no; I don't think they have enough time (or personnel) to devote to that. The closest other relevant WGs or subteams I can think of are the Crates.io team, which ends up handling the occasional malicious crate, and the Security Response WG, which (in addition to such malicious crates, apparently) handles security of official Rust products (rustc, Cargo, the standard library, ...) — but I don't think either sounds like what you want.
I wonder whether "The Update Framework" ("TUF") would be relevant to you:
I think the best match for my search is cargo-crev. If I read it correctly, it lets people code-review Cargo crates and publish the reviews as signed reports. I'll try to find out how it works and how I might contribute as well.
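For anyone else looking into it, my understanding of the basic flow from the cargo-crev getting-started guide is roughly the following (the proof-repo URL and crate name are placeholders):

```
# one-time setup: install and create an ID backed by your own
# fork of the crev-proofs template repository
cargo install cargo-crev
cargo crev id new --url https://github.com/YOUR-USERNAME/crev-proofs

# check how much of your dependency tree is covered by reviews
# from people in your web of trust
cargo crev crate verify

# review a crate's source and publish the signed review proof
cargo crev crate review some-crate
cargo crev repo publish
```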
@daaitch speaking as one of the leads of the Secure Code WG, we don't do regular audits of high-profile crates, beyond the work in the Safety Dance project, but something like that would be quite interesting if someone were willing to spearhead it.
If someone wanted to start a general code review project for high-profile crates within the WG, I'm sure we'd have quite a few interested contributors who could create audits to submit to something like cargo-crev.
Perhaps something like a "crate of the month" club where we have several people look at the latest releases of high-profile crates?
It might also be nice to encourage cargo-crev reviewers who have the capacity for a review to look at the crates of the week.
Some sort of dashboard of "crates in want of reviews" could also be interesting: weight crates by factors including lib.rs rank, rdeps from reviewed crates, confidence in existing reviews, etc. You could even potentially gamify it by (opt-in) ranking reviewers by some combination of reviews done, thoroughness of reviews, impact of reviews (using the previous), and trust in reviews (via the existing web-of-trust model, to weight how much ranked reviewers trust other ranked reviewers).
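To make the weighting concrete, here is a toy sketch of such a priority score; all field names and weights are made up for illustration, and a real dashboard would tune (and normalize) them:

```rust
/// Per-crate inputs a review dashboard might track (hypothetical).
struct CrateStats {
    lib_rs_rank: f64,         // popularity, normalized to 0.0..=1.0 (1.0 = top)
    rdeps_from_reviewed: u32, // reverse deps that are themselves reviewed
    review_confidence: f64,   // aggregate confidence in existing reviews, 0.0..=1.0
}

/// Higher score = more in need of a review. The weights are arbitrary
/// placeholders, and the exposure term is deliberately unnormalized.
fn review_priority(s: &CrateStats) -> f64 {
    let popularity = s.lib_rs_rank;
    // log-dampen the reverse-dependency count so huge crates
    // don't completely dominate the queue
    let exposure = (1.0 + f64::from(s.rdeps_from_reviewed)).ln();
    let coverage_gap = 1.0 - s.review_confidence;
    0.5 * popularity + 0.3 * exposure + 0.2 * coverage_gap
}

fn main() {
    let stats = CrateStats {
        lib_rs_rank: 0.9,
        rdeps_from_reviewed: 120,
        review_confidence: 0.2,
    };
    println!("priority = {:.3}", review_priority(&stats));
}
```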
There's definitely space for more interesting developments here.
And it will become even more complicated than just having the community do code reviews. Regarding the reviews, the reviewers, and the "trust levels", there are different perspectives on security: one crate might be perfectly okay for web development but bad for critical-systems development.
Security discussions often focus on attacks, or, in systems languages, on undefined behavior, overflows, timing attacks, and so on. That is perfectly fine, but I think security actually starts much earlier. Someone using a library incorrectly because of bad documentation can lead to a DoS: a panic that hasn't been documented, or one that makes no sense for the library to raise at all, is maybe a more likely source of a security issue.
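To make that concrete, here is a toy example (the `Buffer` type is made up): the difference between a panic a user only discovers in production and one that is documented, with a non-panicking alternative for untrusted input.

```rust
pub struct Buffer {
    bytes: Vec<u8>,
}

impl Buffer {
    /// Returns the byte at `index`.
    ///
    /// # Panics
    ///
    /// Panics if `index >= self.len()`. When the index comes from
    /// untrusted input, prefer [`Buffer::get`], which cannot panic
    /// and therefore cannot be turned into a denial of service.
    pub fn at(&self, index: usize) -> u8 {
        self.bytes[index]
    }

    /// Non-panicking variant of [`Buffer::at`].
    pub fn get(&self, index: usize) -> Option<u8> {
        self.bytes.get(index).copied()
    }

    pub fn len(&self) -> usize {
        self.bytes.len()
    }
}
```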
I thought of a crate review roadmap, for example:
- stage-0: basic checks
  - seriousness evaluation: is it just a placeholder crate? no features? almost empty? (might also be interesting for the crates.io team to check)
  - malicious-code scan: does the crate actually contain what it says on the tin? Almost every Rustacean can contribute a `thoroughness: none` review here, and it already helps a lot.
- stage-1: external-view checks
  - user-land pub API evaluation
    - components and API
    - error handling
  - documentation quality checks
    - panic documentation
    - blocking-code hints (for usage in async execution)
    - O(..) runtime complexity hints
- stage-2: feature checks
  - integration
    - RFC implementation and commenting
    - best practices
  - dependency checks
    - preferring well-known crates over homegrown code, versus
    - evaluating whether each transitive dependency is really necessary
  - unsafe usage evaluation and SAFETY documentation (see the sketch after this list)
- stage-3: encryption / sensitive-data handling checks
  - encryption/hashing strength
  - credentials/secrets handling
  - other vulnerability checks
- stage-4: critical-system checks
  - oom_panic
  - other critical-system requirements

Other outcomes:

- security badges, defined and organized by the working group
  - dev-crate: okay for development tasks, offline usage
  - web-crate: okay for internet services
  - system-crate: okay for operating-system development, e.g. drivers or kernels
  - and more...
So, depending on their needs, someone could allow only stage-3 crates, use a web-crate as their primary server implementation, or require a stage-4 system-crate for Linux kernel development.
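For the unsafe/SAFETY item in stage-2, what I have in mind is the convention the standard library already follows: a `# Safety` section in the docs stating the caller's obligations, and a `// SAFETY:` comment on each unsafe block justifying why they hold. A minimal sketch with a made-up function:

```rust
/// Splits `slice` at `mid` without bounds checks.
///
/// # Safety
///
/// The caller must guarantee that `mid <= slice.len()`.
pub unsafe fn split_unchecked(slice: &[u8], mid: usize) -> (&[u8], &[u8]) {
    debug_assert!(mid <= slice.len());
    // SAFETY: the caller guarantees `mid <= slice.len()`, so both
    // ranges are in bounds of `slice`.
    unsafe { (slice.get_unchecked(..mid), slice.get_unchecked(mid..)) }
}
```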
You should check out Criticality Score, a project from the Open Source Security Foundation (OpenSSF) for identifying critical repositories to prioritize when working on software supply-chain security.
Evaluating against other people's abstract use cases is really hard. You're better off doing secure code auditing, looking for common CWEs. So far, ANSSI-FR has published its own Rust secure coding guidelines. If you're auditing unsafe Rust, then Miri runs (where possible), plus an expectation that every `unsafe` be justified with an explanation in the code, would help as well.
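For reference, a minimal Miri run over a crate's test suite looks like this. Miri is nightly-only, and "where possible" matters: crates that use FFI or certain platform APIs won't run under it.

```
rustup +nightly component add miri
# executes the test suite in Miri's interpreter,
# flagging undefined behavior in unsafe code
cargo +nightly miri test
```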
In the context of the RustSec project, we've had some complaints about our use of CVSS for scoring library vulnerabilities, stemming from the larger problem that CVSS is widely acknowledged to be poorly suited to libraries.
There has been some interest in using the Exploit Prediction Scoring System (EPSS) as an alternative method of estimating severity which is more applicable to the library use case:
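For context, EPSS scores are published by FIRST.org through a public JSON API, so they are easy to pull programmatically. A quick sketch of fetching one, assuming the reqwest crate (with its `blocking` and `json` features) and serde_json:

```rust
// Query the FIRST.org EPSS API for a CVE's exploit-prediction score.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cve = "CVE-2021-44228";
    let url = format!("https://api.first.org/data/v1/epss?cve={cve}");
    let body: serde_json::Value = reqwest::blocking::get(url)?.json()?;
    // The API returns scores as strings inside a `data` array.
    if let Some(entry) = body["data"].get(0) {
        println!("{cve}: epss={} percentile={}", entry["epss"], entry["percentile"]);
    }
    Ok(())
}
```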