Adding a Reliability Rating System to Crates.io

Hello everyone,

In light of the recent supply chain attacks and the appearance of malicious crates, I was wondering if it would be possible to add a reliability rating system to crates.io — something like [safe / not safe] or [reviewed by x1, x2, …].

This could provide an extra layer of security, and if a crate has a low rating, it would encourage developers to be more cautious before using it.


:artist_palette: Mockup of a Reliability Rating System on crates.io

1. Simple Visual Indicator

  • Each crate would display a small badge next to its name/version:
    • :white_check_mark: Safe (validated / well-reviewed)
    • :warning: Needs Review (few or no reviews)
    • :cross_mark: Not Safe (reports or known issues)

Example on a crate’s page:

my_crate v1.2.3  
[Badge: ✅ Safe | Reviewed by 3 trusted users]

2. Community Rating System

  • Logged-in users (via their crates.io account) can give a rating:
    • Safe / Not Safe
    • Optional: comment explaining why
  • Votes are aggregated into a global score.

Example:

Reliability: 85% (42 votes)  
Reviewed by: alice, bob, charlie

3. Trust Level

Beyond raw votes, there could be trusted reviewers:

  • Experienced maintainers or ecosystem contributors (e.g., RustSec, Rust teams).
  • Their reviews would carry more weight in the aggregate score (see the sketch after this example).

Example:

Trusted Reviewers:  
- alice (RustSec team)  
- bob (crate maintainer)
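
To make the weighting concrete, here is a rough sketch of how trust-weighted vote aggregation could work. Everything in it is illustrative: the Vote type, the weights, and the formula are placeholders, not an actual crates.io design.

// Illustrative only: votes weighted by reviewer trust level.
struct Vote {
    safe: bool,
    weight: f64, // e.g. 1.0 for a regular account, 3.0 for a RustSec/team reviewer
}

fn reliability_score(votes: &[Vote]) -> Option<f64> {
    let total: f64 = votes.iter().map(|v| v.weight).sum();
    if total == 0.0 {
        return None; // no data: show "Needs Review" instead of a percentage
    }
    let safe: f64 = votes.iter().filter(|v| v.safe).map(|v| v.weight).sum();
    Some(100.0 * safe / total)
}

fn main() {
    let votes = [
        Vote { safe: true, weight: 3.0 },  // alice (RustSec team)
        Vote { safe: true, weight: 1.0 },  // regular user
        Vote { safe: false, weight: 1.0 }, // regular user
    ];
    println!("Reliability: {:.0}%", reliability_score(&votes).unwrap()); // 80%
}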

4. Automatic Flagging

  • Integration with the RustSec Advisory DB:
    • If a vulnerability is published, the badge automatically becomes :warning: or :cross_mark: (see the sketch after this list).
  • Automatic detection of suspicious behavior (e.g., obscure dependencies, obfuscated code, mass publishing).
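
As a rough illustration of how advisories and review counts could drive the badge, here is a simplified sketch. The Advisory and Severity types below are stand-ins for this example, not the real RustSec Advisory DB data model.

// Simplified stand-ins for advisory data; not the actual RustSec types.
#[derive(Debug)]
enum Badge { Safe, NeedsReview, NotSafe }

enum Severity { Low, Medium, High, Critical }

struct Advisory { severity: Severity, withdrawn: bool }

fn badge_for(advisories: &[Advisory], review_count: usize) -> Badge {
    let active: Vec<&Advisory> = advisories.iter().filter(|a| !a.withdrawn).collect();
    if active.iter().any(|a| matches!(a.severity, Severity::High | Severity::Critical)) {
        Badge::NotSafe      // :cross_mark: known serious vulnerability
    } else if !active.is_empty() || review_count == 0 {
        Badge::NeedsReview  // :warning: open advisory, or no reviews yet
    } else {
        Badge::Safe         // :white_check_mark:
    }
}

fn main() {
    let advisories = [Advisory { severity: Severity::High, withdrawn: false }];
    println!("{:?}", badge_for(&advisories, 3)); // NotSafe
}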

5. Dedicated Review Page

Each crate would have an extra tab: Reliability

  • History of votes & reviews
  • Reasons for reports
  • Comparison between versions

:locked: Goal of the system:

  • Complement technical security (audit, RustSec) with a social validation layer.
  • Encourage transparency and accountability in the community.
  • Stay lightweight (not an “app store style” rating) but clear enough to raise awareness.


What do you think?

1 Like

note that cargo crev, cargo audit and cargo vet are all existing tools for this type of thing.

11 Likes

Here, the concept is a bit different; it is more community-oriented. But why not also include the results of these tools in the final evaluation?

lib.rs will show the results of all these tools under the "audit" tab. Also note that crev is already crowdsourced.

6 Likes

A percentage rating is easily gamed by malicious actors. You need some way of trusting the individual reviewers.

This is being worked on, I think. Although it is for consumption by the crates.io team itself, not for end users.

4 Likes

Any social validation layer needs to have moderation support to prevent spam, fake reviews, harassment, etc. Neither the moderation team nor the crates.io team has the capacity to take this on. If you're interested in this idea, contributing to cargo crev or cargo vet and using those tools is your best option.

12 Likes

I don't think this is ever something that would be appropriate for the main website to say. It's like how auditory crossing signals say "walk sign is on", not "safe to cross". (Other discoverability frontends being more opinionated is good, though, so there can be different ones with different metrics and they can all compete.)

Big fan of showing other metrics that can help people spot something they do or don't want to trust, though. Things like "first uploaded recently" can help people go "wait a minute, this should be an old crate" without the UI having to distinguish between "cool new crate" and "suspicious new crate".

12 Likes

In my experience the hard part isn't the UI to show this, but getting good data to show.

Code reviews are scarce, and they tend to lag behind the latest release. So the real problem here is how to collect more reviews that are meaningful and come from trusted users.

A simple vote button would be just a popularity contest, not much stronger than GitHub stars, the number of downloads, or the number of reverse dependencies.

To make claims about actual security you have to carefully review the source code, and the result applies only to a single release. This is because even reputable crate authors could have their accounts hacked, or well-known good crates could be taken over by bad actors (like xz vs Jia Tan).

It's also very hard to decide who can be trusted to approve crates. There are 50000+ accounts publishing on crates.io. You're not going to know all these people and their reputations. It's not too hard to make a handful of sockpuppet GitHub accounts to approve your own crates.

cargo-crev has negative reviews, but this turned out to be contentious too. A negative review looks very bad (especially when the data is sparse and most crates won't have any), but some reviewers are harsher than others, and give bad reviews for minor or subjective problems.

However, rustsec integration is an easy one. That's a reputable source, and vulnerabilities are pretty objective. Just be careful not to imply that crates with vulnerabilities in the past are worse than crates that had no vulnerabilities, because it's possible to have no rustsec reports just because nobody cared to check.

14 Likes

Your points are all valid, and it’s true that this feature would involve costly moderation. Separately, I’d like to suggest another potential improvement for crates.io regarding source code integrity and traceability.

Currently, when we run cargo publish, crates.io accepts an archive built from the local directory. This means that the published package does not necessarily match any commit in the project’s version control system (e.g., GitHub, GitLab).
As a result:

  • It’s possible to publish a crate that does not correspond to any tagged commit.
  • Users browsing the repository may see a tag v1.2.3, but there is no automatic guarantee that this tag matches the .crate file published to crates.io.
  • Auditing and reproducibility can become harder when trying to trace back to the exact source.

Proposed feature

Add an optional verification mechanism in crates.io to ensure that a published crate corresponds to a commit in the linked repository.

Possible approaches:

  • Require that the published crate content matches the code at a tagged commit (e.g., vX.Y.Z).
  • crates.io could fetch the repository and compare checksums of files between the .crate archive and the tagged commit (see the sketch after this list).
  • Alternatively, Cargo could generate and submit a commit hash reference during cargo publish, and crates.io would record this metadata for future verification.
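
A minimal sketch of the checksum comparison from the second bullet, assuming the .crate archive has already been unpacked next to a checkout of the tagged commit and using the sha2 crate. The paths are placeholders, and a real implementation would also have to account for the normalization cargo package performs on the archive contents.

// Hash every file under each directory, then diff the two trees.
use sha2::{Digest, Sha256};
use std::{collections::BTreeMap, fs, io, path::Path};

fn hash_tree(root: &Path) -> io::Result<BTreeMap<String, String>> {
    let mut hashes = BTreeMap::new();
    let mut stack = vec![root.to_path_buf()];
    while let Some(dir) = stack.pop() {
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            if path.is_dir() {
                stack.push(path);
            } else {
                let rel = path.strip_prefix(root).unwrap().display().to_string();
                let digest = Sha256::digest(fs::read(&path)?);
                let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
                hashes.insert(rel, hex);
            }
        }
    }
    Ok(hashes)
}

fn main() -> io::Result<()> {
    let from_crate = hash_tree(Path::new("unpacked-crate"))?;   // placeholder path
    let from_tag = hash_tree(Path::new("repo-at-tag-v1.2.3"))?; // placeholder path
    for (file, hash) in &from_crate {
        match from_tag.get(file) {
            Some(h) if h == hash => {}                     // matches the tagged commit
            Some(_) => println!("MODIFIED: {file}"),       // content differs
            None => println!("ONLY IN CRATE: {file}"),     // not present at the tag
        }
    }
    Ok(())
}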

Benefits

  • Stronger guarantees that published code is reproducible and auditable.
  • Increased trust in the ecosystem (users know that what they download from crates.io exactly matches what is in the public VCS).
  • Easier security reviews and compliance in regulated environments.

Open questions

  • Should this be mandatory for all crates, or opt-in for maintainers who want the extra guarantees?
  • Should it rely on repository hosting services (GitHub, GitLab, etc.) or be agnostic and only check against provided commit hashes?
  • How should crates.io handle repositories that are private or self-hosted?

Yes, crates.io now supports trusted publishing (see the documentation on crates.io).

The exact artifacts that were included in the .crate archive are also browsable on docs.rs, and diffs between versions are viewable on diff.rs

2 Likes

Recording this data is already done today; for example, here is a .cargo_vcs_info.json for one of my projects. There has been at least one research effort into using this; see 999 crates of Rust on the wall | Gnome home. At the time, cargo publish --allow-dirty would not include the data in this file, but now it does.
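
For anyone who hasn't looked inside one, a small sketch of reading that file; the field layout shown in the comment is the commonly seen one, and serde_json is used purely for illustration.

// Read the VCS metadata cargo records inside a published .crate.
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Typical contents: { "git": { "sha1": "<commit hash>" }, "path_in_vcs": "" }
    let raw = std::fs::read_to_string(".cargo_vcs_info.json")?;
    let info: Value = serde_json::from_str(&raw)?;
    if let Some(sha) = info.pointer("/git/sha1").and_then(Value::as_str) {
        println!("published from commit {sha}");
    }
    Ok(())
}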

At least one other tricky part of this is that cargo publish does not copy the files over verbatim; it transforms some of them, rewriting Cargo.toml and Cargo.lock, etc. Now maybe we could record some of these performed transformations in .cargo_vcs_info.json. I would defer to that blog post's author for what would be additional, useful extensions. They re-ran cargo package and compared the .crate files to overcome part of this, but the transformation done by cargo package is not locked down.

I'd love for this to be integrated into crates.io in some form, even if imperfect. I wouldn't frame a deviation as a negative thing, but instead direct people to check the deviation themselves with diff.rs (assuming it gets integrated into crates.io, which I think could serve as an important audit aid).

Not too sure what the differences are between these.

This requires us to know the tag format. From maintaining a release tool, and from dealing with other tools that make incorrect assumptions about tags, I can say that this is not easy.

This also requires the tag to be pushed before publish. When a publish fails due to a validation error, people may want the opportunity to make fixes rather than burn the release number.

2 Likes

Git tags are not immutable! So even if it matched at the time of publication, it could be changed later to hide something.

Apart from downloading the .crate yourself, I think the docs.rs source view is the most reliable way to review the actual published content.

e.g. hashbrown 0.16.0 - Docs.rs

5 Likes

GitHub is starting to offer immutable releases, which might be interesting.

2 Likes

I've run such a comparison for all Rust crates. It's a messy problem.

It's technically challenging. Submodules and symlinks create very annoying edge cases (especially when mixed together). ../README paths can't exist in the tarball, so they get normalized, and that complicates matching too.

But the biggest problem is assumptions about what code has been visible and reviewed in the repo.

The whole point of comparing crate tarball to the repo is the assumption that users have seen the code in the repo, so it would be difficult to hide malware there.

But it's not obvious whether that's actually true! It's possible to tag any arbitrary commit. It doesn't have to come from the main branch. Some projects have a legitimate reason to have separate release branches, but that reduces visibility of the code there. You'd rely on users specifically looking at the release commit, and I'm not sure if enough users actually do.

There are monorepos with many crates, which could have different versions. Some repos have per-crate tags with various naming schemes. It can be unclear, even for a human, which tag is for which crate. It's possible to hide code in such chaos: tag v1.0.0 where all the code is good, but release one crate at helper/v1.0.0 that people can overlook or assume is the same.

And finally, the problem is that repo verification only makes sense if you can trust the repository host. In git there is absolutely no guarantee that the repo displayed to users via web browsers is the same as the repo served to git clients. Malicious hosts could also serve a different repo to the client doing the comparison (e.g. by detecting the IP of crates.io), and users would have to compare commit hashes to notice differences. It's not a problem for GitHub, assuming you trust Microsoft, but I'm not happy about adding even bigger dependence on such central services.

So in the end, you still need users to look specifically at what has been published, not just to generally observe the main branch of the repo. If you depend on users checking the code of releases, then asking users to directly check the code of tarballs isn't that far off, and technically makes a lot of things simpler.

Diffs with the repo and its tags can still be done in addition to tarball code reviews, to highlight code that may be suspicious and to alert users when unexpected differences appear. But a lack of differences with the repo isn't necessarily a sign the code has been seen, so I wouldn't rely on it as a primary source of security, only as a double-check for the tarballs.

8 Likes

Trusted publishing is GitHub-only. Also, it is only trustworthy if you trust the hosting platform and the CI configuration of the repo.

There are some trivial attacks:

  • Once support for self-hosted GitLab etc. is added: you could host a malicious GitLab instance. How do you determine which ones to trust?
  • Already, you could have a CI run on GitHub that publishes something that doesn't match a commit in the repo (having the CI modify the code before publishing, publishing from a branch that you later delete, etc.). A malicious maintainer would certainly be able to do that. It would leave some traces, but it would be fairly easy to hide a lot of it where it isn't obvious.

So in this context it is not a useful feature. It is useful for reducing reliance on long-lived tokens, which could leak. But that is all.

2 Likes

The Rust ecosystem is rich in tools; we just need to find a way to combine them intelligently to automate anomaly detection.

Why not combine what diff.rs does with an AI that analyzes these differences in the code, and, based on the crate description and the nature of these commits, determines whether they are legitimate or not?

Humans are naturally lazy (and ironically, it’s precisely because of that that there’s work to be done in computer science).

We now have a range of open-source AIs, so why not use them for tasks that are costly in human time? Sure, it would require an expensive GPU infrastructure, but it’s a necessary cost for a greater good.

Indeed, you’re right to raise this issue; the comparison isn’t always symmetrical.

Perhaps using trusted SSL certificates could solve this IP detection problem by only accepting certified repositories, but a new issue would arise: this solution would exclude self-hosted repositories.

A stricter repository acceptance policy may be necessary, requiring that only known and non-self-hosted repositories are allowed. Sometimes, too much flexibility also brings greater risks.

Stop giving hackers ideas :laughing: (But you were right to point out that possibility.)


The richness and flexibility of Rust call for stricter control over crates, since Rust source code can affect both the developer’s machine and the end user:

  • Macros + build.rs → can impact the developer’s machine, as the code is executed during the build phase (containerizing this phase can help limit potential damage; see the sketch after this list)
  • Final executable → can affect runtime behavior for the end user
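
To illustrate the first bullet: a build script is ordinary Rust that cargo compiles and runs on the developer's machine before the crate itself is built. This toy build.rs is harmless, but nothing in the mechanism itself prevents far worse.

// build.rs: runs arbitrary code at build time on the developer's machine.
use std::process::Command;

fn main() {
    // A legitimate use: embed the current git commit for the crate to read later.
    if let Ok(out) = Command::new("git").args(["rev-parse", "HEAD"]).output() {
        let hash = String::from_utf8_lossy(&out.stdout);
        println!("cargo:rustc-env=BUILD_GIT_HASH={}", hash.trim());
    }
    // The same script could just as easily read files, open network connections,
    // or modify the source tree, which is why sandboxing the build phase matters.
}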

It’s true that this issue affects all other programming languages as well, but if the Rust community manages to find a way to secure this latent vulnerability, Rust could become not only memory-safe but also build-safe.

Hackers now, instead of spending time looking for vulnerabilities in executables, prefer to attack the source directly: the building blocks that compose it.

One thing is certain: after reading all your responses, the task is far from easy... :confused:

My vague hope is that we can reduce the reliance on proc-macros and build.rs by a combination of

On top of that, I'd like to see something like cackle for tracking what dependencies can do so we have a better idea of what is worth auditing.

4 Likes

That would be a tragedy further entrenching the monopoly of the big American actors. That is not the future I want to see. And it reduces security in other ways, by giving a few actors undue influence as gate keepers.

Security by obscurity is not security.

6 Likes

Crates.io is optimized for people sharing open source code to collaborate with each other. The MIT license, which many crate authors choose to use, states:

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

While we're of course going to take down malware on crates.io as we find it, I don't think we should be putting up additional barriers to people wanting to share their open source code, nor should we be using the limited financial resources we have to secure the ecosystem to the standards of corporations (who are then profiting off the free resources that crates.io provides). Ultimately, it's up to the users of crates to bear the costs of deciding to use a crate, including manual review or additional tools.

8 Likes

Something semi-related to this that I think could be interesting and helpful is a first-class way in the crates.io UI to flag that certain crates are maintained by the rust-lang organization, e.g. cc, cfg-if, libc.

You can already get this information from the "Owners" section, noting that it's owned by a rust-lang team, but perhaps something a bit more first-class which denotes "official" Rust crates might be helpful as well.

3 Likes