Crates.io is at risk of wormable malware

This is a fictional story:


Two weeks ago, a developer downloaded an "infinite-gpt.app" that turned out to be a trojan horse. The trojan ran the cargo-credentials-helper command to exfiltrate the crates-io publishing token without triggering a permission prompt for keychain access.

Four hours later, two of his crates-io packages had patch releases published with a malicious payload added. Even though these packages had only 19 users, one of them was a dev dependency of a much more popular crate. The malicious code used a global static constructor to launch an attack whenever tests were run, even when the infected crate wasn't used directly. This gave it the ability to exfiltrate other developers' crates-io tokens and spread exponentially.
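(For the curious: Rust has no life-before-main in the language itself, but crates such as ctor hook into platform init sections, so a dependency can run code the moment a test binary is loaded. A minimal, harmless sketch of the mechanism, assuming the third-party ctor crate:)

```rust
// A minimal illustration of a life-before-main constructor, using the
// third-party `ctor` crate (declared as `ctor = "0.2"` in Cargo.toml).
// The function runs when the binary (including a test binary) is loaded,
// even if nothing from this crate is ever called directly.
#[ctor::ctor]
fn on_load() {
    // A real worm would read ~/.cargo/credentials.toml and phone home here;
    // this sketch only prints a message to show when the code runs.
    eprintln!("constructor ran before main / before any #[test]");
}
```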

Within two days the attack had infected 16 more developers and 480 crates, one of which was a very popular proc-macro crate. This then gave the attackers arbitrary code execution in 13,540 crates, including on the crates-io server. The attackers had the ability to directly manipulate crate ownership and the index, and to upload arbitrary new crate releases. Potentially, every Rust project and every Rust developer who had updated crates-io dependencies within the last week was compromised.


Again, this is a hypothetical scenario — cargo hasn't even released the keychain integration yet! The work on publishing notifications has stalled. There is no 2FA for crate releases.

crates-io is super useful, but I'm afraid it's just been lucky that it hasn't been targeted with something devastating like that. The easy-to-grab token, relatively silent publishing, and lack of 2FA are a recipe for an exponentially-spreading worm. I know you know these weaknesses exist, but cargo and crates-io are effectively in feature freeze and work on security improvements has been stalled for years.

28 Likes

It's something we've discussed a bunch within the Secure Code WG.

We've attempted to push for more resources for Cargo + crates.io as a result. I'm not sure what else can be done, as much of the feature work in these areas is largely complete but has been sitting in open PRs indefinitely.

5 Likes

This is something I've been worried about for quite some time as well. While work to manage & cull your dependency tree (like our cargo-deny) and to audit and share audits of dependencies (cargo-vet) has been progressing well, it is scary that core Cargo and crates.io are still lacking important base security measures and validations.

Interesting to hear that some of this is already implemented but sitting as open PRs, though of course merging and deploying changes can be as much work as the actual implementation, if not more.

Are there maybe some Cargo maintainers that could help take on such work? We (Embark) could totally help sponsor such work, and we think other companies and also the Foundation really should as well.

This does sound like a fit for the new hires on the Rust Foundation Security team (Rust Foundation - Welcoming Our New Security Engineer, Walter Pearce), but it would regardless likely need Cargo & crates.io devs/maintainers actively involved as well.

In this position, Walter will be responsible for analyzing the code and infrastructure-level security of the Rust Project. He will provide security expertise, conduct assessments, and suggest areas for improvement for Rust and its ecosystem, including Cargo, crates.io, and more. We are excited to have Walter here to help advocate for best security practices across the entire Rust landscape and be a proactive resource for the Rust Project maintainers.

3 Likes

We (as in the Secure Code WG) brought this up in meetings with the foundation in the context of the Rust Foundation Security Team. Walter was definitely sympathetic to this particular issue (crates.io worm).

I still think it would be great if the Cargo team / crates.io team could get the funding they need to hire additional full-time member(s). Perhaps the foundation could set up some sort of bounty/fundraiser to get companies to contribute to funding needed for additional head count for those teams.

7 Likes

This seems to be fixed by scoped tokens?

The bug here is the existence of tokens that allow publishing arbitrary crates. If you only do publishing via CI, and CI physically has access to a token only for the current crate, then the exponential blow-up is contained: pwning a single crate does not compromise the rest of the crates by the same author.

Which reminds me that I should probably revoke the 25 tokens with * permissions that I use to publish various crates of mine...

It does not fix it, unless everyone switches to publishing from a CI server. Developers running Cargo locally have all-powerful tokens on their machines.

1 Like

I'd say publishing locally is an anti-pattern. Just from a security perspective, it means that, e.g., npm malware can steal Cargo tokens; more generally, it makes dev laptops a more lucrative target.

Encouraging everyone to switch their publishing to CI is a hard task though! I wonder if, some time after scoped tokens are stabilized, we should:

  • provide clear guidance on how to do least privilege publishing, including snippets for popular CIs
  • one-time expire all non-scoped tokens
  • add relevant warnings and doc links when creating an all-powerful token

That is to say, it does seem to me that securing "people" workflows with 2FA and keychain is useful, but nudging people towards people-less workflows seems better bang-for-buck.

EDIT: also, my level of confidence here is about 0.6: I do believe that "don't publish from your laptop" is a first-order security improvement, but I am not a security expert myself, and would be glad to hear about the current state of the art.

3 Likes

Publishing from a CI requires trusting the CI provider, and you'll find that many people don't like that and will resist such suggestions.

But even that doesn't solve the problem. Publishing from CI doesn't eliminate the local machine from the chain, it only changes which token is the powerful one: instead of stealing the Cargo token, malware could steal the SSH keys that control the CI.

Once a worm starts infecting crates, it will infect CIs too, since builds are not sandboxed and CI also runs tests. So a worm can replicate itself even through minimally-scoped CI tokens!
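To make that concrete: a build script is ordinary Rust that runs with the full privileges of whatever user invokes cargo build or cargo test, on a laptop or a CI runner alike. A minimal sketch (the path is purely illustrative):

```rust
// build.rs of any dependency in the tree: executed automatically by
// `cargo build` / `cargo test`, on laptops and CI runners alike.
use std::fs;

fn main() {
    // Nothing stops a build script from reading files outside the
    // checkout, e.g. CI secrets or SSH keys (path shown for illustration).
    if let Some(home) = std::env::var_os("HOME") {
        let key = std::path::Path::new(&home).join(".ssh/id_ed25519");
        if let Ok(meta) = fs::metadata(&key) {
            println!("cargo:warning=found readable key, {} bytes", meta.len());
        }
    }
}
```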

I think 2FA and notifications are the only robust ways to mitigate worm risk.

  • Local machine compromise is devastating and pretty much game over for most protections. Except 2FA, which even in the worst case at least throttles the speed at which the worm can propagate.

  • Out of band notifications would make it very hard to mass-publish crates without it being noticed. People can typically receive email from multiple machines, and important crates often have multiple owners. Even if one owner had their laptop totally compromised, others could notice something is wrong.

13 Likes

For what it's worth, I have only ever published crates from my local computer. I could integrate it with CI, but I am skeptical of uploading a crates.io token anywhere. On the other hand, I have absolutely wanted 2FA for publishing (whether that's TOTP, WebAuthn, or something else).

3 Likes

Compromising a package already compromises everything that transitively depends on it, so the ability to also publish new versions of reverse dependencies adds relatively little compared to the ability to laterally publish everything by the same user.

1 Like

A basic solution for local publishing would be to add encryption for Cargo tokens stored locally, i.e. you would still get a plaintext token from crates.io, but cargo login would also request a password to encrypt the token for storage. Effectively, it's the same approach used for SSH keys. It won't protect you from keylogger malware, but it would significantly reduce the spread rate of the envisioned attack. Honestly, it's a bit weird that Cargo hasn't had such functionality from the beginning.
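A rough sketch of what that flow could look like, with the crypto stubbed out (a real implementation would use a memory-hard KDF such as Argon2 plus an AEAD cipher; none of this is an existing Cargo API, the names are purely illustrative):

```rust
use std::io::{self, Write};

/// Placeholder: a real implementation would derive the key with a
/// memory-hard KDF such as Argon2 or scrypt.
fn derive_key(passphrase: &str, salt: &[u8]) -> [u8; 32] {
    let mut key = [0u8; 32];
    for (i, b) in passphrase.bytes().chain(salt.iter().copied()).enumerate() {
        key[i % 32] ^= b; // NOT real crypto, illustration only
    }
    key
}

/// Placeholder for an AEAD such as AES-GCM or ChaCha20-Poly1305.
fn seal(_key: &[u8; 32], token: &[u8]) -> Vec<u8> {
    token.to_vec() // a real implementation returns nonce || ciphertext || tag
}

fn main() -> io::Result<()> {
    // `cargo login` receives the plaintext token from crates.io...
    let token = "cio_example_token";

    // ...then asks for a passphrase before writing anything to disk.
    print!("Passphrase to encrypt the token: ");
    io::stdout().flush()?;
    let mut pass = String::new();
    io::stdin().read_line(&mut pass)?;

    let salt = b"per-user-random-salt"; // would be freshly generated
    let key = derive_key(pass.trim(), salt);
    let sealed = seal(&key, token.as_bytes());

    // Only the ciphertext ever reaches ~/.cargo/credentials.toml;
    // `cargo publish` would prompt again and decrypt in memory.
    println!("stored {} encrypted bytes", sealed.len());
    Ok(())
}
```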

Ideally, Cargo should have something like TUF, but implementing it would be a sizable effort.

6 Likes

This is possible using credential helpers, which could also use something like a PIN-protected hardware token (e.g. YubiKey).

The problem is that if your local machine is compromised, malware can wait around until you unlock the credential and then steal it. But having tokens sitting around on disk in plaintext is trivially wormable.
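To illustrate how trivial the plaintext case is: any process running as your user can pull the token out of $CARGO_HOME with a handful of std-only lines (sketch; older Cargo used `credentials` without the .toml extension):

```rust
use std::{env, fs, path::PathBuf};

fn main() {
    // Cargo's default credential store: ~/.cargo/credentials.toml
    // (older Cargo versions used `credentials` without the extension).
    let cargo_home = env::var_os("CARGO_HOME")
        .map(PathBuf::from)
        .or_else(|| env::var_os("HOME").map(|h| PathBuf::from(h).join(".cargo")))
        .expect("no CARGO_HOME or HOME set");

    for name in ["credentials.toml", "credentials"] {
        if let Ok(contents) = fs::read_to_string(cargo_home.join(name)) {
            for line in contents.lines() {
                if line.trim_start().starts_with("token") {
                    // A worm would send this somewhere; we just show it exists.
                    println!("found publish token line: {line}");
                }
            }
        }
    }
}
```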

+1 to TUF. I had a fork of a withoutboats RFC to add TUF-based index signing, which perhaps I should dust off at some point; it would lay the groundwork for end-to-end verifiable package signing (and integrate nicely with Sigstore).

6 Likes

Note that Tobias, one of the foundation's recent hires, is a member of the crates.io team. With the support of the foundation, scoped tokens have made significant progress, and they were one of the important security ideas stuck in the "PR/RFC complete but not implemented" state. I very much look forward to additional progress being made on the other important issues.

The Cargo team is in a much healthier place than it was a year ago. We have a number of long-standing pain points finally making progress. Assuming volunteer availability doesn't suddenly dry up... (look at the large tech companies laying off workers.)

Everyone agrees that these issues are important. The limiting reagent has always been maintainer time, and both teams have been working to figure out how to get more of it. The foundation has helped, AWS has helped, Huawei has helped, Microsoft has helped. If your company would like to help, hire someone to contribute (preferably, but not necessarily, someone who's contributed before), or work with the foundation to pool resources to do the same.

16 Likes

Some of these ideas can be prototyped in parallel with the teams. If someone wanted to, they could set up a service that periodically used https://crates.io/crates/crates-index-diff to see when new packages are published, and then email users who subscribe to those packages. This would not be as good as a system maintained by crates.io, but it would let people subscribe to crate publications.
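A rough sketch of such a poller, with the index-diffing and mailing steps stubbed out (crates-index-diff provides the real changed-versions feed; the helper names below are hypothetical, not its actual API):

```rust
use std::{collections::HashMap, thread, time::Duration};

/// Placeholder for a call into crates-index-diff (fetch the index,
/// diff against the last seen commit, return newly published versions).
fn fetch_new_releases() -> Vec<(String, String)> {
    Vec::new() // e.g. [("serde".into(), "1.0.200".into())]
}

/// Placeholder: send a mail to everyone subscribed to `crate_name`.
fn notify(subscribers: &[String], crate_name: &str, version: &str) {
    for email in subscribers {
        println!("mail {email}: {crate_name} {version} was just published");
    }
}

fn main() {
    // Subscriptions would live in a database; hard-coded for the sketch.
    let subs: HashMap<String, Vec<String>> =
        HashMap::from([("serde".to_string(), vec!["dev@example.com".to_string()])]);

    loop {
        for (name, version) in fetch_new_releases() {
            if let Some(watchers) = subs.get(&name) {
                notify(watchers, &name, &version);
            }
        }
        thread::sleep(Duration::from_secs(60)); // poll the index every minute
    }
}
```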

1 Like

It would be useful to add support for running build scripts and tests under bubblewrap or firejail or similar, with restricted access outside the checkout. And a simple option to restrict network access: some projects need to avoid any ad-hoc downloads from build scripts, and it would also stop any potential malware there from calling home.

3 Likes

I think 2FA and notifications are the only robust ways to mitigate worm risk.

+1 for that. It was mentioned previously that considerable effort has already gone into this (open RFCs and PRs). Can someone knowledgeable link them here and maybe give an overview of where these two topics stand, and what is missing to get them across the finish line? Naively, they sound like reasonably solvable problems even with limited resources.

If the worm has access to the account's full key/credentials, I guess it could change the notification email address and/or 2FA details?

Another reason to have (enforce?) scoped tokens.

Are these settings not gated by a full password/login requirement? That is, tokens can't change account details, can they? If they can…what's the purpose?

A good 2FA implementation will not allow changing 2FA settings without first confirming the second factor or presenting a backup code. There is a risk that a user will store backup codes on the same machine, but at least there isn't a well-known location or a helper tool to retrieve them. And if the codes are stored properly (offline), then this is robust.

A change of e-mail address could send a notification to the old e-mail address.

1 Like

Just a note: GitHub allows changing 2FA settings in the browser without using 2FA. I was surprised when I learned about this and contacted their support. They answered that it's by design: when you are logged in with a browser session, that session is trusted.