Requiring 2FA to Publish to Crates.io



An attacker who has compromised a Rust developer's crates.io token can publish malicious updates to crates.io. This update would then be pulled down when consumers of the crate update.

Depending on the compromised crate(s), the impact of this could be RCE across a very large number of developers, as well as the ability to backdoor consumer code.

We have seen this attack in the wild against NPM.

In this case, an attacker was able to compromise a single package and leveraged that to steal npm credentials, likely with the hope of further leveraging those credentials maliciously. It is believed at this point that the victim developer's account was using a password that they reused from other sites - an extremely common problem.

This attack is practical - in terms of effort it is very similar to any other attack against passwords, but with a far greater impact (these accounts can be leveraged heavily).

I want to be clear that none of my suggestions would be consumer-facing. You cannot badge account 2FA state - that is very dangerous. This is about making users safer, not making them feel safer.


2FA On cargo publish

Implement 2FA for the cargo publish command.

Instead of cargo publish only utilizing a token, when the command is called you will need to authenticate with some out-of-band code.

edit: 2FA should also be required for token generation.
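To make the out-of-band step concrete, here is a minimal sketch of the kind of server-side check this implies, assuming a TOTP (RFC 6238) second factor with a shared secret stored at enrollment. The function names are hypothetical; a real implementation would use an audited library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_publish_code(secret_b32, submitted, now=None):
    """Hypothetical check the registry would run when `cargo publish`
    submits a 2FA code; accepts the previous timestep for clock skew."""
    t = time.time() if now is None else now
    return any(hmac.compare_digest(totp(secret_b32, now=t - d), submitted)
               for d in (0, 30))
```

The same check could gate token generation as well.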

Make 2FA for new accounts mandatory

Coverage for 2FA is more important for this scenario than for a typical password authenticated service due to the ability to leverage the account so heavily against consumers.

Therefore, I propose that 2FA be made mandatory.

Rust is a small, but growing community. It will not necessarily be viable to migrate to mandatory 2FA later - starting the process now will make a significant difference going forward.


Signup workflow

Before a new access token can be generated the user will need to set up 2FA.

This should simplify things - new users are already onboarded into the new system, and they can’t get a publishing token until it’s set up.

cargo publish

Cargo publish goes from:

cargo publish
-> published

to:

cargo publish
-> perform 2FA action
-> published

This should be a very slight inconvenience.

CI / Automated pushes

For automated pushes, having a human in the loop is not viable. I propose that a separate key be used for these systems, and that it then be up to publishers to manage that key.

This is already fairly standard for automation tooling - signing binaries, passing tokens for other tools, etc, is all part of existing workflows.

I don’t want to go into details on what this would look like, but basically something like:

cargo publish --ci-key=<secret>

where secret is a generated token that signifies ‘I don’t need to 2FA’ to crates.io.
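As a rough sketch of how the registry could treat these two token types differently (all names here are hypothetical, not an actual crates.io API):

```python
VALID_TOKEN_KINDS = {"user", "ci"}

def publish_allowed(token_kind, totp_ok):
    """Hypothetical server-side gate for a publish request.

    A 'ci' key was explicitly generated to bypass interactive 2FA, so
    protecting it becomes the publisher's responsibility (CI secret
    store, rotation, etc.). A normal 'user' token must also pass 2FA.
    """
    if token_kind not in VALID_TOKEN_KINDS:
        raise ValueError("unknown token kind: %r" % token_kind)
    if token_kind == "ci":
        return True
    return totp_ok
```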

Multi-phase rollout of mandatory 2FA

When 2FA is provided as an option, cargo will need to be updated to support it. When rollout coverage is sufficient, begin phasing out non 2FA enabled systems.

A large number of accounts will not have 2FA. When these accounts attempt to publish:

a) An increasing delay (sleep) should be added
b) A warning should be printed

Eventually, a deadline will be specified where accounts not 2FA’ing will not be able to publish until 2FA is enabled.
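The ramp-up could be as simple as a linear delay between two policy dates; the dates and the 60-second cap below are invented purely for illustration:

```python
from datetime import date

ROLLOUT_START = date(2019, 1, 1)   # hypothetical: warnings and delays begin
HARD_DEADLINE = date(2019, 7, 1)   # hypothetical: non-2FA publishes rejected

def publish_delay_seconds(today):
    """Sleep inserted before a non-2FA publish, ramping linearly to 60s."""
    if today < ROLLOUT_START:
        return 0.0
    total = (HARD_DEADLINE - ROLLOUT_START).days
    elapsed = min((today - ROLLOUT_START).days, total)
    return 60.0 * elapsed / total

def warn_non_2fa(today):
    """Warning shown alongside the delay; publishing stops at the deadline."""
    if today >= HARD_DEADLINE:
        raise PermissionError("2FA is now required to publish")
    return ("warning: this account has no 2FA enabled; "
            "publishing will be blocked on %s" % HARD_DEADLINE)
```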

This could take quite a while, but getting to this state is ideal.

If this state were achieved, an attacker could not publish malicious code on behalf of a publisher, without also compromising their second auth factor.


Opt Out 2FA

Instead of mandatory, we could have it be opt-out. As in, you must explicitly say “I do not want to use 2FA”.

I believe that, given this choice, most people would choose not to set up 2FA, opting to just get their code pushed, and maybe coming back later. This is almost as bad as opt-in.

Again, the impact needs to be considered here. When a user chooses not to protect a typical online account with 2FA that decision only impacts them. That is not the case here - the lack of 2FA impacts consumers.

If opt-out is considered, I highly recommend:
a) Very seriously warning users against opting out
b) Repeatedly reminding users to opt back in

Opt In 2FA

This is just a worse version of Opt Out. I would bet that users overwhelmingly do not opt into 2FA.

I do not have good data for this (I imagine companies are reluctant to publish), and this may not be the case for more technical users, but I expect 2FA adoption to be quite low.

Consider the attack against npm. That attacker could have gained access to many, many tokens. Opt-in would have been significantly less useful, as they could have probably found unprotected accounts.

It’s not bad, but it’s not good.

Rust is much, much smaller than NPM. The impact of mandatory enforcement will be considerably smaller. There is a great opportunity here to not just do a little bit better, but to do what every security team wishes they could do - enforce 2FA.

Open Questions

Account Lockouts, 2FA recovery/ management

2FA opens up the opportunity for an attacker to compromise an account, set up 2FA on it, and lock the user out. There has to be some recovery mechanism here - but I think that’s out of scope.

Impact on devs with slow internet, no phone

I think a fallback to email is a viable approach here?

2FA on other actions

It may make sense to require 2FA for revocation or other actions; I have not considered it.


This certainly should be available as an option.

I’m not sure if it should be on by default for all publishing. When someone publishes v0.0.1 of their toy project, it’s not a catastrophic risk for the ecosystem. Perhaps the requirement should kick in when a crate exceeds a certain number of downloads or is depended on by other crates?


Is it possible for crates.io to know whether a user has enabled 2FA in GitHub? I know it’s possible to see this for org members, but I have no idea if that’s visible in the external auth API.


I think this is a good problem to tackle.

Apart from 2FA… what about some public-key tooling? Like a signing scheme or something like that?


I think the default should be 2FA, and opting out should be carefully considered as a feature. Your v0.0.1 toy project is no different from someone else’s v1.0.0 project - they both have the exact same threat model for consumers.

I would strongly advocate for 2FA being mandatory, and I think anything less than default-on would make it not worth the time.

Also, I don’t know how to use this site and I just want to multi-quote people -_- how do I do this without posting like 3x to talk to 3 people?

@cuviper I don’t think crates.io can get that information; I doubt GitHub exposes such a thing, as it seems very unsafe to do so.

@felix91gr That’s a way bigger undertaking, and has a slightly different threat model I imagine. I think 2FA is ideal here because it is so much simpler to implement.


If you highlight text and click the floating “Quote” button to reply, you can do this several times in one post. I don’t know of a way without quoting, but I’ll demonstrate:

This kind of thing also came up in this thread:




FWIW, I don’t believe that rediscussing signing will be very productive. The benefit of 2FA is it should be pretty simple to implement, and effective against the attack we’ve seen.


I don’t really understand what 2FA means in the context of cargo publish.

The way the flow works today:

A user can create or log into a crates.io account using GitHub OAuth. This flow is entirely delegated to GitHub; it’s possible that a user’s GitHub account configuration could require 2FA at this point, though I don’t know if that’s the case.

Once a user has an account and is logged in, they can generate a token which they can pass to cargo login. Once that command is run, the token is stored in that user’s home directory on that machine. That token is sent to crates.io when performing some (all?) registry operations, and used by crates.io to associate the request with a user account and check permissions.
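For reference, the file that cargo login writes is a small TOML document in the user’s home directory, roughly of this shape (the token value here is invented):

```toml
# ~/.cargo/credentials
[registry]
token = "cio-example-not-a-real-token"
```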

Where do you imagine 2FA being introduced? By cargo login? cargo publish? Maybe there’s a way we could make the GitHub OAuth require 2FA? The proposal is not clear to me.


I would guess that what we want is to set some explicit boundary which elevated privileges are required to get past. This boundary could be at

  • login, maybe with some switch on the side that enforces the backing GH account having 2FA. IIRC though, I’ve never had to tap my yubikey to get into crates.io even though I have it set up on GH.
  • cargo login, where in addition to the token I tap my yubikey. This guards against leaving yourself logged into crates.io.
  • cargo publish, where I have to tap my yubikey for every write. This guards against having to invalidate the token after a publishing session.

Bonus points if there’s a switch on crates.io where I can set whether I want to tap my yubikey at login time or at publish time; some people are more paranoid than others. It’s analogous to how some people let their machine’s sudo powers expire in the standard 10-minute window, and some people run sudo -k as soon as they’re done.

  • When publishing new code. Every time cargo publish is run.
  • When adding or removing owners. cargo owner and actions on the website (because otherwise an attacker could add a new owner with 2FA they control, and publish code).

The threat here is that a virus running on developer’s machine can easily steal Cargo’s CLI login token and cookies. The goal is to make these easy-to-steal tokens not sufficient for distribution of new code to users of the developer’s crates.

GitHub’s 2FA here is a red herring. It can’t be used, and it’s not sufficient to protect CLI login tokens.

Imagine worst case scenario of a worm:

  1. Developer of some crate gets infected with malware. The malware steals ~/.cargo/credentials.
  2. The token from the credentials file is used to publish new versions of all of the developer’s crates, with a new build script that contains the credential-stealing malware.
  3. Other developers run cargo update; cargo build, and end up running the new infected build script.
  4. GOTO 1

Such a worm could quickly spread throughout the ecosystem.


I argue that the actual threat here is very different.

  • First release by definition is not already installed on other people’s machines. It’s not trusted by anyone yet. Infection is slow, since it requires intentional, manual action by users, and users are aware of what they’re installing.

  • Update of an existing project that is used as dependency by others gets pulled automatically on cargo update, and new builds or installs of crates that depend on the infected project.

If I publish new-toy-project 0.0.1 now, almost nobody will be affected. If I publish infected libc 0.2.43 now, the whole Rust ecosystem will crumble.

The mechanism may be the same, but the impact is very different.


I’m also surprised by this claim. Anyone can upload to crates.io; is my v0.0.1 malware no different from someone else’s v1.0.0 project? Users need to make individual trust decisions about the crates they use; if they want to depend only on crates by authors who follow certain security practices, that’s their decision to make.


On cargo publish. The assumption here is that the token has been compromised (as in the attack disclosed today).

You would run cargo publish, you would 2FA, the publish would succeed.

Maybe other APIs? I have not looked at cargo’s API - it may require coverage elsewhere. Publish seems sufficient, offhand.

I agree with you that scope is different, but I don’t think that’s very compelling.

Yes, absolutely. These are two completely different attackers, and two completely different attacks.

One is social engineering a developer into downloading malware. The other is attacking an existing package, which you may have already vetted, or which may be from someone you trust, etc. Totally different.

But this is very ‘in the weeds’ - is the point that we should not make it mandatory because it will add friction to first publishes? The friction will exist with all publishes, regardless of version. The first time, they will have to set up 2FA - that is a shame, but an extremely common workflow.

Cargo can significantly impact these security practices.


I think this would be a useful opt-in feature, with maybe some way to visualize 2FA-verified artifacts in the crates.io UI.

I would strongly think about the implications of making it the default/a requirement.

Sonatype is one example of a relatively secure system that has a notoriously high barrier to entry for someone who just wants to share their work with the world. 2FA is a step forward from explaining PGP keys to someone new to software development, but it’s still a notable barrier to entry.

Consider the impact on automation. Continuous delivery and deployment are a requirement for some. Automating these processes is still useful even when not required. 2FA complicates the ability to automate these processes.


I’m also not entirely clear what’s being proposed here. But opt-in security features sound great. Exposing what crates use those security features is probably a good thing.

It’d be especially cool if, whenever one of your crates got X downloads or Y transitive dependents, crates.io sent you a congratulatory email that also linked to things like “how to set up 2FA” and Rust Bus and whatever else the community feels that “successful” crate authors ought to know about and consider using.

The big risk is in demonizing* crates that don’t use 2FA such that we end up harming the ecosystem instead of making it more secure. For example:

  • Crate foo uses 2FA. Some company decides they only want “secure”/2FA-using dependencies. foo adds a dependency on bar that does not use 2FA. foo gets a complaint because suddenly this is considered a breaking change.

  • Crate abc decides they only want “secure” dependencies. But they really want to use def. So abc decides to copy paste all of def's code into a new abc-def crate that they can 2FA-publish. Or worse, “vendor” def's code by copy-pasting it directly into abc.

  • crates.io actually does enable this by default. Many crates decide 2FA isn’t worth the hassle, and those crates remain GitHub-only. Then they get a complaint that they aren’t on crates.io, or someone forks/vendors them just to work around that…

Any security proposal that makes these scenarios plausible would be causing more harm than good in my opinion. So until we get to the point where 2FA is as easy for people to deal with as making a GitHub/crates.io account in the first place, I’d rather not require it.

*I use this particular word because I see significant parallels to this old thread and I happen to agree with nrc’s response to it


I called automation out in my initial post - coming up with a viable workflow is, of course, necessary. I proposed an off-the-cuff solution to this, where you have an ‘opt out’ key that lives on CI.

I don’t think this feature is really worth implementing opt-in. I wouldn’t be against it, but I wouldn’t care or consider it a win.

Opt out would be acceptable, though I’d want something like a ‘reminder’ for the devs to reenable it.

I dislike the idea of relying on crate badges for this (edit: as in, badges are not even a viable option as they expose sensitive account information). It should just be safer.

Which part?

When you call ‘cargo publish’ it requires a second auth.

This directly prevents the issue of stolen tokens. We have seen this issue today with npm.

Opt-in would have been mostly useless in this situation (the attack I linked). Enough people would not have enabled it, and the scope of the attack was large enough (an extremely popular package was compromised) that the attacker likely would have had credentials for a non-MFA’d account.

It would be useful to have it for an attacker who only managed to compromise one, or a small number, of tokens. However, I still feel that opt-out is the clear choice.

2FA is an extremely common practice at this point. Anyone pushing code to crates.io is very, very likely to have interacted with it at some point. I do not feel that it adds significant friction to someone publishing their first Rust crate. I feel that it adds a completely manageable amount of friction to a CI build (having CI secrets is a completely supported, well-established concept).

2FA is a lightweight approach, does not require a ton of complexity, has low impact on standard workflows, and could be rolled out incrementally.

So, to address your points individually…

I do not feel that ‘badging’ is the way to go anyways. I feel that opt-out or mandatory is the better approach here. I strongly feel that it should never be public whether someone’s account has 2FA on it - that would be a disaster, as attackers can cut down to exactly who to target. That is simply not viable.

I address this bit above - no tags, no badges. They make us less safe.

This is acceptable. For one thing, I strongly feel that an extreme minority of developers would reject crates.io because it requires 2FA. This is simply not realistic to me - 2FA is extremely common, especially in a developer’s life. Second, for those who choose not to use 2FA, good. I hope they get forked and the new maintainer uses 2FA. This is standard for other security issues.

The only scenario that is even possible is the third, and I really do not feel that you’ve justified that scenario being either likely or a real problem.


To reiterate, if 2FA were mandatory, rust would be immune to the attack that took place today. There is no debate about that.

So the question is - is the impact of 2FA worth being immune to this attack?

The impact of this attack could be absolutely massive (RCE across a massive number of Rust users).

The attack is realistic, and clearly something attackers are investing in.

2FA seems like a very good fit for this.

Off the top of my head, I do have some questions - for example, dealing with the impact on users who do not have phones or have very poor internet connections. But I don’t think these are significant barriers.


Actually, I have no idea if this is true. I haven’t seen any postmortem which explains how the attacker managed to publish a new version of eslint-scope; it’s entirely possible they compromised a user’s second factor as part of this attack.


I think the security and integrity of crates.io is incredibly important for the Rust ecosystem to be successful. Whether or not 2FA is the way forward, or part of it, is surely worthy of discussion.