About supply-chain attacks

(Copying some stuff from the optional namespaces thread)

There's been some recent discussion of a Medium post in which a security researcher describes the supply-chain attacks he performed on NPM, PyPI and RubyGems.

While the specific attack isn't applicable to Cargo, I think it is a good illustration of the problem with supply-chain vulnerabilities, typosquatting in particular: they accumulate quietly for a while, because attacks aren't really feasible in a small ecosystem; and by the time attacks do happen, the ecosystem has become too big to change course quickly.

In this case, the researcher made over $130,000 in less than six months, from bug bounties alone. Given the high-profile nature of the targets, one shudders to imagine how much he could have made by selling these vulnerabilities to malicious actors, or how many similar attacks already exist undetected in the wild, camouflaged as bugs.

Granted, the supply-chain attacks Cargo is vulnerable to are subtler and harder to exploit. But they very much exist, and the only thing keeping them from being exploited is that Rust isn't yet used on the same scale as Node or Ruby. That will change sooner rather than later.

Rust seriously needs a stronger supply-chain-security story. Part of that story would be better tools to control the capabilities granted to dependencies (e.g. forbidding unsafe code in dependencies, or forbidding arbitrary system calls and filesystem access, WASI-style); a large part of it would be strong defenses against typosquatting.

After I posted this, people offered a few suggestions to mitigate these attacks. I don't think any of them really addresses the root of the problem:

  • Hamming distance covers some, but not all, typosquatting attacks (for instance, foobar_async as a squat of async_foobar differs in every position), and it produces false positives.
  • TUF is opt-in, and doesn't cover supply-chain attacks on open-source dependencies.
  • cargo-supply-chain is opt-in and only gives coarse information about package security.
  • cargo-crev is opt-in, and requires a manual review process (more on this later).
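To make the first point concrete, here is a hypothetical sketch of a Hamming-distance typosquat check (not anything crates.io actually runs). It illustrates both the idea and its blind spot: token reorderings like `foobar_async` vs `async_foobar` differ in almost every position, so they are never flagged.

```rust
// Hamming distance is only defined for equal-length strings;
// return None otherwise.
fn hamming(a: &str, b: &str) -> Option<usize> {
    if a.len() != b.len() {
        return None;
    }
    Some(a.bytes().zip(b.bytes()).filter(|(x, y)| x != y).count())
}

// Flag candidate names within one or two substitutions of an
// existing crate name (thresholds are invented for illustration).
fn looks_like_squat(candidate: &str, existing: &str) -> bool {
    matches!(hamming(candidate, existing), Some(d) if d > 0 && d <= 2)
}
```

A one-character substitution like `rustis` vs `rustls` is caught, but `foobar_async` vs `async_foobar` has Hamming distance 12 and sails through, which is exactly the false-negative class mentioned above.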

I'm not sure how best to put this, but I think Cargo needs a threat model, and these suggestions aren't grounded in one. Some things to consider:

  • The threat model must assume that code can come from anybody, and that libraries that accept code from unvetted strangers will outcompete libraries that only accept code after a rigorous vetting process. (E.g., I'm currently contributing to Raph Levien's druid; for all Raph knows, I'm a DGSE agent planted to introduce vulnerabilities into his code; Raph has done none of the thorough background checks that would be needed to rule this out; yet he still takes my PRs.)
  • The threat model must assume that people will be as lazy as they can afford to be when pulling dependencies. If people have a choice between a peer-reviewed dependency without the feature they need, and an unreviewed dependency with the feature, they will take the latter.
  • The threat model must assume that both attackers and legitimate developers can write code faster than people can review it.
  • The threat model must assume that some attackers will be sneaky and determined; if the ecosystem defends against supply chain attacks with heuristics, they will learn to game these heuristics. If the ecosystem only checks new crates for suspicious code, they will write non-suspicious code at first and add the actual vulnerability months later.

We need a zero-trust model that starts from the assumption that all dependency code is untrusted, and works from there. That means a capability model, where we have the ability to restrict dependencies from performing broad classes of actions (opening sockets, reading arbitrary memory, etc).
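A capability-style API could look something like this. This is a hypothetical sketch (the `FsCapability` type and its methods are invented, and real path-traversal hardening is omitted): a dependency gets filesystem access only through a handle the application constructs and passes in explicitly, WASI-style, instead of ambient authority like calling std::fs directly.

```rust
use std::io;
use std::path::PathBuf;

// A handle representing "permission to read files under this root".
// The application creates it; dependencies can only receive it.
pub struct FsCapability {
    root: PathBuf,
}

impl FsCapability {
    pub fn new(root: PathBuf) -> Self {
        FsCapability { root }
    }

    /// Read a file, but only underneath the capability's root.
    pub fn read(&self, relative: &str) -> io::Result<String> {
        std::fs::read_to_string(self.root.join(relative))
    }
}
```

A dependency whose API takes an `&FsCapability` argument advertises exactly what it can touch; one that takes no capability arguments can be assumed (under this model) to touch nothing.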

Just being able to set a Cargo option that transitively forbids unsafe code would be helpful, because crate maintainers would then start receiving issues of the form "I can't use this crate because I have a transitive forbid-unsafe config in Cargo; please fix".
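For reference, Rust already has the per-crate version of this knob; what's missing is the transitive, dependency-wide variant:

```rust
// Today a crate can forbid unsafe code in its *own* source with a
// crate-level attribute; compilation then fails on any `unsafe` block
// in this crate. It does not propagate to dependencies, which is
// exactly the gap the hypothetical transitive Cargo option would close.
#![forbid(unsafe_code)]

fn main() {}
```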


Some previous discussion of ideas along these lines:

My take on this overall idea:


Regarding this specifically:

Ideally TUF would be integrated with crates.io and cargo out of the box, à la PEP 480 -- Surviving a Compromise of PyPI, which after many years of work is currently being deployed.

TUF has a related project created by some of the same people called in-toto which is specifically concerned with software supply chain attacks (see also some recent media coverage).


This feels like the old Smurf counting joke, also known as the "xkcd 927 problem".


I have another catastrophic scenario to worry about: a worm that automatically infects the crates.io ecosystem:

  1. A developer builds or runs an infected crate.
  2. The malicious crate finds which crates belong to that developer, modifies them to be malicious, and uses stolen ~/.cargo/credentials or an unprotected cargo publish to publish the infected versions as that developer.
  3. After a few infections it hits the developer of a popular crate on crates.io with tens of thousands of users.
  4. Within hours every crate on crates.io is full of viruses, all Rust users are pwned, and all Rust projects have to be considered compromised and dangerous.

~/.cargo/credentials sits unprotected in plain text, and cargo publish doesn't require any 2FA such as TOTP or a FIDO key, so developers' machines are at serious risk of immediately and automatically spreading viruses to other Cargo users.
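To illustrate the exposure: the publish token in ~/.cargo/credentials is plain TOML, so any code running as the developer can lift it with a few lines of string handling. This is a toy parser for illustration only (the real file format is documented in the Cargo book):

```rust
// Extract the value of the first `token = "..."` line from the
// contents of a credentials file. No TOML library needed: that is
// the point — the secret is trivially accessible.
fn extract_token(credentials_toml: &str) -> Option<String> {
    credentials_toml
        .lines()
        .map(str::trim_start)
        .find(|line| line.starts_with("token"))
        .and_then(|line| line.split('"').nth(1))
        .map(str::to_owned)
}
```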


Regarding forbidding unsafe: I think how much it would help is commonly overestimated, while how difficult it would be to implement, and how damaging to the ecosystem, is underestimated.

Rust would have to create a new, much bigger concept of "unsafe", because:

  • std::process::Command is safe
  • #[no_mangle] is safe
  • #[link_section] is safe
  • #[export_name] is safe
  • #[link(…)] extern "C" {} is safe
  • println!("cargo:rustc-link-lib=native=…") is safe
  • proc-macros are safe, Turing-complete, and can inject all of the above in ways that evade static analyzers.
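The first point on the list is perhaps the easiest to demonstrate. Here is a hypothetical payload: a crate exposing an innocuous-looking "formatting" helper, written entirely in safe Rust, that also launches an arbitrary process. Nothing here needs `unsafe`, so a forbid-unsafe lint would never flag it. (The `pretty_print` name is invented; in a real attack, `echo` would be something like `curl` posting stolen data to an attacker's server.)

```rust
use std::process::Command;

// A "pure-looking" helper with a hidden side effect: spawning a
// process through the entirely safe std::process::Command API.
pub fn pretty_print(text: &str) -> String {
    let _ = Command::new("echo").arg(text).status();
    text.to_uppercase()
}
```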

You can forbid all of that, but then you lose a ton of Rust features, including the "systems" part of the systems programming language. And even that is still not enough, because:

  • It creates a new, difficult threat model for every other crate. Crates on allowlists that are permitted to use "big-unsafe" features would have to treat their public API as a security boundary, upholding not just memory-safety invariants but also defending against any code that could abuse the API to gain access to the file system, network, or other data in the program through a blessed crate.

    • There are also possible subtly evil implementations of iterators, operator overloading and Deref that are themselves safe, but can be used to confuse and exploit other, less-than-perfect code.
  • Rust is not a sandbox language. The language, the compiler, the linker, libc, LLVM: the whole toolchain has not been designed to deal with untrusted code coming from the inside, and is not equipped to handle it robustly. If you declare there's a security boundary inside Rust crates, that boundary will be broken and exploited over and over again. It will give a false sense of security, and it will be a faucet dripping Rust CVEs.

  • Crates don't need elevated privileges to do malicious things. A hashing library can be a backdoor for passwords. A regex or other string/path-manipulation library can trick your program into generating dangerous paths, URLs, or markup. Parsers and serializers can inject arbitrary data into your files.
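The "subtly evil Deref" point deserves a sketch, because it is the least obvious. The following is a hypothetical, entirely safe smart pointer whose `deref` returns a harmless value the first time and a hostile one afterwards, defeating check-then-use patterns in caller code that assumes dereferencing is pure:

```rust
use std::cell::Cell;
use std::ops::Deref;

// Counts how many times it has been dereferenced, using only safe
// interior mutability (Cell).
pub struct Sneaky {
    calls: Cell<u32>,
}

impl Sneaky {
    pub fn new() -> Self {
        Sneaky { calls: Cell::new(0) }
    }
}

impl Deref for Sneaky {
    type Target = str;
    fn deref(&self) -> &str {
        let n = self.calls.get();
        self.calls.set(n + 1);
        // First deref looks innocent (e.g. passes a path validation
        // check); every later deref returns the hostile value.
        if n == 0 { "report.txt" } else { "../../etc/passwd" }
    }
}
```

Caller code of the shape `if validate(&*p) { open(&*p) }` validates one string and opens another, with no unsafe in sight on either side.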


While I agree, I don't think the listed attributes are "safe" by Rust's definition of safety. That is to say, they are comfortably covered by Rust's concept of unsafe; we simply lack an unsafe keyword for attributes, so they can't be annotated as such.


6 posts were split to a new topic: Securing cargo publishing credentials

I would also love to see progress on crates.io token scopes by pietroalbini · Pull Request #2947 · rust-lang/rfcs · GitHub.


To add to this, the history of sandbox languages is not great. Java attempted to be a sandbox language with its applet model, and despite being designed from the ground up with sandboxing in mind, applets were still subject to a never-ending string of vulnerabilities until they eventually gave up and told browsers to stop allowing applets entirely.


An optimizing JIT for a language with dynamic dispatch like Java necessarily has to optimize speculatively and de-optimize as needed, and it also requires a garbage collector. Something going wrong in these components (for example, a missed reference to an object, or confusion about an object's type) is, as far as I know, the biggest source of exploitable bugs in such runtimes.

A wasm JIT engine, by contrast, only needs to compile every function once at startup, and wasm is a lot simpler than Java bytecode. For example, it has no objects that can be garbage-collected, and no objects whose type may be unknown statically. All a wasm module has access to are locals and the wasm stack, accesses to both of which are easily verified while compiling. In addition it has tables, which are easily bounds-checked, and a linear memory, which can be implemented by simply reserving 4 GB of address space; because wasm pointers are 32-bit, it is impossible to go out of bounds in that case.

Pretty much the only dynamism in wasm is indirect function calls, and even there it is relatively easy to verify that the function signature is correct. This makes wasm far easier to sandbox than Java, let alone JavaScript.
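The linear-memory argument above can be sketched in a few lines. This is a minimal illustration, not how any real engine is implemented: a wasm "pointer" is just a u32 offset into one contiguous buffer, so every load either lands in bounds or is cleanly rejected (real engines avoid even the explicit check by reserving 4 GiB of address space, which a 32-bit offset can never escape).

```rust
// Toy model of wasm linear memory: one flat byte buffer.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    // Every access is a checked offset; there is no way to form a
    // "pointer" outside the buffer.
    fn load_u8(&self, addr: u32) -> Option<u8> {
        self.bytes.get(addr as usize).copied()
    }
}
```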

My understanding is that most of the security bugs in Java weren't bugs in the JIT or the garbage collector, but permission holes in Java's SecurityManager permission system.


Currently rustc has 66 open soundness issues and 212 closed ones. If Rust tried to enforce a sandbox boundary within the language and promised to contain untrusted Rust source code, these would have to be treated not as annoying edge cases, but as security vulnerabilities.

Imagine you have a fortress. If you declare that people inside the fortress are trusted, and outside are not, then you can focus on keeping attackers out, vetting who can come in, and you have strong walls and moat to help you. But if you say you want to be able to let anyone in and be safe from people outside and people inside, then you don't have a fortress, but a pile of bricks in a pond.


Yeah, that's a problem.

I wonder if the language could evolve to fix that. A lot of the soundness problems are library errors in std; we might be able to reduce those with RustBelt. Similarly, we might be able to fix many backend errors with Cranelift (which already runs as a wasm backend, so handling untrusted input isn't outside its scope).

Also, with something like SealedRust we could provide a subset of features that are guaranteed to be safe, for companies with safety-critical needs.


I think that under @PoignardAzur's threat model there will always be cases where bad actors can sneak through (I can't remember the name of the proof; it was related to the halting problem but was proven earlier than it... and I just saw it a few weeks ago :tired_face:!)

If you're willing to accept this claim as true, then the next question is: what practical things can be done to reduce the ability of an attacker (including an automated one like @kornel described) to successfully mount an attack? Two-factor authentication sounds like a relatively cheap and quick fix, with minimal overhead for developers. Does crates.io scan for viruses/worms on an ongoing basis, or perform other kinds of defensive intelligence?

I know that those methods are at best heuristics that an intelligent and determined attacker can find a way through, but if the cost is raised high enough, eventually there comes a point where the benefit of an attack is less than its cost, at which point attacks will tend to abate.