Security fence for crates

Continuing the discussion from Requiring 2FA to Publish to Crates.io:

This thread is about the broader context of crate security. That includes the guidelines needed to ensure some (as in, better than nothing) level of security for crates, and how to incentivise the use of secure crates without stifling the creation of new ones.

This ties in with managing crate creation. A sufficient incentive and mechanism to reduce the number of duplicate crates would go some way towards easing the additional workload of maintaining security.

3 Likes

Ooo, goody. Sounds fun :grin:

The security guidelines would feed into Rust's api-guidelines. The reasoning for a more stringent process, relative to a completely open one, is something like this:

Except that, in addition to not finding a crate with the right feature, I cannot find a crate that has that feature and is secure in any vague sense of the word (as in, anyone can push an update to it freely).

@bgeron, adding some security to crates should not prevent innovation, but if it slows things down a little, I would say the priority should be security first, then productivity. Why? Because all the Windows maintenance teams do is work on security patches night and day. If your aim is getting the product out the door as fast as possible, Rust would probably not be the language to use in any case. Rust is probably better at reducing code-maintenance headaches after deployment, and since security holes add a lot to that headache, security should be prioritised here (within reason, of course).


3 Likes

There is a really good thread on the Rust subreddit in which someone describes their process of auditing some popular crates for vulnerabilities, finding a few panics that could be used for denial of service, and the experience of providing fixes.

One of the points he brought up is that many maintainers don’t seem to prioritize denial-of-service issues, although those can be serious in the real world. Also, there isn’t much support for getting security notifications out to downstream projects: warnings during builds about out-of-date crates with known vulnerabilities, or the like.
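
To make the "warnings during builds" idea concrete, here is a minimal sketch (not an existing cargo feature) of comparing pinned dependency versions against an advisory list using the `semver` crate. The advisory entries and crate names are made up for illustration; a real tool would read the pinned versions from Cargo.lock.

```rust
// Hypothetical build-time advisory check: compare pinned dependency versions
// against a list of known-vulnerable version ranges.
use semver::{Version, VersionReq};

struct Advisory {
    krate: &'static str,      // affected crate ("crate" is a keyword)
    vulnerable: &'static str, // semver range of affected versions
    note: &'static str,
}

fn main() {
    // Made-up advisory data, purely for illustration.
    let advisories = [Advisory {
        krate: "example-parser",
        vulnerable: "<1.2.3",
        note: "panic on malformed input (denial of service)",
    }];

    // In a real tool these would come from Cargo.lock.
    let pinned = [("example-parser", "1.2.0"), ("other-crate", "0.4.1")];

    for (name, version) in &pinned {
        let version = Version::parse(version).unwrap();
        for adv in &advisories {
            if adv.krate == *name
                && VersionReq::parse(adv.vulnerable).unwrap().matches(&version)
            {
                eprintln!("warning: {} {} is affected: {}", name, version, adv.note);
            }
        }
    }
}
```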

I think it would probably be a good idea to set up a security team, and ask maintainers of popular crates to add the security team as co-maintainers. Much like @kornel describes about Linux distros, and @bgeron describes about Haskell and Stackage, this would give you a single team with a consistent set of policies to contact about security issues, who could potentially respond more quickly than a single maintainer, and who could help spread out the effort of filing CVEs, maintaining backports, and notifying downstream projects.

Many of my suggestions have already been quoted above, but if anyone has any questions on them I can flesh them out.

2 Likes

That doesn't seem to be the case. Only the crate owner, using the crates.io token they were issued after authenticating and registering via their GitHub login, can push an update to a crate. Now, if any of that is compromised then, yes, "anyone" could push an update, but the status quo doesn't seem quite as bad as "anyone can push an update to it freely".

That being said, I believe your point stands that crates should have more security as far as both authentication (2FA) and integrity (signing) are concerned.

1 Like

I would be for reducing/eliminating the cool-off time if the code being pushed came from long-time trusted contributors, and those contributors have mitigations in place to prevent their accounts from being hacked.

If this can work in practice (it does for the compiler code), then group-led crates could help tighten security in several ways. The idea is that the additional security overhead gets absorbed by a reduction in code duplication across similar crates. For one, it means more than one reviewer is available for code review (versus single-owner crates).

I dunno. As far as I understand it, I can write a bunch of fixes, hide my malicious code somewhere among them, and submit it to a very busy owner. The owner might (for whatever reason) accept the merge and push the code out. There are several social-engineering ways to increase the likelihood of that happening. Or am I getting this wrong?

Except that will not stop this, or will it?:

These threats are discussed in the whitepaper for The Update Framework:

TUF proposes security through multiple signing roles, such that the authority of any given role is scoped in a least authority/least privilege manner. The result is repository data ("targets" in TUF nomenclature, or crates in Rust) and metadata which is end-to-end cryptographically verifiable.
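
As a rough illustration of what "scoped signing roles" means, here is a conceptual sketch. It is not the API of any actual TUF implementation; the role names come from the TUF specification, everything else is made up for the example.

```rust
// Conceptual model of TUF's separation of signing roles and the
// least-privilege idea behind it.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    Root,      // delegates trust to the other roles' keys
    Targets,   // signs the hashes/sizes of the packages themselves (crates)
    Snapshot,  // signs the set of currently valid metadata versions
    Timestamp, // signs a short-lived statement that the snapshot is fresh
}

#[allow(dead_code)]
struct SignedMetadata {
    role: Role,
    payload: Vec<u8>,
    signature: Vec<u8>, // produced with that role's key only
}

// Least privilege: a key authorised for one role must not be accepted for
// another. Real verification would also check the signature against the keys
// that the root metadata lists for `expected_role`.
fn verify_scope(meta: &SignedMetadata, expected_role: Role) -> bool {
    meta.role == expected_role
}

fn main() {
    let timestamp = SignedMetadata {
        role: Role::Timestamp,
        payload: b"snapshot is fresh".to_vec(),
        signature: vec![],
    };
    assert!(verify_scope(&timestamp, Role::Timestamp));
    // A compromised timestamp key cannot stand in for the targets role.
    assert!(!verify_scope(&timestamp, Role::Targets));
}
```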

There's an open issue about adding TUF to crates.io here:

I just posted some thoughts on how TUF can be layered onto the package signing proposals made by @withoutboats:

Finally, there's a Rust implementation of TUF which is being actively developed:

3 Likes

TUF is awesome. It helps with the problem of a MiTM preventing someone from updating.

It doesn’t do jack against a PR contributor social-engineering a deliberately-crafted flaw into a popular crate.

3 Likes

Cool, thanks. Do you think it’s a good idea to push for a Security working group in 2019? Also, this could be added to the api-guidelines, and perhaps, in some form, to the Rust programming language book. The idea would be to give a summary of these security elements to a Rust beginner so they can appreciate the security work in Rust.

Some things a Rust beginner (or just a reader of the Rust programming book) should know are:

  1. Why mismanaging memory is a security flaw and how the borrow checker helps prevent it.
  2. Why variables are immutable by default (I understand it as limiting accidental or malicious memory overwrites). This also helps explain why not to use `mut` everywhere; a small example follows this list.
  3. How crates.io goes further, not only managing packages but also ensuring a level of secure use through the proposed TUF integration.
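
A minimal illustration of points 1 and 2 (my own example, not from the book): modification requires an explicit `mut`, and the borrow checker rejects freeing data that is still borrowed.

```rust
// Variables are immutable unless marked `mut`, and the borrow checker
// rejects freeing data that is still borrowed (the kind of mistake that
// becomes a security hole in C).
fn main() {
    let config = String::from("threshold=30");

    // config.push_str("%"); // rejected: `config` was not declared `mut`

    let view = &config;      // borrow the data instead of copying it
    // drop(config);         // rejected: `config` is still borrowed by `view`

    println!("{}", view);    // fine: the borrow checker guarantees the data
                             // is still alive at this point
}
```
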
2 Likes

No, I don't think it will (or can) stop that. The only reasonable mitigation is not upgrading to newer versions without auditing them. That is the same for any distribution method, including commercially purchased software. In fact, closed source is much worse for this, as there is zero opportunity for such an audit in most cases. At least the open-source world makes the possibility available, in public. But at the end of the day, only an audit of a newly pushed version (perhaps with some automation for flagging certain things) can give confidence that a "rogue actor" has not compromised the software or any of its dependencies.

1 Like

For that, I think we need the group-led crate creation idea, plus the security guidelines and a way to send security alerts down the dependency chain when a flaw is found, so that anyone who used the dependency has some way of being notified about it.

Edit:

Like this. :grinning:

I think there are more ways to stop this type of malicious code insertion than auditing. I’ll just repost part of a previous post here to explain:

(High) Safe from most government attacks.

(Medium) Safe from experienced hackers working mostly alone.

(Low) Safe from script kiddies.

Audits can fall under the High level. The Medium level is reached by using cool-off periods for updates, so that the hacker in question can only carry out their attack after some period of time (this shrinks the attack window but does not eliminate it, of course).
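
For example, a cool-off check could look something like this sketch. The seven-day period and the source of the publication timestamp are assumptions for illustration, not anything crates.io provides today.

```rust
use std::time::{Duration, SystemTime};

// Cool-off period before a freshly published version becomes eligible for
// automatic upgrades. Seven days is an arbitrary illustrative choice.
const COOL_OFF: Duration = Duration::from_secs(7 * 24 * 60 * 60);

fn eligible_for_auto_upgrade(published_at: SystemTime) -> bool {
    match SystemTime::now().duration_since(published_at) {
        Ok(age) => age >= COOL_OFF,
        // Publication time in the future (clock skew): be conservative.
        Err(_) => false,
    }
}

fn main() {
    let just_published = SystemTime::now();
    // A version pushed moments ago is still inside the cool-off window.
    assert!(!eligible_for_auto_upgrade(just_published));
    println!("new version is still in its cool-off period");
}
```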

Another way to hinder said hacker is to randomise code reviews: each submission is sent to one of several parties before being accepted. That way, the hacker does not necessarily know which account to target to get the code approved.
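
A sketch of the randomised assignment itself, using the `rand` crate (0.8-style API); the reviewer pool here is hypothetical.

```rust
// Randomised review assignment: the reviewer is drawn from a pool at
// submission time, so an attacker cannot know in advance whose approval
// to target.
use rand::seq::SliceRandom;

fn pick_reviewer<'a>(pool: &'a [&'a str]) -> Option<&'a str> {
    pool.choose(&mut rand::thread_rng()).copied()
}

fn main() {
    let reviewers = ["alice", "bob", "carol"]; // hypothetical reviewer pool
    if let Some(reviewer) = pick_reviewer(&reviewers) {
        println!("assigning this change to {reviewer} for review");
    }
}
```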

2 Likes

This will help to catch inserted malicious code, because the hacker needs to target a function call that is used somewhat often, and hence presumably has to stray a bit from the actual fixes.

This will work because, given that the hacker needs to add malicious code somewhere it has a reasonable chance of being executed, the more common packages are a larger target. The common packages should also have more long-time, trusted contributors, such that a random new contributor should automatically raise flags (presumably that does not happen at the moment).
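
A possible shape for such a flag, purely as a sketch; the thresholds are arbitrary illustrative values.

```rust
// "New contributor raises a flag" heuristic: changes from accounts with
// little history on the crate get routed to extra review.
struct Contributor {
    merged_changes: u32,   // changes previously merged into this crate
    account_age_days: u32, // age of the account submitting the change
}

fn needs_extra_review(c: &Contributor) -> bool {
    c.merged_changes < 3 || c.account_age_days < 90
}

fn main() {
    let newcomer = Contributor { merged_changes: 0, account_age_days: 12 };
    assert!(needs_extra_review(&newcomer));
    println!("change from a new contributor: flag for extra review");
}
```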

That's a good point.

EDIT: Thinking about this more, I can see a possible way this might work:

  • cargo will not automatically upgrade (ever) to a newly pushed crate version
  • a "newly pushed crate version" is defined as a new version of a crate that has not been approved by at least some percentage of dependent crates' maintainers (say 30% or 50% or something like that)
  • whenever `cargo build` is run, if there is a newer, semver-compatible version of a crate that hasn't met the approval threshold, cargo will give an error saying that a newer, unapproved version is available and request that the owner of the crate being built perform an audit (or otherwise, based on trust in the maintainer of the crate under discussion) and approve the new version on crates.io (with a comment, perhaps)
  • once sufficient dependent crates' maintainers "approve" the new version of the depended-upon crate, its status is upgraded and no further approvals are required for cargo to use it automatically

Now, this is very much an idea that would require a lot of further refinement, but I think something along these lines would be better than a strictly time-based approach.

The important point is that "Approval" does not necessarily mean "Full Code Audit". What it means is that a sufficient % of the community of users that depend upon the crate have decided that they either trust the author enough or have given the code change enough of a review that they themselves feel comfortable using it for their purposes. This metric could even be weighted by how many 2nd- or 3rd-generation dependents the approver's own crate has, i.e. the crate that depends upon the crate awaiting approval.
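
Here is a small sketch of the approval-threshold check described above; the data layout and the 30% figure are assumptions for illustration only.

```rust
// Approval-threshold rule: a newly pushed version only becomes eligible for
// automatic use once enough maintainers of dependent crates have approved it.
struct VersionStatus {
    dependent_maintainers: u32, // maintainers of crates depending on this one
    approvals: u32,             // how many of them approved the new version
}

fn approved_for_auto_upgrade(status: &VersionStatus, threshold_percent: u32) -> bool {
    if status.dependent_maintainers == 0 {
        return true; // no dependents yet, so there is nobody to wait for
    }
    // Integer arithmetic: approvals / dependents >= threshold / 100.
    status.approvals * 100 >= status.dependent_maintainers * threshold_percent
}

fn main() {
    let status = VersionStatus { dependent_maintainers: 40, approvals: 9 };
    // 9 of 40 is 22.5%, below a 30% threshold, so cargo would warn about an
    // unapproved newer version instead of upgrading to it automatically.
    assert!(!approved_for_auto_upgrade(&status, 30));
}
```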

1 Like

Additionally, it is plausible to predict how exactly a code insertion would occur.

For instance, I think one way would be to target a common dependency with a small payload that jumps execution to another place in the code where the main payload resides. This would help to avoid discovery by reducing the amount of code placed in a more visible location.

The security-key solution from @bascule sounds good, but key revocation should also include rapidly pulling the malicious code in question and alerting others of the need to update their crates. (I have not read beyond the abstract yet, though.)

One particularly nasty way to do code insertion is to open a connection to a remote server from which the payload code is downloaded and executed. I’ve used this method in a proof-of-concept keylogger attack. It has some limitations, like needing to use the HTTP port to evade firewalls.

That only works in dependencies that are supposed to do that type of thing, though, so a list of the likely proof-of-concept patterns, added to an api-guidelines security section, could help in catching ‘funny things’.

Thanks for your reply,

I'm going to state, for the context of this discussion, that I have some level of university certification in security principles, especially cryptography, as well as reasonable knowledge of mathematically proving the security of quantum cryptography systems. I can tell you that not even AES-256 symmetric encryption is provably secure. Thus full static analysis is not necessary, and it is not what the field uses to define secure systems.

Instead, one moves from provably secure theories (certainly in the case of quantum cryptography) to bounded security. Meaning that you stay as close as possible to the theoretically secure construction, make a small deviation from it so that the system is actually implementable (real-world practical), and then prove that the deviation is bounded (reasonably small).
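
In symbols, the shape of such a bounded-security argument (my informal rendering, not a formal definition from this thread) is roughly:

```latex
% Any adversary A gains at most a small, explicitly bounded advantage epsilon
% over what it could achieve against the idealised (provably secure) construction.
\[
\mathrm{Adv}_{\text{practical}}(A) \;\le\; \mathrm{Adv}_{\text{ideal}}(A) + \varepsilon ,
\qquad \varepsilon \ \text{small and provably bounded.}
\]
```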

[quote="BatmanAoD, post:41, topic:18860"]

I mean that in the context of my computer security work for a security company. I can expand on that:

I've seen, through speaking to security consultants and from my own experience, that a lot of security holes are, in fact, fairly easy to mitigate. The problem is that security principles are not usually taught at university or elsewhere, which, by analogy, makes people leave the front door open, or not lock it, because they cannot even see the front door, not knowing where to look. That is why one of my suggestions (in this thread at least) is to modify the Rust programming book to include a primer on this topic, so that newcomers are exposed to real-life examples of how things go wrong and how they could be prevented.

3 Likes