Security fence for crates

Absolutely, and that's something I'd be interested in participating in.

Something else that's probably worth pointing out in this thread is RustSec and the cargo audit subcommand:

RustSec provides a crate vulnerability database and the cargo audit command for checking projects' dependencies (and their transitive dependencies) for vulnerabilities in the database.

It's designed to be easy to run in CI.
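For anyone who wants to try it, a minimal sketch of how cargo audit is typically wired in (the CI wording is an assumption about your setup, not part of any particular pipeline):

```shell
# Install the auditing tool once (it is a regular cargo subcommand).
cargo install cargo-audit

# Scan Cargo.lock against the RustSec advisory database.
# Exits non-zero when a vulnerable dependency is found, which is
# what makes it easy to run as a failing step in CI.
cargo audit
```

In CI you would simply add `cargo audit` as a step after checkout; a non-zero exit fails the build.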


Here’s one simple real-world example (names redacted because of NDAs, of course):

The company got serious heat from a client because a fairly serious web application vulnerability was missed by our company and turned up by a competitor. The web app relied on the user’s browser to store the account verification details: after a secure login, a hidden field in the page recorded which account number (in this case the user’s cell phone number) was linked to the session. Because of this, a malicious user could intercept the browser traffic (using Burp Suite, for example) and alter a generated request for top-up, connection settings, and other actions, targeting any and all cell phone numbers in the whole database (several million).

It turns out the vulnerability had already been discovered by our security consultant, but on a different page from the one where the remaining instance was found. The coder had fixed that page, but did not think to go back, check all the places where that code was re-used, and fix those as well.

Had the coder:

  1. read any of the other (many) security reports our company sent to theirs that showed similar kinds of exploits (trusting the browser),

  2. done a proper review process to ensure the library and any copied-code re-use was fixed, and

  3. remembered this incident, applied it in the future, and taught it to others,

then the problem would not have occurred and would not occur again.


  • Point 3 does not happen in most companies.

  • Point 1 only happens in some companies, and then with some grumbling, because many don’t really know why.

Well… yeah. That’s why we need guidelines, plus a list of low-security and a list of medium-security crates. Low is for when you don’t want to follow the guidelines or want to experiment. Medium is for when adoption grows and you want to reach a wider audience with your crate, but then you’ll need to read the guidelines.

It seems we may still be misunderstanding each other. I will briefly restate my position:

Should we pick the low-hanging fruit for improving crate security through technical controls? Absolutely! Will this make a difference? Almost certainly!

Is there an “easy” solution to social attacks, malicious maintainers, etc? No, there definitely isn’t.


Totally agreed. It costs a company around $300k for the full-package protection from the security company I mentioned. It would be great to see a business around this for the cases that the open-source community is unlikely to ever cover (specific to Rust).

I will say though that, with some reasonable effort, the underlying tools for a powerful protection mechanism can exist and come built-in. Right now, I don't see the low-hanging fruit being suitably addressed.


That's rather useful. Now how to make this standard practice... can it be incorporated into the compiler directly and by default?

I'd love to get it merged upstream into Cargo. It grew out of a proposal for adding that sort of feature to Cargo originally:


I agree that cargo, and not rustc, is the right place for a crate-auditing tool.


:face_with_monocle: Ok, I guess.

Forward-looking statement: I'm keen on making security a default, so that you can turn it off if you don't want it, but new users start with it out of the box. I realise this is a bit of a pivot in terms of how much it could change things (crates-related things), but Rust has been security-conscious from the ground up. And it would probably stay on nightly for some time before merging as part of an edition.


There’s a reason why Windows auto-updates, after all, even though some people don’t like it: through lengthy trial and error, Microsoft found they had to make it so to ensure security patches get through in a timely manner. It’s going to be pointless head-against-wall bashing if the security work is not on by default, because this type of thing needs numbers to actually work.

Again, this will take much discussion, and not every security feature needs to be on by default, but The Book should cover it, and at least the standard library and some common crates should be at the Medium security level (so they must be scanned with the RustSec audit tool, use TUF, and a few other things).


If a new user is not using cargo, they are doing something wrong.

I'm not understanding you entirely. When we say "merged upstream into Cargo", do we mean that anything that goes into Cargo, or any update to crates in Cargo, will get automatically scanned by default?

We merely mean that the tool for dealing with crate-level logic is cargo, not rustc. This is entirely independent of whether the behavior is on by default.

We are not talking about the default crate repository.

That'd be the idea, yes.


Yep, check this out:

But I want to address the default behaviour as well. I just think that merging it and then not turning it on by default is, quite frankly, a waste of a large part of the implementation effort and a large opportunity cost in lost security. While one change does not solve everything, an accumulation of such default-on changes will give a reasonable defence-in-depth solution.


One thing that I think is missing here is the idea of running all crates in a secure, sandboxed environment by default, with no network access and no file-system access. This would make sure that all crates do only what they are supposed to do; for example, a linter library should only read and write files and should not be able to access the network.

This is a good idea. It combines something like Android's permission system with the behavioural analysis done on malware by the latest AV solutions. Here's an implementation plan, or proof of concept:

a) Create a system that allows a crate to be analyzed based on 1) the dependencies it includes, and 2) the functions it has permission to perform.

b) Separate the permission-list owner from the crate owner, perhaps requiring double review on approval for the permission list only.

c) Add guidelines to the API guidelines for adding a fuzzing test for the crate to the RustSec auditor repo (create a system for this) that calls all the functions, procedures, traits, etc., such that all the code in the crate is forced to run when the fuzz test is called. This will naturally catch code bugs as well, because all security issues are, to varying degrees, code bugs. So, to kill two birds with one stone, the test should target confirming the expected functionality of the crate.

d) As the fuzz test runs, the RustSec auditor checks whether the crate's permission list is being honoured (e.g. no network access, or no access to crypto libraries). This must be a whitelist, because a blacklist just doesn't work as well: allow permission for crates tagged x, y, z, and that and only that. If a permission is violated by a call to somewhere else, the audit fails, and either the permission list must be updated by the relevant owners or the offending functionality must be taken out and moved to a crate that is more suitable for it.

e) The permission list becomes a description of the functionality of the crate, so the tags need to have the correct (to be discussed) level of granularity so as not to over- or under-specify the permissions or functionality; in other words, not be a hindrance.

f) A crate that calls a dependency will not inherit the full scope of that dependency's permissions. Instead, one of several mechanisms can be used: using the crate must be accompanied by the permission tags actually being accessed. In other words, say I have a crate with tags a, b, c, and I use a crate with tags c, d, e, but I only intend to use functionality c and e. I place the tags c, e next to the import of that crate, and the auditor sees that the usage enlarges my crate's permission set to a, b, c, e. The permission list gets updated and change approval is requested on the merge. If, when the auditor runs, my crate also makes syscalls for functionality d, the test fails and the issue is escalated to the owners to check what happened. Finer implementation details should be relegated to the actual code implementation.

g) I am vague on the implementation details for the underlying functionality flagging for the analysis. @bascule, do you (or anyone else) know whether the syscalls the code generates can be reasonably tagged (the way calls to the Windows cryptographic libraries are tagged "Crypto")? And at which level is this best implemented? That is, for building the "sandbox" environment that will catch calls outside the permissions. We can, and perhaps should, reuse work others have done to help with this. I will ask an expert on behavioural malware analysis to weigh in on this as well; he has done it at several levels of the OS stack, but using compiled code. So the auditor could compile the fuzz test and run it on Cuckoo, which would turn up malware behaviour with reasonable (perhaps 80%) probability, but I'm wondering whether this can or should be implemented higher up in the compilation process.

Furthermore, I think much of this could already be coded into a proof of concept.
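To make the tag bookkeeping in steps a) through g) concrete, here is a minimal sketch in Rust. All names (`Permission`, `Manifest`, the specific tags) are hypothetical illustrations, not an existing API; the point is only the set logic: importing a dependency widens a crate's permission set by exactly the tags requested, and any observed call outside the whitelist fails the audit.

```rust
use std::collections::HashSet;

/// Hypothetical permission tags; the names are illustrative only.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Permission {
    Network,
    FileRead,
    FileWrite,
    Crypto,
}

/// A crate's declared permission manifest (a whitelist of tags).
struct Manifest {
    declared: HashSet<Permission>,
}

impl Manifest {
    fn new(perms: &[Permission]) -> Self {
        Manifest { declared: perms.iter().cloned().collect() }
    }

    /// Importing a dependency widens the manifest only by the tags the
    /// importer explicitly requests, never by the dependency's full set.
    /// Requesting a tag the dependency never declared is an error.
    fn import(&mut self, dep: &Manifest, requested: &[Permission]) -> Result<(), Permission> {
        for p in requested {
            if !dep.declared.contains(p) {
                return Err(p.clone());
            }
            self.declared.insert(p.clone());
        }
        Ok(())
    }

    /// Audit check: an observed call outside the whitelist fails.
    fn allows(&self, observed: &Permission) -> bool {
        self.declared.contains(observed)
    }
}

fn main() {
    // My crate has tags FileRead, FileWrite; the dependency declares more.
    let mut app = Manifest::new(&[Permission::FileRead, Permission::FileWrite]);
    let dep = Manifest::new(&[Permission::FileRead, Permission::Network, Permission::Crypto]);

    // I import only the Crypto tag from the dependency.
    app.import(&dep, &[Permission::Crypto]).unwrap();
    assert!(app.allows(&Permission::Crypto));

    // Network was in the dependency's manifest but was never requested,
    // so an observed network call fails the audit.
    assert!(!app.allows(&Permission::Network));
    println!("audit ok");
}
```

A real auditor would of course derive the "observed" permissions from syscall tracing or compiler instrumentation rather than take them as plain arguments; this only models the whitelist arithmetic.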

This sounds like an object capability model (a.k.a. OCap), which several people have lamented Rust lacks. This is something I think would be interesting (albeit very difficult) to retrofit into Rust, particularly if Rust's type system could be extended to reason about capabilities (there is ample research in this area).

Some examples of things using object capabilities successfully today are the Pony language and the seL4 microkernel.
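For a flavour of what object-capability style looks like in plain Rust today, here is a tiny hypothetical sketch: the authority to use the network is represented as a value that must be passed in explicitly, so a function's signature reveals what it can touch. `NetCap` and `fetch` are made-up names, and the request itself is stubbed out.

```rust
// Possessing a NetCap value *is* the permission to use the network.
// Code that is never handed one cannot make network calls through
// this API, and its signature says so.
struct NetCap;

// The capability parameter is unused here only because the request
// is stubbed; a real implementation would perform it via the capability.
fn fetch(_cap: &NetCap, url: &str) -> String {
    format!("GET {}", url)
}

fn main() {
    // Only the trusted root of the program mints capabilities.
    let net = NetCap;
    let resp = fetch(&net, "https://example.com");
    assert_eq!(resp, "GET https://example.com");
}
```

This is only a convention, not enforcement: any code can still reach ambient `std::net` APIs directly, which is part of why a full OCap retrofit to Rust would be hard.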


I'm sceptical; this sounds like it's going to blow the whole embedded story to dust. Or maybe I'm wrong? Could the tag-based system mentioned above give a subset of the same guarantees for less effort?


Software object capabilities / tags don’t work when you’re allowed to execute arbitrary code (which you can with unsafe). This was also touched on in the parallel user-forum thread. Furthermore, software object capabilities / tags do have a run-time cost, so it’s maybe not great to enable them by default.

If we are happy verifying only crates without unsafe { .. } blocks, then I think this can be done purely in the compiler, without any run-time cost. This would be great.
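As a small illustration of a compile-time-only check that exists today: Rust's built-in `unsafe_code` lint can be set to `forbid`, which turns any `unsafe` block in the annotated scope into a hard compile error at zero run-time cost. An auditor could require the same attribute at crate level for "verified" crates. (The module name here is just for illustration.)

```rust
// Forbidding the unsafe_code lint makes any `unsafe` block inside
// this module a compile error, with no run-time cost at all.
#[forbid(unsafe_code)]
mod verified {
    pub fn double(x: u32) -> u32 {
        x * 2 // purely safe code compiles fine under the lint
    }
}

fn main() {
    assert_eq!(verified::double(21), 42);
    println!("no unsafe code in `verified`");
}
```

Unlike `deny`, a `forbid` level cannot be overridden by an inner `allow`, which is what makes it suitable as an audit requirement.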

edit: didn’t mean to sound sarcastic

The quest in this thread, and in the parallel user-forum thread, is to improve the security of the Rust ecosystem. In particular, the challenge is to surface and flesh out proposals that might work. If a proposal is infeasible, please simply point out why and leave it at that. Sarcasm and other forms of antagonism toward the proposer are inappropriate.


The idea is to try to inherit Android's fine-grained permission system, as pyc suggested.

These proposals are mostly ad hoc and Rust-specific, but if they turn out to be useful they could prove nifty for other purposes as well. I'm going to see if I can put together some code that simulates some of the behaviour discussed (it will probably be in Python), so that people can poke at the simulation. I hope it will be reasonably true to life; to be continued.