Proposal: Security Working Group


#81

Not sure how much of it is currently implemented. I suppose I meant those would be implementations related to rustc. It remains to be seen how high the priority of memory protection features should be. The problem is that embedded does not, and likely will not, support ASLR or DEP or many other things. Same goes for IoT. This was an earlier post on some of it:

It’s certainly possible to program with 100% memory safety in C/C++, the same way that it’s possible to do a skyscraper tightrope walk. It’s another question whether one should attempt it. In Rust’s case, proper Unsafe Coding Guidelines can ensure that the tightrope at least comes with a handrail. Also, unsafe marks exactly where code auditing should be focused, which makes auditing easier. Lastly, targeted testing and fuzzing of crates allows automated discovery of memory problems in the code. If unsafe does not occur too many times in a crate, and with enough added work on these tools, one can be reasonably assured that test-fuzzing will hit all the possible code paths.
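
For concreteness, this is roughly what the “test-fuzzing” above looks like with cargo-fuzz; `my_crate::parse` is a hypothetical function under test, and the file is assumed to live in the `fuzz/fuzz_targets/` directory that `cargo fuzz init` creates.

```rust
// Minimal cargo-fuzz target (libFuzzer via the libfuzzer-sys crate).
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Throw arbitrary bytes at the public API; the fuzzer plus sanitizers
    // will surface memory errors hiding behind any `unsafe` in the call graph.
    let _ = my_crate::parse(data);
});
```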

If everyone had read and remembered The Art of Software Security Assessment (https://www.amazon.com/Art-Software-Security-Assessment-Vulnerabilities/dp/0321444426), C/C++ would be much safer. The problem is that it takes two-plus months of full-time reading and practising to meaningfully work through those books (I tried doing it in 3 weeks and didn’t get far enough), so almost no one in industry who isn’t paid for that sort of thing is going to work through them. That means unsafe should be a last resort - and crates.io should automatically detect and flag indiscriminate use of unsafe so that such code can be corrected early on. The idea is that rustc+cargo should keep programmers from accidentally walking into the ‘here be dragons’ parts of coding unless someone intentionally chooses to jump over the handrail - just like the borrow checker does. This implies work needs to be done to separate, and warn/error on, ‘unsafe unsafe’ as opposed to ‘safe unsafe’. DEP/ASLR etc. should then really be the second tier of defence, since needing them means the attackers have already broken the front door down. But on Windows/Linux such second-tier mitigations should still be used wherever possible.
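
To sketch what “detect and flag” could mean mechanically: a crude counter of unsafe blocks can be built on the syn crate (with its “full” and “visit” features). Real tooling such as cargo-geiger is far more thorough, so treat this purely as an illustration.

```rust
// Count `unsafe { ... }` blocks in a piece of Rust source with `syn`.
// This only counts unsafe block expressions, not unsafe fns/impls/traits.
use syn::visit::Visit;

struct UnsafeCounter {
    count: usize,
}

impl<'ast> Visit<'ast> for UnsafeCounter {
    fn visit_expr_unsafe(&mut self, node: &'ast syn::ExprUnsafe) {
        self.count += 1;
        // Keep walking so nested unsafe blocks are counted too.
        syn::visit::visit_expr_unsafe(self, node);
    }
}

fn count_unsafe_blocks(source: &str) -> syn::Result<usize> {
    let file = syn::parse_file(source)?;
    let mut counter = UnsafeCounter { count: 0 };
    counter.visit_file(&file);
    Ok(counter.count)
}

fn main() {
    let src = "fn f(p: *const u8) -> u8 { unsafe { *p } }";
    println!("unsafe blocks: {}", count_unsafe_blocks(src).unwrap());
}
```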

I like that statement. I was continuing from my earlier posts, pointing out that, traditionally, far too much effort is expended on a handful of theoretical security features while a whole bunch of previously invented features is left unimplemented or not easily implementable. And this is coming from someone who’s spent disproportionate effort pursuing quantum cryptography studies, mind you. So call it speaking from the far side of personal experience.


#82

It seems pretty straightforward to solve this from a PL point of view: have a way to mark data as “secret”, make any expression that depends on a secret itself secret, and provide some way to explicitly mark secret data as no longer secret (e.g. the final ciphertext). Have it be the compiler’s responsibility to ensure secrets are wiped from temporary locations (stack, registers) once no longer needed, and to refrain from producing code whose timing depends on a secret.

The big problem is that support for these things is (afaik) absent in LLVM, and there’s no way to enforce them from a higher level if they’re not supported at the lower level.
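
In the meantime the best that can be done is a library-level approximation, which is exactly what crates like zeroize and subtle provide. A minimal sketch (the check_mac function here is hypothetical):

```rust
use subtle::ConstantTimeEq;
use zeroize::Zeroizing;

/// Compare a secret MAC without branching on secret data, and wipe the local
/// copy afterwards. A library can only reach what it owns: spilled registers
/// and compiler temporaries are exactly what needs the compiler/LLVM support
/// discussed above.
fn check_mac(expected: &[u8], received: &[u8]) -> bool {
    // Zeroizing wipes its contents when dropped at the end of this function.
    let expected = Zeroizing::new(expected.to_vec());
    // ct_eq compares in constant time and returns a `Choice` rather than a
    // bool, to discourage accidentally branching on the result.
    bool::from(expected.ct_eq(received))
}
```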


#83

If fuzz testing is within scope of the WG, I’d be happy to be part of this. I’ve been hoping for a security or cryptography WG to form, so thanks for initiating the discussion @joshlf!


#84

I’m sorry, but there needs to be some consideration for FFI, which I would estimate is where a lot of the unsafe code usage is.

You can’t keep coming down on unsafe generically without considering FFI functionality.
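
A lot of that FFI-related unsafe also follows one well-understood pattern: confine it to a small, auditable wrapper at the boundary. As a sketch (my_c_checksum is a hypothetical C function, purely for illustration):

```rust
use std::os::raw::{c_uchar, c_uint};

extern "C" {
    // Hypothetical C function we need to call.
    fn my_c_checksum(data: *const c_uchar, len: c_uint) -> c_uint;
}

/// Safe wrapper: callers never see raw pointers or `unsafe`, and an audit of
/// this crate's FFI usage only has to look here.
pub fn checksum(data: &[u8]) -> u32 {
    // SAFETY: we pass a valid pointer/length pair derived from a live slice,
    // and the C side (by assumption) does not retain the pointer.
    unsafe { my_c_checksum(data.as_ptr(), data.len() as c_uint) }
}
```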


#85

I’m interested in participating in this WG.


#86

I would like to contribute to this WG.

This would definitely be a great idea.


#87

This point may have been made already, so apologies if it has:

Security is trust, so what you are asking for is a way to trust crates and the language infrastructure. It’s important to remember this - many people trust SSL even with some CAs I find dubious at best. If there is a security team vetting packages, you trust that team (unless you vet everything yourself).

So my suggestions would be:

  • No central committee responsible for vetting crates, but instead a peer-review system: if a crate sees interest (maybe a certain number of downloads, or dependent crates), it gets flagged for review. People register as reviewers, and a number of them are randomly chosen to review the crate. Once enough people have reviewed a crate, it gets a “reviewed” sticker. If anyone has concerns, there is some mechanism to resolve them (TODO: think of a mechanism). This is then repeated for new releases, debounced to some time period.
  • Make it possible to use only reviewed/validated crates; this also disallows future versions until they are validated.
  • Some mechanism to not only yank versions of crates, but mark them as vulnerable. Then, when running cargo run/build/etc., you get told if you have any vulnerable packages. I think I saw this in npm. It might be possible to do this with yanked crates (you get a warning if you are using a yanked version).
  • Keep pushing fuzzing/testing/removing unsafe where possible. It would be good to have a WG that pushed this. It isn’t necessarily a programming issue; it might be more of a human/best-practice issue.

I think it is a very good idea overall to have a WG, mostly just to keep everyone thinking about the security aspect of everything, including design decisions. The Rust core team is already very good on security (e.g. security@rust-lang.org, https://www.rust-lang.org/en-US/security.html). You would have to work out how a security WG would fit in with that.


#88

Totally agree - security is mostly psychology/best practice/the right people knowing the right things.


#89

Agreed. One can only dream of a world where all libraries are written in Rust. Given that code maintenance appears to be up to an order of magnitude easier in Rust compared to C, this is theoretically possible. Until then, however, unaudited C/C++ called from Rust remains at a low level of security, and nothing relying on it can be guaranteed not to be exploitable by someone in their basement with Kali Linux. That’s just logically the only answer I see: either pay for a code security audit, or rewrite the foreign code in Rust within secure Rust guidelines.


#90

I’d agree that vetting is everyone’s job, not just a security group’s. I would, however, advocate for a web-of-trust type implementation that works in parallel with the crowdsourced security model you suggest. This means that an individual who is part of that web of trusted individuals can review a crate and then confer a “badge” onto the crate that gives it something like “Rust Security Web of Trust approved” status. A crate does not have to have this status, but it would be good if it does.

These badges should be able to be applied by other entities as well. For instance, if a for-profit company starts security-auditing crates, a crate user can request that the crate be reviewed by this company and the badge “Security company X approved” added to that crate. This is so that individuals funding official crate audits can personally benefit from them, while the rest of the community can re-use the code that was audited and marked as such. There should be a mechanism to register such badges before use, of course. Some of these other entities could also enter the Rust Security Trust Web if their reputation with its members becomes good enough, which means they would be able to assign more than one security badge if they want (“Rust Security Web of Trust approved” as well as their own).

I’d say the split in focus should be proactive vs. reactive. Traditionally, an internal security group works to prioritise known weaknesses in existing code and eliminate them roughly in order of importance. The Security WG, by contrast, would provide a sufficient set of tools to form the basis of a complete security solution, so that other coders can gain a reasonable level of assurance about the security of the code they use as well as the code they produce themselves. The complete range of tools required should be agreed upon, and the order of priority of each decided to some extent.

As such, the Security WG should aim to minimise the number of security fixes they do themselves and always aim to educate and guide the Rust programming community to program securely from the start, as far as is possible.


#91

The areas to look at are:

  1. Secure Unsafe vs Insecure Unsafe Guidelines

  2. Automated code testing and security fuzzing

  3. Reliable Cryptographic implementations

  4. Some reasonable level of code security auditing

  5. Concise education for Rustaceans on security best practices

  6. A mechanism to inform and encourage usage of updated code, as well as a means to rapidly and (hopefully) effortlessly patch applications running code in which a security flaw was found.

What am I missing? And where should the FFI concerns fall?


#92

On 6), I was wondering whether a fine-grained mechanism could be built to ensure that security updates to crates get the appropriate level of attention. Meaning that a crate user should be able to set up a notification of sorts if both the crate and the specific function or macro they use have received a security update. If the crate had a security flaw, but the coder had not used that code in their program, then the crate should still be updated in the coder’s build system for future security, but they would not have to worry that already-compiled code needs patching. This would take less effort than manually reading through security changelogs to try and figure out which of a hundred fixes apply to which of your programs out in the field.


#93

Another one I’m highly interested in, and missing from that list, is:

  7. Isolation of unsafe code. Make available abstractions over capabilities or strong sandboxing features for each platform. I’d add that we should have a framework for automating as much of it as possible.

#94

An interesting case has come up that seems like a good thing for this WG to tackle, and that, if solved in a way the community agrees is correct, should provide a lot of clarity around what is and isn’t safe: the notion of safety with respect to mmap. https://users.rust-lang.org/t/how-unsafe-is-mmap/19635
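
The crux, in miniature (a sketch assuming the memmap2 crate; the file path is arbitrary):

```rust
use std::fs::File;
use memmap2::Mmap;

fn read_mapped(path: &str) -> std::io::Result<()> {
    let file = File::open(path)?;
    // SAFETY(?): the mapping is only sound if no other process modifies or
    // truncates the file while it is mapped -- a guarantee Rust cannot check
    // here, which is exactly what the linked discussion is about.
    let map = unsafe { Mmap::map(&file)? };
    let bytes: &[u8] = &map;
    println!("first byte: {:?}", bytes.first());
    Ok(())
}

fn main() -> std::io::Result<()> {
    read_mapped("Cargo.toml")
}
```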


#95

There’s an open issue on cargo-audit about this:

I think cargo-geiger might be a good place to implement that sort of thing. I’d be happy to add the appropriate metadata to RustSec advisories for a tool like that to consume.


#96

This has been a particular interest of mine. unsafe seems like a nice foundation upon which Rust could be turned into an OCap (object-capability) language. I’ve been thinking along the lines of tagging unsafe usages with names that are exported as capabilities; each crate could both export and consume capabilities, with the consumer deciding what’s allowed in each case at each level of the crate hierarchy.
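
A rough illustration of the flavour of this in today’s Rust, using a hand-rolled capability token rather than the compiler-level tagging being proposed (all names here are hypothetical):

```rust
mod net_cap {
    /// A capability token: code can only perform raw network access if the
    /// application root hands it one of these.
    pub struct RawNetAccess {
        _private: (),
    }

    impl RawNetAccess {
        /// Only the top of the crate hierarchy is meant to call this.
        pub fn grant() -> Self {
            RawNetAccess { _private: () }
        }
    }

    /// An API wrapping unsafe/FFI work demands the token as proof of authority.
    pub fn send_raw_packet(_cap: &RawNetAccess, _bytes: &[u8]) {
        // ... the unsafe or FFI work would live here ...
    }
}

fn main() {
    // The consumer decides which capabilities to grant and to whom.
    let cap = net_cap::RawNetAccess::grant();
    net_cap::send_raw_packet(&cap, b"hello");
}
```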


#97

I don’t have enough experience or time to be part of a WG-security. It does feel like there is an informal “fuzzing working group” already, though (or let’s say the “WG status of rust-fuzz is fuzzy”); and I’m interested in continuing to improve Rust’s fuzzing support (esp. tooling).


#98

Having a Security Working Group is highly necessary.

The following events made me worry.


#99

Would it be possible to have some kind of automated “security” scoring that is run over all crates? For instance, weighting items such as: use of unsafe, FFI, clippy warnings, the “security” score of dependencies, known open CVEs, total CVE count, average time to close an issue on GitHub, community size, number of projects depending directly or indirectly on the crate, and so on.

Although this would never be perfect, it could provide a good indication of the quality of the crates.
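
As a back-of-the-envelope sketch of what such a weighted score might look like (the metrics and weights here are entirely made up for illustration):

```rust
struct CrateMetrics {
    unsafe_blocks: u32,
    open_cves: u32,
    clippy_warnings: u32,
    dependents: u32,
}

/// Combine risk signals into a 0-100 score; higher means "more trustworthy".
fn security_score(m: &CrateMetrics) -> f64 {
    // Penalties for risk signals, with open CVEs weighted most heavily.
    let penalty = 3.0 * m.open_cves as f64
        + 1.0 * m.unsafe_blocks as f64
        + 0.2 * m.clippy_warnings as f64;
    // A small bonus for wide usage, on the theory that more dependents means
    // more eyes on the code.
    let bonus = (1.0 + m.dependents as f64).ln();
    (100.0 - penalty + bonus).clamp(0.0, 100.0)
}

fn main() {
    let m = CrateMetrics { unsafe_blocks: 4, open_cves: 1, clippy_warnings: 10, dependents: 250 };
    println!("score: {:.1}", security_score(&m));
}
```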


#100

Yep. Anything that will contribute to educating users of crates about the crates they intend to use should be investigated.

One recent example of how laughably insecure things on the internet currently are (it seems like McAfee disputes what exactly counts as a hack):

One wonders why academic institutions aren’t giving more attention to ensuring the security of real-world applications. Perhaps because “fixing things we previously claimed were unbreakable” doesn’t look so good on a research funding application. Sad, because we really need more work to be done on real-world security, not proof-on-paper security.