Proposal: Security Working Group

I meant whether LLVM supported an annotation to force the generation of constant-time code.

Without it, I don't think we can feasibly work around the absence of such an LLVM feature in rustc (nothing is impossible, but this would be… challenging).

My reply was meant to show that LLVM already contains the required backend support. A grep of the LLVM 6.0.1 source yields over 40k hits (in over 5k files) for the regex \W[fi]cmp\W. That search also turned up the interesting file include/llvm/Analysis/CmpInstAnalysis.h, whose header comment reads

// This file holds routines to help analyse compare instructions
// and fold them into constants or other compare instructions

The above statistics – over 40k hits in over 5k files total – imply to me that the LLVM team has provided the needed infrastructure and we just haven’t found the means of invoking it for constant-time compares.

So you are saying that icmp and fcmp are guaranteed to be lowered to constant-time machine code, and that you would like rustc to emit those instead of some other IR for comparisons? If so, what is the IR that we are currently emitting?

EDIT: AFAICT the LLVM LangRef does not provide any constant-time guarantees for icmp/fcmp :confused:
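
To make the gap concrete: for an integer comparison like `a == b`, rustc emits an ordinary `icmp eq` in LLVM IR, and as noted above the LangRef makes no timing promises about how that is lowered. What people do in the meantime is hand-roll branchless comparisons (crates like `subtle` take this approach). The sketch below is illustrative only, not a guarantee; without the kind of annotation being discussed, the optimizer remains free to turn it back into a branchy, early-exit compare.

```rust
/// Best-effort constant-time equality check over byte slices.
/// Illustrative sketch only: rustc/LLVM make no promise that this stays
/// branch-free once optimized, which is exactly the missing feature
/// discussed above.
pub fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    // Length is treated as public information here.
    if a.len() != b.len() {
        return false;
    }
    // Accumulate differences with XOR/OR instead of `==`, so there is no
    // data-dependent early exit.
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```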

I like your contributions. If you want your words to remain visible to the Rust community for future reference, you'd need to give the LLVM discussion its own thread. Otherwise, this LLVM discussion will just be repeated some other time, with no real visibility on what was said here, because it is buried inside an unrelated thread.

1 Like

There is an rfc issue open: https://github.com/rust-lang/rfcs/issues/847 So maybe we can move the discussion there?

A security WG would be great. One of its functions could be to catalogue all the wonderful security things people are working on, so as to streamline the work and increase co-operation. This would also ensure individual work is "productised" and rounded off: a) documented well, b) easy to use, c) visible in the standard documentation, and d) maintained by a group.

I would add Education as a top-three goal of the WG.

The Rust Programming Book should have a Security chapter covering common memory exploits, DoS attacks, malicious code injection via crates (where a crate is crafted to perform a useful function but the code has a back door; effectively "how to spot a Trojan Horse crate"), and similar exploits. It should also include an introduction to performance vs. "safe performance" in unsafe code, secure web-app code examples as mentioned earlier, and, lastly, crypto implementations. exercism.io should get a handful of security examples too.

In summary: Co-ordination for co-operation and long-term reliability with less duplication of effort and code. Education so that the Security WG doesn’t have to keep fixing the same common mistakes, and so programmers can be introduced to how easy it is to hack badly written code.

Here's an idea: the Security WG should produce a book in two years' time with a title like: "Secure Rust: Programming unhackable applications for the first time since the invention of the internet".

The intro should start with two or three examples of common hacks, like where a ‘highly secure’ environment was hacked and totally pwned starting with a weaponised doc delivered with a sprinkle of social engineering.

Then everything that the Security WG has been working on and wants to show the Rust community can be put in there.

The Security WG should also periodically Red-Team a handful of Crates and award points for security. C/C++ libraries would, at best, come out with results such as: A for "totally pwned", B for "absolutely totally pwned", C for "supercalifragilisticexpialidociously pwned".

Hopefully, Rust crates can have a much better scale: A for fewer than 2 DoS & memory-corruption issues per 1k lines; B for fewer than 4; C for fewer than 6.

Results can go on to a Security WG “Wall of fame/shame”.

Having good, well-documented examples to point to is very useful for establishing good coding habits. If most of the community documentation and blog posts use the security standards, then developers new to Rust are less likely to make catastrophic mistakes through inexperience.

Think about the proliferation of “anyone can code in PHP” and the serious code quality issues that have resulted, where there is no cultural drive and motivation to code securely from the beginning.

2 Likes

Here's my last idea (for now). The Security WG should liaise with the RISC-V Foundation. Rust should be the flagship implementation language for the RISC-V Privileged ISA Specification.

What better way is there to promote Rust than to climb on the secure IoT and embedded bandwagon (which is RISC-V)? RISC-V is also working on Meltdown mitigations at the ISA level. Sounds like a match made in heaven.

3 Likes

Yep. Those Web for Pentester exercises make one think a bunch of kindergarten kids built Web 1.0 (that would be a joke if it weren't so true, btw). Although hindsight is perfect, and China wasn't spying on everyone back then (yep, only the NSA was).

What would be a complete joke, however, is if Web 3.0 did not manage to correct those mistakes. So there is the opportunity to do plenty of good here. It would be necessary to talk about how exactly to promote the security of existing crates while not blocking emerging crates. Rust crates should also be smaller and application-specific (based on what I think the community prefers), while not so small that the crate cannot live on when its maintainers emigrate to Mars.

Thus, it can be seen that a Security WG would be a superset of a Crates WG, and it may ultimately need to be granted benevolent-dictatorship status over certain aspects of crates.io; otherwise, how is it going to lovingly force Rustacean minions into secure coding practices?

Hence, we can see that the Rust Security WG could (should) end up having the same impact on newcomers as the Rust borrow-checker. A Rustacean-shrimp one week in: "My crate is at version 0.9.0 but it won't let me publish version 1.0.0!" Three months later, after reading the Security chapter in The Book and some Unsafe Guidelines: "Ooo, I see, it prevented me from pointing a loaded multi-barrel machine gun at my own foot and my friends' heads."

1 Like

I vote for this as the new motto for the Rust front-page: Let's reword front page claims - #31 by BatmanAoD

3 Likes

I think that finding a way of having programming-language-guaranteed constant-time and memory-zeroing code, although interesting and important, is mostly a big PL-research and implementation task.
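
The memory-zeroing half of this illustrates why it is a language/compiler problem rather than purely a library one. A naive loop that overwrites a secret with zeros is a dead store once the buffer is no longer read, and the optimizer may delete it; crates such as `zeroize` work around this with volatile writes and a compiler fence, but that remains a convention rather than a language-level guarantee. A minimal sketch (the function name is invented for illustration):

```rust
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

/// Best-effort zeroing of a secret buffer.
/// A plain `for b in buf.iter_mut() { *b = 0 }` can legally be removed as a
/// dead store; volatile writes plus a compiler fence are the usual
/// workaround, but nothing in the language guarantees the secret never
/// lingers in registers, spill slots, or copies elsewhere.
pub fn zero_secret(buf: &mut [u8]) {
    for byte in buf.iter_mut() {
        // Volatile write: the compiler must assume it has an observable effect.
        unsafe { ptr::write_volatile(byte, 0) };
    }
    // Keep the compiler from reordering later memory accesses before the zeroing.
    compiler_fence(Ordering::SeqCst);
}
```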

The “application” security concerns in Rust (i.e., making sure Cargo is secured, finding out good coding practices and secure libraries for Rust code, managing vulnerability disclosure, etc.) feel less “researchy” and more “supervisory”, and should probably be done in parallel (I think that there is a fairly small amount of overlap between the people who are interested in working on both tasks).

4 Likes

Forgive me if I missed an earlier post on this topic, but I don't see anything in this thread regarding implementing or leveraging platform mitigations. A user on the user forums was interested in this topic recently. A cursory glance at the Rust documentation revealed no information about what (if any) kinds of mitigation are available to Rust programmers. I'd love to see the security working group take point on this and implement and document which mitigations are available on which platforms. For example: ASLR, stack canaries, safe stack, control flow integrity, DEP, W^X, etc.

2 Likes

I think that an RFC issue such as https://github.com/rust-lang/rfcs/issues/2533 is a more permanent place to talk about the details of secret handling than this thread.

Yes, because the day a major security breach happens at a company due to a failure in the security proofs, and not because of a failure in applying said proofs, is the day I'll eat my hat.

So, by all means, continue with the cool researchy stuff; just be aware that (and I'm being very generous here) less than 2% of exploits will target that stuff. The other 98%+ will target faulty implementations and the faulty humans using said implementations.

Computer security at the application level can be separated into levels:

(Very High) Safe from all attacks, regardless of resources, for the next 80 years.

(High) Safe from most government attacks.

(Medium) Safe from experienced hackers working mostly alone.

(Low) Safe from script kiddies.

Research targets making code secure to the Very High level and above. Security consultants in the field work on Medium- and High-level mitigations. To put it in an analogy:

Here we see the completed part of the bucket representing the brilliant work researchers do to provide a solid and secure framework. The unfinished part of the bucket represents the fact that, between the lecture halls and the field, 90% of the research work is nerfed/forgotten/misapplied. The security consultant is paid to explain to the software devs that the tools they are using to program (the tools are represented by the bucket) will not hold water without a dedicated effort to address security for each and every application individually.

That brings us to the Rust Security WG. We cannot leave whatever the researchers do rotting in academic journals or applied to 1% of applications. The research work needs to be built into the ecosystem so that it becomes invisible the way bricks are invisible in a building: you know they are doing their job by the fact that nobody notices them.

The Security WG will only succeed if both sides work together to make sure no figurative "stave" of the bucket is forgotten, since computer security is a game of The Weakest Link. And people on each side will need to become familiar with the other side to ensure the whole ecosystem is balanced. That way, the lowest stave of the bucket will be at least Medium level for the Rust ecosystem, on by default and out of the box. :smiley:

3 Likes

Yep, although the examples you mention are things implemented by rustc, it is a good point. Other platform mitigation examples can be found here.

All those mitigations you mention exist to stop chained memory exploits, which shows you how terrible C/C++ is at memory management. The interesting thing is that, if you can ensure memory management is done 100% correctly, all those mitigations become unnecessary.
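
To make that concrete (a toy example; the buffer and index are invented): the class of bug those mitigations are designed to contain is the out-of-bounds write, and in safe Rust that write is bounds-checked and becomes a panic rather than silent memory corruption.

```rust
fn write_byte(buf: &mut [u8], index: usize, value: u8) {
    // In C/C++ this could silently scribble past the end of the buffer,
    // which is what stack canaries, DEP, ASLR, etc. try to catch after the
    // fact. In safe Rust the access is bounds-checked at runtime.
    buf[index] = value;
}

fn main() {
    let mut buf = [0u8; 8];
    // Stand-in for an index derived from untrusted input.
    let attacker_controlled_index = std::env::args().count() + 10;
    // Panics with "index out of bounds" instead of corrupting memory.
    write_byte(&mut buf, attacker_controlled_index, 0x41);
}
```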

Continuing from my earlier discussion, with the platform mitigations as an example:

Since Rust has been shown to provide muuuuch better memory management, everything else security-wise suddenly becomes more important. It's the equivalent of living in a bad neighbourhood with a flimsy front door (C/C++), then you went out and got a 2-inch thick front door forged of Mithril in the fires of Balrog, and suddenly the fact that the windows don't have bars and the roof tiles are loose becomes a much bigger problem. Rinse and repeat with more Balrog huffing and puffing and you'll eventually end up with a rather nifty house that no wolf would ever be able to blow over. And, we're just past the front-door upgrade at the moment.

It should be clear, however, that a 10-inch front door (piling more and more security on rustc) won't solve the windows-without-bars problem in this analogy.

2 Likes

I propose that specific, arcane security topics not be discussed in this thread. That can happen in the spaces set up for the WG, once those exist.

3 Likes

Certainly interested in helping out here, though at this point I'm unclear as to what the scope of this WG is -- this discussion seems to be going everywhere.

(perhaps "security WG" is too broad a title for what your original intent was?)

4 Likes

AFAIK, most of the things I mentioned are actually not supported by rustc.

All those mitigations you mention exist to stop chained memory exploits, which shows you how terrible C/C++ is at memory management. The interesting thing is that, if you can ensure memory management is done 100% correctly, all those mitigations become unnecessary.

Obviously the situation in Rust is much better but, as I understand it, many of these guarantees go away when you use unsafe {}. At that point, it's up to you to enforce type, memory, and thread safety, just as it is in C/C++.
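
And a tiny (contrived) example of what "up to you" means in practice: raw pointers opt out of the borrow checker, so nothing stops the code below from reading freed stack memory once you write `unsafe`.

```rust
fn main() {
    let dangling: *const i32;
    {
        let x = 42;
        // Raw pointers carry no lifetime, so the borrow checker lets this escape.
        dangling = &x as *const i32;
    } // `x` is dropped here; `dangling` now points at dead stack memory.

    // Dereferencing a raw pointer requires `unsafe`, and this particular read
    // is undefined behaviour: exactly the C/C++-style footgun.
    let value = unsafe { *dangling };
    println!("{}", value);
}
```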

It should be clear, however, that a 10-inch front door (piling more and more security on rustc) won’t solve the windows-without-bars problem in this analogy.

Sorry, I wasn't suggesting this should be the only responsibility. Security in my mind is very much a "both-and" rather than an "either-or". We need guarantees from the language and we need defense-in-depth mitigations in the compiler and we need cryptographic primitives and we need to audit the ecosystem, etc.