Proposal: Security Working Group


In other news, here’s a company that’s doing some real-world work on the matter:

It would be a great start to re-apply their work to the Rust ecosystem.


Being publicly/government funded, academic institutions by nature focus on basic research: things that may either never pay dividends (because the research didn’t lead anywhere) or only pay dividends in the very long term, e.g. 10-100 years.

That said, there are certainly academic institutions that focus on real-world problems. For example, my university (Chalmers) focuses on things like language-based security, with threats such as XSS and solutions like information flow control, formal methods, and PLT. Another institution produced a paper, Dependent Information Flow Types, with which you can be very precise about IFC and at the same time get a lot of compile-time verification of flows.


Hey, has a space or whatever been set up for this WG yet?


Let’s create wg-security in Zulip. I think I don’t have permissions to do it though.


I am definitely interested in being a part of this Working Group.


Hi all,

In a recent core team meeting, we discussed the idea of forming this “Security WG”. We’re pretty excited about the idea, but there are a few steps that would be good before making it “official”.

  • Scope: The most important thing is that we would want to have a kind of clearly defined “scope” or “mission statement” for the working group. It doesn’t necessarily have to be super long but it should be fairly clear. The post announcing the Domain Working Groups contains several examples.
    • Actually, the “original post” of this thread is the kind of thing we are talking about, but I’m not sure if plans changed during the course of the discussion.
    • It might also be interesting to enumerate things that are not in scope (e.g., triage vulnerability reports).
  • Name: We need to settle on a name – the “Security WG” is pretty vague and sounds like the sort of place one might send a security-related bug report. But that’s not really the plan for this WG, so can we find a better name?

In general, working groups are meant to be affiliated with some full Rust team (e.g., the NLL WG is under the compiler team): in this case, that would probably just be the “Core team”, since this doesn’t seem like a WG that derives naturally from any other. That’s fine.

Stepping back, I think we’re still experimenting with the precise process we should use for creating WGs. I’d like to see us create a central listing of Rust working groups, where you can easily see the scope, contact information, chat channel, and other things about each group. I’ll probably start by mocking this up in a GitHub issue, but hopefully this would eventually move to the web page or some other place.


Depending on what this WG intends to do, it could also be under T-libs (if it is mainly focused on crypto and library solutions) or T-lang (if it is mainly concerned with finding patterns of unsafe and ways to write them using safe code, or with proposing language features that make unsafe unnecessary); it could also be under both teams concurrently.


Isn’t there already an “Unsafe Code Guidelines” Working Group that has that responsibility?


I think the UCG WG mainly works on what guarantees Rust should give, and not so much on what we might want to add to safe Rust to avoid unsafe { .. } to begin with.


Hi folks!

Sorry for the long delay over here - I got surprise-buried in work, as one does :slight_smile:

Given Niko’s comment, I think it’s a good time to have a first meeting. I’m working on scheduling something, so stay tuned! The first meeting will be a video chat meeting, which I know isn’t ideal for some, but it’s going to be too open-ended to work well on chat. After that first meeting, we can look into a permanent solution that works better for folks.

This first meeting will be focused on figuring out what a Security WG should be - what it should focus on, what its scope should be, how it should be organized, etc.

I’ll post a follow-up in the next few days with details.


I propose the name Information Security WG

That means the WG aims to prevent bad actors from gaining the ability to subvert the Rust ecosystem, and programs written in Rust, for nefarious purposes. Such purposes could include breaking the privacy, integrity, and/or availability of programs written in Rust, or of the Rust ecosystem itself.

This is a broad definition, and the work is expected never to end, because it will be a game of counteracting a moving target (black-hat hackers). That said, defining the Rust InfoSec WG’s primary focus as proactive and educational multiplies its effectiveness, since the tools will (should/must) ultimately be scalable.

This excludes from the WG’s scope the creation of pretty much anything that does not scale. It includes automated auditing tools, and excludes ad hoc security patches on crates beyond occasional sampling and auditing of the state of crate security. Security auditing of the standard library and a selection of commonly used crates could be included, but the list should be kept short. It includes crypto libs and verification of correct crypto implementations, though that should be automated as far as possible. That should be enough to explain my idea of the WG scope.


There’s Slow Unsafe which you’d want to make Performant Unsafe using UCG. Then there’s Insecure Unsafe which you want to make Secure Unsafe.

Performant Unsafe is a valid design point for non-networked HPC applications. For everything else, there needs to be Secure Performant Unsafe (SPU).

That makes it the task of the InfoSec WG to generate SPUG, Secure Performant Unsafe Guidelines. And, yes, to reduce the occurrence of Unsafe altogether.


Might I suggest that the scope of this WG might be too broad?

This WG seems to fall under T-lang (new lang features that make unsafe unnecessary), T-libs (auditing crypto and other crates + trying to avoid unsafe with libs solutions), and T-dev-tools (automated auditing tools)…

It might be difficult to keep the WG working when it is this broad; and so it might perhaps make sense to have a big umbrella WG-InfoSec and then split it along the T-lang/libs/dev-tools lines to keep it more manageable?


That should be workable. Unfortunately there’s little alternative to making it broad, since hackers will target the most susceptible part of the entire attack surface; thus, narrow security is almost equivalent to no security. The suggestion is fair, though.

Check what the status quo is:


So the main reason I suggested splitting it up a bit is that the expertise needed for each area differs. In particular, the experience needed to work on crypto (and its evaluation) is very different from, say, IFC with mandatory access control, which you might for example facilitate with a sort of SafeRust.

However, if you believe you can cope with having such broad and diverse responsibility, by all means, go ahead :slight_smile:


Isn’t that one of the purposes of WGs, to work on cross-cutting (across “Teams”) concerns like this? Or is that not a goal of WGs?


Sure, I suppose that’s true, but you also need to make sure that the WG has a coherent vision, agenda, set of things to work on.


Apologies if my comment came off as argumentative. It wasn’t meant to be. I just wanted to clarify for myself whether or not WGs were meant to be concerned with cross-cutting issues.


It did not :slight_smile:


Okay, let me copy-paste a strawman proposal I came up with a while ago:

Turns out “security” is a very broad term, and the discussion is running in a zillion different directions, i.e. going nowhere. So I came up with a strawman proposal that’s actionable and can be communicated clearly.

Namely: form a “Safety WG”. Mission: make memory errors a thing of the past.


  1. Identify where people use unsafe, and eliminate the need for it in typical code by providing safe alternatives.
  2. Ensure that the safe alternatives actually propagate through the ecosystem.
  3. Make sure that inherently unsafe code, such as the stdlib, is trustworthy.
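As a sketch of what item 1 could look like in practice (a hypothetical example, not from the thread; the function names are made up for illustration): a raw-pointer byte reinterpretation that shows up in many crates, next to the safe `from_le_bytes` API that replaces it with no performance cost.

```rust
use std::convert::TryInto;

// A pattern often seen in the wild: reinterpreting bytes with unsafe code.
// Besides requiring unsafe, the result depends on the platform's endianness.
fn read_u32_unsafe(bytes: &[u8]) -> u32 {
    assert!(bytes.len() >= 4);
    unsafe { std::ptr::read_unaligned(bytes.as_ptr() as *const u32) }
}

// The safe alternative: the integer types provide from_le_bytes/from_be_bytes,
// which make the byte order explicit and compile down to the same load.
fn read_u32_safe(bytes: &[u8]) -> u32 {
    u32::from_le_bytes(bytes[..4].try_into().expect("need at least 4 bytes"))
}

fn main() {
    let data = [0x78, 0x56, 0x34, 0x12];
    // The unsafe version happens to agree on little-endian targets only,
    // which is part of the problem with it.
    let _native = read_u32_unsafe(&data);
    assert_eq!(read_u32_safe(&data), 0x1234_5678);
    println!("ok");
}
```

Finding such patterns, documenting the safe replacement, and checking that codegen doesn’t regress is exactly the shape of work item 1 would produce.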

Communications channel for the WG: a channel in because it’s a chat with discussion structured by topics, part of the core Rust team hangs out there so they will be close by, and it’s what the Unsafe Code Guidelines WG uses so that would be close by too.

Status meetings: weekly while the WG is young; this may transition to once every two weeks if/when we feel like it. 20:00 UTC every Wednesday, so we won’t overlap with the Unsafe Code Guidelines WG.

Place to put links and write down info: a GitHub repository, similar to unsafe-code-guidelines.

Here are some work items that are actionable right now:

  1. Pick a popular crate that uses unsafe, check out why it does so, and try to turn it into safe code without regressing performance; document how you did it if you succeed, or why it failed if you didn’t. Write it down in the WG GitHub repo. Marketable as a “safety blitz”.
  2. Fuzz crates that haven’t been fuzzed, and report discovered issues to maintainers and the RustSec advisory DB. Also note in the WG repo what pattern caused the vuln.
  3. #1 and #2 will let us gauge which patterns work, which don’t, and which safe APIs are needed but missing. This should be turned into guidelines and/or Clippy warnings.
  4. Try out the existing safe wrappers for common unsafe patterns (the zerocopy crate, and possibly safemem as well), open PRs for real projects replacing unsafe with them, and see how that goes and whether it degrades performance/ergonomics/whatever. Record results in the WG repo and on the relevant wrapper crate’s issue tracker.
  5. Join the effort of verifying the standard library using fuzzers and quickcheck at
  6. Get Memory Sanitizer to work with Rust’s standard library (it currently doesn’t).
  7. Set up a bot that parses the public index and periodically audits it for crates that depend on other crates known to be vulnerable, by inspecting their Cargo.toml and Cargo.lock, and that either files an issue automatically or alerts a human.
  8. Write a beginner’s guide to using SMACK to verify correctness of Rust programs; it’s an LLVM-based software verifier that has been adapted for Rust recently.
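To make item #2 concrete, here is a minimal dependency-free sketch of what a fuzz target checks (a real setup would feed inputs via cargo-fuzz/libFuzzer; the xorshift generator and `check_invariant` below are illustrative stand-ins, not from the thread). The idea is always the same: throw arbitrary bytes at the code under test and assert an invariant that must hold for every input.

```rust
// Tiny xorshift PRNG so the sketch has no external dependencies.
// A real fuzzer would supply coverage-guided inputs instead.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// The "fuzz target": an invariant that must hold for *all* inputs.
// Here: from_utf8 must never panic, and on success must round-trip.
fn check_invariant(data: &[u8]) {
    if let Ok(s) = std::str::from_utf8(data) {
        assert_eq!(s.as_bytes(), data);
    }
}

fn main() {
    let mut state = 0x9E37_79B9_7F4A_7C15u64;
    for _ in 0..10_000 {
        let len = (xorshift(&mut state) % 64) as usize;
        let data: Vec<u8> = (0..len).map(|_| xorshift(&mut state) as u8).collect();
        check_invariant(&data);
    }
    println!("no invariant violations in 10000 runs");
}
```

With cargo-fuzz the body of `check_invariant` would simply become the body of a `fuzz_target!`, and any panic it triggers is a reportable finding for the maintainer and the RustSec advisory DB.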

I have a blog post in the works on how I discovered RustSec-2018-0004, and on the shortcomings of the Rust APIs and/or idioms that made it possible; it should lay the groundwork for #1 and add some more actionable items in that category. The TL;DR is that it’s surprisingly hard to write into a preallocated buffer safely, and we should be doing something about it. I’ve started a thread about that here.
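To illustrate the preallocated-buffer problem (a sketch of the general hazard, not the specific API from the upcoming blog post): the tempting-but-unsound pattern is `Vec::with_capacity(n)` followed by an unsafe `set_len(n)` before handing the buffer to a reader, which exposes uninitialized memory. The `read_n` helper below is a hypothetical name for the safe version.

```rust
use std::io::{self, Read};

// Unsound alternative (not shown runnable here): `Vec::with_capacity(n)` plus
// `unsafe { buf.set_len(n) }` hands uninitialized memory to the Read impl,
// which is undefined behavior and can leak stale (possibly secret) heap
// contents if the reader inspects the buffer or reads fewer bytes.
//
// The straightforward safe version zero-initializes first; the memset is
// usually cheap next to the I/O it precedes.
fn read_n(reader: &mut impl Read, n: usize) -> io::Result<Vec<u8>> {
    let mut buf = vec![0u8; n];
    reader.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() -> io::Result<()> {
    // &[u8] implements Read, so a byte slice works as a stand-in reader.
    let mut src: &[u8] = b"hello world";
    let buf = read_n(&mut src, 5)?;
    assert_eq!(buf, b"hello");
    println!("read {:?}", String::from_utf8_lossy(&buf));
    Ok(())
}
```

The awkwardness here - paying for a zeroing pass, or reaching for unsafe to avoid it - is exactly the API gap the post argues we should be doing something about.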