Proposal: Security Working Group

That did it. Thank you.


Does anyone have notes from the meeting? I couldn’t attend it this week.

Joshua took detailed notes. I believe he said he will post them later.


I wasn’t planning on posting the notes themselves since they are pretty noisy, but certainly the summary. I’m just waiting to figure some last things out so I can post all at once rather than a series of small posts. This thread is long enough as is :stuck_out_tongue:


Argh, apparently missed notification that there was going to be a meeting yesterday. Ah well, next time!

What’s the status of the summary?

In other news, here is another great article about national cybersecurity:

We just met with the core team yesterday, and we’re waiting for them to make some decisions about administrative details. We should have more for you in the next day or so.

@joshlf At the very least you should be able to post the meeting notes for those who weren’t able to attend.

Cool. Glad to see it becoming official and all, as such matters should be.

OK, heard back from the core team. It’s on!

Mission and Scope

The WG’s mission is to make it easy to write secure code in Rust. We have the following concrete goals:

  1. Most tasks shouldn’t require dangerous features such as unsafe. This includes FFI.
  2. Mistakes in security code should be easily caught.
  3. It should be clear to programmers how to use security-sensitive APIs correctly.
  4. Security-critical code which is relied on by Rust programmers should be bug free.

Here are some examples of work that we might do in service of these goals. All of these examples are possibilities, but we aren’t guaranteeing that we’ll tackle any given work item, and the list isn’t exhaustive.

  • Most tasks shouldn’t require dangerous features such as unsafe. This includes FFI.
    • Identify common uses of unsafe or other dangerous features, and write crates/stdlib features/language features to provide the same functionality behind safe APIs
  • Mistakes in security code should be easily caught.
    • Write clippy lints for common security mistakes
    • Make sure it’s easy to integrate new static analysis tools into Rust
  • It should be clear to programmers how to use security-sensitive APIs correctly.
    • Contribute to documentation of security-sensitive APIs in crates/stdlib
    • Write guidelines on how to design security-sensitive APIs
  • Security-critical code which is relied on by Rust programmers should be bug free.
    • Encourage/participate in bug hunting in stdlib/crates
    • Take ownership of the RustSec project
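As a hypothetical illustration of the first goal (providing safe APIs over functionality that currently requires `unsafe`), here is a minimal sketch. The `NonEmpty` type and its methods are invented for this example; the point is that encoding an invariant in a constructor lets the rest of the API stay safe even though it skips a runtime check internally:

```rust
/// A slice wrapper guaranteed non-empty at construction time.
struct NonEmpty<'a, T>(&'a [T]);

impl<'a, T> NonEmpty<'a, T> {
    /// Returns `None` for an empty slice, so the invariant always holds.
    fn new(slice: &'a [T]) -> Option<Self> {
        if slice.is_empty() {
            None
        } else {
            Some(NonEmpty(slice))
        }
    }

    /// No bounds check needed: the constructor enforced non-emptiness.
    fn first(&self) -> &T {
        // SAFETY: `new` rejects empty slices, so index 0 is in bounds.
        unsafe { self.0.get_unchecked(0) }
    }
}

fn main() {
    let data = [10, 20, 30];
    let ne = NonEmpty::new(&data[..]).expect("non-empty");
    println!("{}", ne.first()); // prints 10
    assert!(NonEmpty::<i32>::new(&[]).is_none());
}
```

Callers get the performance of the unchecked access without ever writing `unsafe` themselves; the single `unsafe` block is auditable in one place.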

Out of Scope

Cryptography will be explicitly outside the scope of the WG. There is interest from some in starting a Cryptography Working Group, but that will be left for other efforts.

The following responsibilities were proposed at various times in the preceding discussion, but we decided not to take them on as responsibilities for the WG:

  • Curating “approved” crates
    • Reasoning: It’s too early to get behind single solutions for many security problems, and important to leave room for more innovation and exploration. This may make sense in the future if there’s community consensus, but it doesn’t make sense now.
  • Support for memory corruption mitigations (stack canaries, CFI, etc)
    • Reasoning: Our efforts are better spent on improving memory safety and its adoption, which makes these mitigations unnecessary.
  • Fuzzing
    • Reasoning: There’s already a well-established fuzzing effort.
  • Rust security book
    • Reasoning: It’s unclear what would go in here, and in any case, not high enough priority to focus on right now.


The WG will report to the core team.


We want to make sure that people don’t mistakenly think that we’re in charge of things like triaging security issues (that’s the core team’s job). Some have expressed concern that the name “Security Working Group” might lead people to believe that we’re in charge of all security for the Rust project. As such, we’re going to try coming up with a different name, but we need suggestions! The name should be short and pithy, and should be consistent with our mission to make writing secure code easy. Some alternatives that have been proposed:

  • Secure Code WG
  • Ecosystem Security WG

If you have ideas, join the discussion on Twitter.



Is anything decided on the communication channels? Basically, now that it’s a thing, how do I participate?

We’re still working on that. We’ve got a GitHub org set up, but we’re working on communication besides that. Our administrative repo will have details once we’ve gotten the other stuff set up.

Sounds like a bootstrapping problem. What if contributors want to help set things up?

It’s getting a bit hard to coordinate with all of the cooks in the kitchen at this point, but I appreciate the offer! That said, we’re looking to design a logo if that’s the sort of thing you’re into!

Sure, no problem, it’s called bootstrapping for a reason - pulling yourself up by your own bootstraps. The visibility of the process from my viewpoint is almost zero, though.

If you need the help, throw more tasks on this thread.

Now that we have a GitHub repo, I’d suggest opening issues there rather than putting them here:

Things are easily lost in the shuffle here, and are easier to track and link to on GitHub. I’ll open another issue to get things rolling… where to chat!


OK, last post here, promise!

We’ve decided on Zulip for chat. If you want to get in touch with us, the options are:

See y’all there!



Formally verifiably safe Rust?

Rust is a new systems programming language that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust’s safety claims have been formally proven, and there is good reason to question whether they actually hold. Specifically, Rust employs a strong, ownership-based type system, but then extends the expressive power of this core type system through libraries that internally use unsafe features. In this paper, we give the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem.
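To make the abstract’s idea concrete, here is a hypothetical sketch (the `Reader` type is invented for this example) of a library whose safe API uses `unsafe` internally and is sound only because of an internal invariant. The invariant, `pos <= buf.len()`, is exactly the kind of verification condition the paper describes: a formal proof of this module would have to show every method preserves it:

```rust
/// A byte reader; internal invariant: `pos <= buf.len()` at all times.
struct Reader {
    buf: Vec<u8>,
    pos: usize,
}

impl Reader {
    fn new(buf: Vec<u8>) -> Self {
        Reader { buf, pos: 0 } // invariant holds: 0 <= buf.len()
    }

    /// Reads one byte. The unchecked access is sound only because every
    /// method preserves the invariant -- that is the "verification
    /// condition" a proof of this module would have to discharge.
    fn next(&mut self) -> Option<u8> {
        if self.pos == self.buf.len() {
            return None;
        }
        // SAFETY: `pos < buf.len()` was just checked, and no method
        // shrinks `buf` or moves `pos` past the end.
        let b = unsafe { *self.buf.get_unchecked(self.pos) };
        self.pos += 1;
        Some(b)
    }
}

fn main() {
    let mut r = Reader::new(vec![1, 2]);
    assert_eq!(r.next(), Some(1));
    assert_eq!(r.next(), Some(2));
    assert_eq!(r.next(), None);
}
```

If any future method broke the invariant (say, by shrinking `buf` without adjusting `pos`), the `unsafe` block would become unsound even though it itself never changed, which is why the condition must be stated and checked at the module boundary.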

That sounds like this RustBelt paper.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.