Proposed security disclosure policy


Hey all,

One of the things we haven’t done for 1.0 yet is to talk about security vulnerabilities in Rust, and how to report them. Below is a bog-standard policy, based on numerous other open source projects. We want to make sure to handle this right, so I’m posting it here as a preview for feedback over the next few days. We’ll have an actual page with the final policy at release.

There are one or two small details, and feedback is, of course, very welcome.

Reporting a Bug

We take security very seriously. Thank you for taking the time to responsibly disclose any issues you find.

All security bugs in the Rust distribution should be reported by email to our security list. This list is delivered to a small security team. Your email will be acknowledged within 24 hours, and you’ll receive a more detailed response within 48 hours indicating the next steps in handling your report. If you would like, you can encrypt your report using our public key.

NB: we have not selected said small team, but it will be 2 or 3 people. Also, the key will be posted with the final announcement.

This email address receives a large amount of spam, so be sure to use a descriptive subject line to avoid having your report be missed. After the initial reply to your report, the security team will endeavor to keep you informed of the progress being made towards a fix and full announcement. As recommended by RFPolicy, these updates will be sent at least every five days. In reality, this is more likely to be every 24-48 hours.

If you have not received a reply to your email within 48 hours, or have not heard from the security team for the past five days, there are a few steps you can take:

  • Contact the current security coordinator (Some One) directly.
  • Contact the back-up contact (Some One) directly.
  • Post on the internals forums or ask in #rust-internals.

Please note that the discussion forums and #rust-internals IRC channel are public areas. When escalating in these venues, please do not discuss your issue. Simply say that you’re trying to get a hold of someone from the security team.

Disclosure Policy

The Rust project has a 5-step disclosure policy.

The security report is received and is assigned a primary handler. This person will coordinate the fix and release process. The problem is confirmed and a list of all affected versions is determined. Code is audited to find any potential similar problems. Fixes are prepared for all releases which are still under maintenance. These fixes are not committed to the public repository but rather held locally pending the announcement.

NB: We would like to eventually assign CVEs to vulnerabilities as well.

On the embargo date, the Rust security mailing list is sent a copy of the announcement. The changes are pushed to the public repository and new builds are deployed. Within 6 hours of the mailing list being notified, a copy of the advisory will be published on the Rust blog.

NB: We may choose a specific, external list, or, have a security tag here on internals, which can of course, be set up to send email notifications.

This process can take some time, especially when coordination is required with maintainers of other projects. Every effort will be made to handle the bug in as timely a manner as possible; however, it’s important that we follow the release process above to ensure that the disclosure is handled in a consistent manner.

Receiving Security Updates

The best way to receive all the security announcements is to subscribe to the Rust security mailing list. The mailing list is very low traffic, and it receives the public notifications the moment the embargo is lifted.

No one outside the security team and the initial reporter will be notified prior to the lifting of the embargo. We regret that we cannot make exceptions to this policy for high traffic or important sites, as any disclosure beyond the minimum required to coordinate a fix could cause an early leak of the vulnerability.

Comments on This Policy

If you have any suggestions to improve this policy, please send an email to



Having a good discussion with Jacobian about the “no predisclosure” aspect here:

Django’s policy here:


Looks good, a few nits:

you might want to change hours/days to working hours/days - I imagine a bug report on Friday evening might (reasonably) not get attention until Monday morning.

How are security bugs tracked? Presumably they won’t be in public GH issues. Who has access to security bugs?

Fixes are prepared for all releases which are still under maintenance

Sounds too strict to me. I imagine for any security bug there is a trade-off between negative effects and effort/risk to fix. Some will not justify pushing to release, I’d have thought.

No one outside the security team and the initial reporter will be notified prior to the lifting of the embargo

This sounds wrong, unless the security team is larger than I am expecting. In particular, the person fixing the bug might not be on the security team, or people outside that team might have useful input on how to fix the bug or on its severity.


Good nits, @nrc :smile:

you might want to change hours/days to working hours/days

Yeah, this is a good point. I know I personally would consider it to be just days, but depending on who is actually on the team, working days may be better.

How are security bugs tracked? Presumably they won’t be in public GH issues. Who has access to security bugs?

The individuals who monitor said list have access, as effectively, the emails are the tracking. We’re expecting to address these issues right away, rather than have them sit around for a long time. A compiler operates quite differently than a project like Firefox. We’ll see, though.

Sounds too strict to me.

I can’t say for certain, but as far as I remember, we’re not explicitly maintaining previous releases of Rust yet. Since 1.x releases are a drop-in upgrade, we expect the vast majority of users to be on the latest. That said, obviously it won’t be 100%, and there’s lots to consider here.

This sounds wrong, unless the security team is larger than I am expecting.

See the above Twitter thread with Jacobian for a good argument for having some sort of pre-disclosure list. I chose no disclosure at first because it’s not even clear that we will have any candidates for pre-disclosure for a while, as the usual suspects are people like Linux distros. I’m not categorically opposed to a pre-disclosure list, but starting off simpler seems prudent, or at least, that’s my line of thinking.


I imagine that some security bugs might be minor enough that they only warrant fixing on nightlies or beta, rather than pushing every one to release. Perhaps you are thinking that these would not be classed as ‘official security bugs’? Which makes me think that maybe we should be explicit about having a way to say that the reported bug is not actually a security issue? The current proposal seems to imply that every reported bug will be treated as real.

I was thinking of developers within the Rust community rather than users of Rust here. I’m assuming that not everyone who contributes to Rust will be on the security team, but that sometimes their (as in any given contributor) input might be wanted.


Do we have an rfc on how many old versions of rust we are maintaining? I couldn’t find one. Closest I got was 507 and this thread but neither mention it.


Which bugs are severe enough that they should be hidden from the public (and might be eligible for a future bug bounty)?

For rustc and std:

  • Would the Leakpocalypse qualify? It violated Rust’s memory safety guarantees but wasn’t likely to affect applications.
  • On the other hand, what about libraries whose incorrect output leads to application vulnerabilities?

For servers:

  • Which servers, and which kinds of bugs, are in scope?

  • When should researchers feel comfortable testing live sites, and when should they set up their own instances? (Example policy)


I’d say that Leakpocalypse qualifies. A soundness violation means that you’ve erased Rust’s primary selling point, which is the guarantee that non-unsafe code will be memory-safe. It’s easy to imagine people relying on that quality for the security of their application, even if an exploit is as contrived in practice as Leakpocalypse.
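To make that concrete, here is a minimal, runnable sketch of the mechanism at the heart of the Leakpocalypse (the names `Guard` and `guard_dropped` are invented for illustration): `mem::forget` is a safe function that prevents a value’s destructor from ever running, and the old `thread::scoped` API relied on a guard’s destructor to join the thread before borrowed data went out of scope.

```rust
use std::mem;

// A guard type whose Drop impl restores an invariant on scope exit,
// the way thread::scoped's JoinGuard joined its thread.
struct Guard<'a> {
    dropped: &'a mut bool,
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        *self.dropped = true;
    }
}

// Returns whether the guard's destructor actually ran.
fn guard_dropped(forget_it: bool) -> bool {
    let mut dropped = false;
    {
        let g = Guard { dropped: &mut dropped };
        if forget_it {
            mem::forget(g); // entirely safe code, yet Drop never runs
        }
    }
    dropped
}

fn main() {
    assert!(guard_dropped(false)); // normal scope exit runs Drop
    assert!(!guard_dropped(true)); // forgetting the guard skips it
}
```

The real exploit combined this with `thread::scoped` (or an `Rc` cycle) so that the skipped destructor left a dangling reference into the parent’s stack; this sketch shows only the safe-code destructor skip that made it possible.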


The Ember security policy includes no predisclosure because Ember isn’t shipped with any distribution that may want to include package manager updates as soon as the vulnerability is announced.

In contrast, Ruby on Rails announces the vulnerability to distros@openwall 72 hours before the public announcement.

In Rust’s case, I think it makes sense to notify distros@openwall, since Rust is quite likely to start getting packaged in at least some distributions soon.


Modulo UFCS?

In case of really severe issues where an exploit could be easily crafted, I suggest pushing out to older releases too.


I think we’d probably need some classification system. Say: “Critical, Important, Moderate, Minor”.

If a bug is posted to the sec list and it’s triaged as Minor or “Not a security issue” it could be punted to the public issue tracker.

If it’s Critical, the patch might need to be pushed back onto some number of previous releases, so that distros like Debian, which will probably have ancient versions in their repositories, can patch.

What to do with the middle categories I’m not so sure.


I find this cliche rather tiring:

We take security very seriously

Here’s a crack at something different:

“Safety is one of the core principles of Rust, and to that end, we would like to ensure that Rust is a secure language.”


All security bugs in the Rust distribution should be reported by email to

Perhaps encourage (but do not require) people to GPG encrypt messages sent to this list.


I don’t think it’s right to handle soundness issues via this process, at least at this stage. rustc’s code isn’t particularly robust, so there are probably many of them - at this stage I really, really wouldn’t trust rustc to contain malicious safe-Rust code, and some soundness issues require a significant amount of effort to fix and/or break significant amounts of code.

Until we get much better on that front, I would prefer that only issues that can cause vulnerabilities in application code (e.g. some library function not performing bounds checking in some cases) be reported via the security process.
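As a hedged illustration of the kind of library bug meant above (all names here are invented for the example), consider a routine that is handed a length field from untrusted input and must not trust it over the actual buffer size:

```rust
// Sums the first `declared_len` bytes of `data`, where `declared_len`
// comes from untrusted input (e.g. a length field in a packet header).
fn checksum(data: &[u8], declared_len: usize) -> u32 {
    // A buggy "optimized" version might index with `declared_len`
    // directly, or use `get_unchecked` in unsafe code. Clamping to the
    // real length (or returning an error) closes that hole.
    let len = declared_len.min(data.len());
    data[..len].iter().map(|&b| b as u32).sum()
}

fn main() {
    let packet = [1u8, 2, 3];
    // The attacker claims the payload is longer than it is; the clamp
    // keeps this from panicking or, in unsafe code, reading out of bounds.
    assert_eq!(checksum(&packet, 1000), 6);
    assert_eq!(checksum(&packet, 2), 3);
}
```

A missing check of this sort in a widely used library is exactly the case where an application, not the compiler, ends up with the vulnerability.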


I mean, I really wouldn’t want the Leakpocalypse to be handled privately.


One thing that I meant to mention is that I’d prefer to decide which bugs count for this policy via an RFC, to be kicked off next week. We need the mechanics of reporting in place for 1.0, and we can refine the guidelines on what to report over a slightly longer timeframe.


I do not believe so right now, no.


It says it in the last paragraph:

If you would like, you can encrypt your report using our public key.


- delivered to small security team + delivered to a small security team


I would bullet point these five steps, rather than keep them in a single (wordy) paragraph.

Other than that, looks good. +1


The Leakpocalypse could have been handled privately by immediately destabilizing scoped and suggesting people stop using it (which is more or less what happened anyway, minus the privacy).

Then, after some time, make it public and come up with a real solution.