Disabling 'unsafe' by default

This discussion on reddit got me thinking about making the usage of unsafe more restricted.

I think a reasonable case can be made for switching from “always allow unsafe unless #![forbid(unsafe_code)] is specified” to “always deny unsafe unless #![allow(unsafe_code)] is specified”.

Would it make sense to write an RFC for this?

Pro

Allowing the availability of unsafe to be tied to features or other attributes

This allows users to:

  • only allow the usage of unsafe when a specific feature is enabled (e.g. a feature that pulls in some FFI crate)
  • make it explicit that unsafe is only needed on specific platforms (e.g. because the code uses FromRawFd there; see the sketch below)
  • add an “unsafe_optimizations” feature

Code can then add conditional opt-ins like:

#![cfg_attr(feature = "unsafe_optimizations", allow(unsafe_code))]

It’s theoretically possible to opt out under all other combinations, but that would be a lot more difficult to keep track of or to verify.
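
For illustration, here is a sketch (under today’s defaults, with a made-up function) of how the platform case from the list could be scoped so that unsafe is only even compilable on Unix:

#![cfg_attr(not(unix), forbid(unsafe_code))]

#[cfg(unix)]
pub unsafe fn file_from_raw(fd: std::os::unix::io::RawFd) -> std::fs::File {
    use std::os::unix::io::FromRawFd;
    // Safety: the caller guarantees `fd` is an open descriptor that is not
    // used or closed anywhere else.
    unsafe { std::fs::File::from_raw_fd(fd) }
}

With the proposed flipped default, the first line would shrink to the #![cfg_attr(unix, allow(unsafe_code))] form shown above.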

Higher bar of entry

In the spirit of “make easy things easy and hard things possible”, we probably don’t want to make something that is hard (to reason about) easy, as long as it remains possible. Forcing developers to take a slightly longer route to use something might make them think twice about doing it.

More informed decisions

Making unsafe less directly available also forces (new) users to do more research before being able to use it. We don’t want to force a quiz upon them before divulging the way to opt out of safety, but we can at least provide a bit more background about the possible pitfalls of opting out.

Con

Making a language feature less accessible

The unsafe mechanism is often touted as one of the most significant language features Rust brings to the table.

On the other hand, this change would only force users to add one line per source file in which they want to use unsafe, and they also get the option to better control the scope in which it is available.

Breaking change

Switching to denying unsafe by default would be a breaking change.

One option might be to add an extra option to Cargo.toml with which the default can be changed; when it is not defined, it would default to the current behavior.
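
Purely as an illustration (this key doesn’t exist today; the name is made up), such an option could look something like:

[package]
name = "my-crate"
version = "0.1.0"

# Hypothetical key, invented for this example: "allow" keeps the current
# behavior, "deny" flips the default for this crate.
unsafe-code = "deny"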

Another option would be to wait until the next edition of Rust and switch when crates opt-in to the new edition.

3 Likes

I understand the motivation, but - unsafe is already opt-in. It doesn’t seem like locking it away behind an additional door is going to steer people away from using it. As was noted in the actix-web “fiasco”, the author(s) there went out of their way to disable multiple unsafe-related lints (rustc, clippy). I seriously doubt that the proposed change here will really make a dent, and when you take into account its non-trivial impact (e.g. it’s a breaking change), I don’t think it carries its weight.

What I do think needs priority, and is possibly already getting it, is more formalism and clarity around proper usage of unsafe - i.e. the unsafe usage guidelines. As it stands, there’re still lots of open questions around even fundamental things like what exactly constitutes UB - is it forming two &mut references to the same data, or actually reading/writing through them. And so on. If you’re locking away unsafe further in hopes that people will ask more questions about it, then we need to be prepared to actually be able to answer those questions definitively. And I don’t think we’re there yet.

So, I’d like to see that latter area expanded on further before contemplating how to further box up unsafe, if at all.

19 Likes

You’re talking about improving the ability to reason about valid uses of unsafe, with which I wholeheartedly agree.

This isn’t meant to prevent the actix-web “fiasco”, which was only a “fiasco” in the case of implicit “trust”. And as with everything, it’s: trust, but verify. The use case of this additional opt-in would be to improve the ability to “scope” unsafe and thus make it easier for people to “verify”.

I think the article the reddit discussion was about is a better case for that. There, the inflate crate has a specific use case (performance) for its single remaining use of unsafe. For this case a simple feature could be introduced (e.g. an “unsafe_optimizations” feature, enabled by default).
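
As a sketch of what that could look like (the feature name, function and bounds-check example are made up here, this isn’t the actual inflate code):

// Cargo.toml would declare:  [features] default = ["unsafe_optimizations"]
#![cfg_attr(not(feature = "unsafe_optimizations"), forbid(unsafe_code))]

// Fast path: skips bounds checks, only compiled when the feature is on.
#[cfg(feature = "unsafe_optimizations")]
pub fn sum(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        // Safety: `i` is always within 0..data.len().
        total += unsafe { *data.get_unchecked(i) };
    }
    total
}

// Safe (possibly slower) fallback used when the feature is disabled.
#[cfg(not(feature = "unsafe_optimizations"))]
pub fn sum(data: &[u64]) -> u64 {
    data.iter().sum()
}

A consumer that disables the feature then gets a compiler-enforced guarantee that this crate contains no reachable unsafe.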

Now say I don’t really care about the performance and I’ll gladly trade those CPU cycles for the knowledge that my dependency doesn’t do any (possibly) funky unsafe things. In the current situation I’d have to verify that #![forbid(unsafe_code)] is in effect everywhere, which can be a massive undertaking. If the default were flipped I’d only have to verify in which cases #![allow(unsafe_code)] is in effect and check which attributes are set, to verify that unsafe cannot be used without the compiler complaining.

So by switching the default and requiring an additional opt-in to the unsafety opt-in keyword, we get a situation where it’s a lot easier to reason about whether the presence of unsafe in the source code is something I have to worry about, or whether it’s only tied to some very specific use case.

1 Like

But this is already possible with the existing feature flags option. It's certainly up to the crate author to elect to gate it and provide safe (but slower) and unsafe (faster) versions, but ultimately, everything comes down to trust and being responsible, as you say. I'm pretty sure I've seen crates that do just this.

Ok, and what about transitive dependencies of the crate you're relying on? And what about bottoming out in std? I understand that one will have more trust about unsafe usage in std, but what about the grey area in the middle? It sounds like for non-leaf crates, there will be quite a bit of inspection going on.

That said, I do agree that somehow surfacing unsafe usage in a crate is a good idea. But for me personally, the approach you're suggesting is somewhat tangential to that and I'm having a hard time swallowing the breaking change pill.

And the reddit thread also goes into panics in safe code, which are also problematic in terms of "crate quality". So while perhaps not exploitable in a security context, it may still warrant inspection and/or rely on some aspects of "trust". So if we're talking about "can I trust this crate to be stable", the spectrum widens quite drastically.

One potential use case for Rust, which today is served primarily by Ada/SPARK, is in writing SIL4 safety-critical software. The high-volume (> 10^8 units/yr) applications I foresee are in the steering, braking and related subsystems in vehicles, where the underlying hardware platform will be something like a TI TMS570LC4357 with a Cortex-R core. That hardware includes dual-core lock-step CPUs with ECC caches and ECC on flash and RAM. The software will require similar error-avoidance and error-recovery measures. The RFC discussed in this thread could provide one such measure.

2 Likes

How would you address the fact that panics can be rampant in safe code? Or is the only concern memory safety violations?

Great question! To me panic is a thread-local issue, whereas UB is completely pervasive. See Ralf Jung's recent post in another thread to this effect. Encapsulation and local recovery/restart can be applied to panics, but there is no recovery from UB because it's impossible to determine in advance what the compiler might do when its API contract is broken by UB.
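
A minimal sketch of that distinction (just an illustration, not taken from Ralf's post):

use std::panic;

fn main() {
    // A panic in safe code can be caught at a boundary and recovered from:
    let result = panic::catch_unwind(|| {
        let v: Vec<u32> = Vec::new();
        v[0] // panics: index out of bounds
    });
    assert!(result.is_err());
    println!("recovered; the rest of the program is still well-defined");

    // UB has no such recovery point: once e.g. two &mut to the same data
    // exist, the compiler may already have miscompiled surrounding code,
    // so no later check can restore a well-defined state.
}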

5 Likes

How does this interact with macros, such as array_ref?

I agree with that perspective. I don't develop safety-critical software, but I believe SIL4 is the highest certification level and was curious about your thoughts on other reliability concerns, such as panics (never mind that rustc itself would need to be certified). But I don't want to hijack this thread so I'll let this rest.

1 Like

My direct experience with safety-critical hardware and software is in industrial process control and in avionics. It’s an area that has avoided higher-level languages in general because of the lack of trust in the resultant code. Ada/SPARK is the one current exception that I know of. I have hoped that Rust could become another. I’m actually neutral on this specific RFC, but I wished to raise awareness of the general topic. If the safety systems in our cars and airplanes end up depending on software written in Rust, then we have a personal reason to be proactive in finding ways to enhance fault detection and limit fault propagation in Rust, if only to protect our own well-being and that of our families.

2 Likes

“Unsafe” as a name unfortunately carries a strong implication of “it’s bad”, so it’s not surprising that it’s tempting to say #[deny(all bad stuff)].

However, unsafe actually means “Trust me, it’s safe, I know what I’m doing”. So by denying unsafe, you’re denying programmers the freedom to implement things in the ways they think are best or necessary.

There are managed/interpreted languages with enough power inside their sandboxes to be able to not trust programmers at all, but Rust isn’t such a language. Safe Rust isn’t exactly a toy language, but from the beginning it was designed with the assumption that Safe Rust by itself doesn’t have to be able to do everything, and can be a limited, incomplete language.

5 Likes

Many of the discussions are about how bad crates with unsafe are, but I think unsafe is great and I don't want crates that contain unsafe to be judged negatively just because they use unsafe for very specific use cases.

By being able to scope the availability of unsafe (and use the compiler to enforce that it's not available otherwise), I think many of these cases can be tied to specific features people might not be using, or the crate can provide safe (but possibly slow) alternatives and make it clear that these occurrences of unsafe are only available under those circumstances.

This would allow people to more easily trust crates that contain unsafe for things they don't use, when the crate uses unsafe in such a conscious manner.

The problem this tries to solve is not about trust in code quality or fitness for use. Many crates I use are used in a way that they are not exposed to untrusted inputs. They get their specific job done and are generally "good enough".

The reason unsafe is problematic, as @Tom-Phinney pointed out, is undefined behavior. By triggering UB or memory corruption, the use of these non-critical crates can influence the parts of the program that do work with untrusted inputs. Suddenly a benign-looking unsafe in one crate can cause a security problem in another very well validated and audited crate. That is the problem with invalid usage of unsafe.

And I agree with you that the specification of what can safely be done in unsafe is at least as important as limiting the usage of unsafe. And I absolutely don't want to prevent the usage of unsafe. But I'm looking for something that helps in limiting which uses of unsafe I might have to worry about.

True, but it's harder to validate that all uses are behind such feature flags.

By validating them one crate at a time. Often a crate depends on another crate in a way that lets you easily validate that you're not actually going to hit that crate.

Many (most) uses of unsafe are properly scoped inside a module or a crate. In most cases their use can thus be evaluated within that crate.

By making it a crate-wide option whether unsafe is forbidden by default (and perhaps only flipping the default with a new edition), that breakage should be pretty painless.

I’m generally positively inclined towards proposals that enhance the state of correctness and static guarantees in the world of computing and software. To that end, I want to be able to have fine grained control over what effects my crate and other crates (including the entire dependency graph) are allowed to have. For example, I want to ensure that it is impossible for a crate X in the reflexive transitive dependency closure of my crate to:

  1. cause UB (no unsafe { .. })
  2. cause side effects (no fn, only const fn)
  3. cause panics (so they are only allowed to call !panic fn, which doesn’t exist today but…)
  4. cause any form of divergence including infinite loops (i.e. only allowed to call total fn).

The latter 3 I can ostensibly make happen by creating wrapper const fn (and total, …) around the functions from the dependencies, but it would be nice to automate denying 2-4 with compiler support. I’d also like to control which of 1-4 are possible selectively and mix and match them.
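
As a minimal, self-contained sketch of that wrapper idea for point 2 (the dep module is just a stand-in for an external crate):

// Stand-in for a dependency; in reality this would be an external crate.
mod dep {
    pub const fn add(a: u32, b: u32) -> u32 { a + b }
}

// Compiles only because `dep::add` is itself a `const fn`; if the
// dependency performed I/O or other side effects, this wrapper would
// fail to compile, so const-ness is enforced transitively.
const fn add_checked(a: u32, b: u32) -> u32 {
    dep::add(a, b)
}

fn main() {
    const N: u32 = add_checked(2, 3);
    println!("{}", N);
}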

Together with possible dependent / refinement typing schemes, this makes Rust capable of enforcing very strict rules that should further enhance the safety guarantees that @Tom-Phinney is seeking for avionics and such.

However, I don’t think that we should make unsafe_code deny-by-default as that will both induce unnecessary breakage and may not be right for all applications. But I do think that standardizing on unsafe_optimizations is a great idea. It would also be a good idea to forbid io_code and such.

2 Likes

In case someone is worried about tools that measure “unsafety” of crates by counting the number of unsafe blocks or lines of unsafe code in a crate, here’s a solution to bring it down to just one unsafe line:

macro_rules! hold_my_beer {
    ($e:expr) => {unsafe { $e }}
}

3 Likes

Presumably that tool would run something like cargo expand and count the number of uses post macro expansion :wink:

1 Like

pub fn ok_fine<A, B>(a: A) -> B {
    // generic `transmute` won't compile here; `transmute_copy` will, and is just as unsound
    let b = unsafe { std::mem::transmute_copy(&a) };
    std::mem::forget(a); // avoid also running `a`'s destructor
    b
}

:wink:

3 Likes

Obviously you can beat static analysis if you want to with this, but then you're doing it on purpose, and it is straight up UB :wink:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.