Working around back-compat issues with unsafe?


With a heavy heart, I’ve included a section in an RFC I submitted that asks to expand the definition of “unsafe” to allow marking code as “unsafe” in order to avoid backwards-compatibility hazards when safe code changes. The RFC may or may not be accepted, but either way, I think the question of whether we should be able to use “unsafe” to address this sort of concern deserves discussion.


Expanding “unsafe”

Interior::from_outer_unsafe is not actually “unsafe” (in my understanding of Rust’s current definition of unsafe), but it does pose a backwards-compatibility issue: In current Rust, there is no way that the destructor for a field of a container will run unless the destructor for the container has also run, and it is possible that there is Rust code in the wild that relies on this property being true. from_outer_unsafe changes this: using this function, it’s possible to cause a field’s destructors to be called without having called the container’s destructor. Rust explicitly allows that failing to call a destructor is “safe” behavior, so the from_outer_unsafe behavior should be considered “safe” from a technical perspective. But, borrowing an example from @eefriedman:

struct MaybeCrashOnDrop { c: bool }

impl Drop for MaybeCrashOnDrop {
    fn drop(&mut self) {
        if self.c {
            unsafe { *(1 as *mut u8) = 0 }
        }
    }
}

pub struct InteriorUnsafe { m: MaybeCrashOnDrop }

impl InteriorUnsafe {
    pub fn new() -> InteriorUnsafe {
        InteriorUnsafe { m: MaybeCrashOnDrop { c: true } }
    }
}

impl Drop for InteriorUnsafe {
    fn drop(&mut self) {
        self.m.c = false;
    }
}

The above program cannot be made to crash by safe code in current Rust, but it could be made to crash by safe code if Interior::from_outer were made safe: the field’s destructor could then run with c still true, without InteriorUnsafe’s destructor ever clearing it.

An alternative design is discussed below under Alternatives, but I could not identify an alternative that did all of the following:

  • Preserved the current definition of unsafe;
  • Did not pose a backwards-compatibility risk;
  • Integrated the implementations of Drop and DropInterior.

As such, this proposal requires expanding the definition of "unsafe" to also include “backwards compatibility risk”. Or, at least, to have an exception made that allows the unsafe marker on from_outer_unsafe to mean “backwards compatibility risk”, rather than “memory unsafe”.



I’m going to try to distill this, in hopes of getting some feedback:

  1. My current understanding is that we want to restrict the use of unsafe in the standard library to only refer to memory unsafety.
  2. Failing to call a destructor is “safe” by Rust’s definition.
  3. In current Rust, there is no way for the destructor for a field in a container to run unless the destructor for the container itself also runs.
  4. It’s possible to write currently-correct unsafe code that relies on this property: There’s an example up above.
  5. The Interior<T> proposal can prevent a container’s destructor from running, while allowing fields’ destructors to run.
  6. This is “safe”, by my understanding of Rust’s definition of safe: all we’ve done is prevent a destructor from running, which is a subset of what mem::forget does. (In fact, it’d be possible to write a type-specific mem::forget in terms of Interior<T>: just from_outer on the container type, and then recursively from_outer on all the Drop-implementing fields of Interior<T>.)
  7. But it’s also a backwards-compatibility risk. (See point 4.)

So, I think there’s a tension between the current definition of “unsafe” and “backwards compatibility”, revolving around the possibility that users might rely on the order in which destructors will be called. How can this tension be resolved? I only see the following options:

  1. Expand the definition of “unsafe” to encompass backwards-compatibility risk;
  2. Declare that the code example above is buggy, because it relies on having all destructors called;
  3. Do not accept Interior<T> in a form that might cause backwards-compatibility risk (such as by disallowing Interior<T> to operate against Drop types).

Each of these has costs:

  1. means diluting the definition of unsafe;
  2. means it’s harder (possibly significantly harder) to write “correct” unsafe code;
  3. could mean forcing the compiler to maintain two parallel destructor mechanisms.

What am I missing? What’s the best approach?


ping @alexcrichton @aturon


The fundamental rule we currently follow for the unsafe keyword is this: no combination of functions can allow a crate which never uses the “unsafe” keyword to cause undefined behavior. Currently, there aren’t any other restrictions.

This has various implications. One is that whether a function needs to be marked unsafe can depend on whether other functions are marked unsafe. Another is that extending the functionality of the standard library or the language itself can involve backwards-compatibility hazards.
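The first implication can be sketched concretely (all names below are hypothetical, not from the standard library): whether a mutator must be marked unsafe depends entirely on the fact that another, safe function uses unsafe internally and trusts an invariant.

```rust
mod checked {
    const LEN: usize = 4;
    static DATA: [u8; LEN] = [10, 20, 30, 40];

    // Invariant: `idx < LEN`. Unsafe code below relies on it.
    pub struct Idx { idx: usize }

    impl Idx {
        pub fn new(idx: usize) -> Option<Idx> {
            if idx < LEN { Some(Idx { idx }) } else { None }
        }

        // Safe only because every way of setting `idx` upholds the invariant.
        pub fn get(&self) -> u8 {
            unsafe { *DATA.get_unchecked(self.idx) }
        }

        // Must be `unsafe`: a safe version would let safe callers break
        // the invariant that `get` relies on.
        pub unsafe fn set_unchecked(&mut self, idx: usize) {
            self.idx = idx;
        }
    }
}

fn main() {
    let i = checked::Idx::new(2).unwrap();
    assert_eq!(i.get(), 30);
    assert!(checked::Idx::new(9).is_none());
}
```

If get performed a bounds check instead, set_unchecked could be safe; neither function’s signature can be judged in isolation.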

It’s possible we need a better rule; I’m not really sure what that would look like, though.


Expanding the definition of unsafe to include backwards compatibility isn’t something that I would personally want to see happen, but overall the guarantees that unsafe code relies on are definitely a little bit nebulous. I haven’t been following this discussion about Interior much, but if it’s modifying the guarantees that unsafe code can rely on, then it basically just means that measures may need to be put in place to protect against it.

I am aware of code that relies on the destruction order of fields for memory safety (it all hinges on unsafe internally at some point), but if we’d like to reserve the right to restructure the order then that seems fine to me so long as we message it clearly, test it, and perhaps provide an opt-in to a deterministic order. An example of this is that we no longer zero on moving but instead place a somewhat arbitrary bit pattern to future-proof against those who might rely on zeroing.


Thank you for putting it that way; it clarified something for me… What I’m talking about is unsafe being acceptable in an API when using the API would change the guarantees on which unsafe code might rely. That is, if an API does something that might make unsafe code misbehave, then, even if it can’t make “safe” code misbehave, the API should still be called unsafe. Does this seem more reasonable?


Just wanted to throw out the idea of introducing the function as unsafe now, but planning to (backwards-compatibly) drop the ‘unsafe’ marker at some point in the future when code is more likely to have been changed to take into account the modified guarantee. (Of course, if this is done, _unsafe should not be used in the function name.)


That seems like a good idea, but I’m not sure how I can make it work… Since Rust has a strong commitment to backwards compatibility, it’s not clear that I can or should rely on code being changed to reflect the modified guarantee, so I was planning on having two functions: Interior::from_outer<T: !Drop> (which is safe, because it doesn’t change any unsafe guarantees that legacy-Drop-implementers might rely on), and unsafe Interior::from_outer_unsafe<T> (which could change unsafe guarantees). I’m not sure if this is adequate, but I think I couldn’t change from_outer<> to safe while also imposing a <T: !Drop> bound on it, in a backwards-compatible way.

Either way, these functions would first be deployed as unstable, so I guess there’d be some wiggle room to answer this question over time, and I can proceed with prototyping an implementation. But I don’t think these functions should be stabilized without clarifying the rules around unsafe in this area…


A subtle point here: if unsafe code is wrapped in a safe function, that code is now safe. So basically your widget-hoozit can’t cause established safe abstractions that we agreed should work to fall apart. If it only makes unsafe code that opts into using it unsound, then that is probably acceptable. (unsafe is very “rules lawyer-y” in that regard: e.g., it’s “safe” to make null or dangling raw pointers… it’s just not safe to dereference an arbitrary raw pointer.)
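That “rules lawyer” point can be demonstrated entirely in safe code: constructing null or dangling raw pointers never requires unsafe; only dereferencing does. A small sketch:

```rust
fn main() {
    // Constructing a null raw pointer is safe.
    let null: *const u8 = std::ptr::null();
    assert!(null.is_null());

    // Constructing a dangling raw pointer is also safe; the borrow
    // checker does not track raw pointers.
    let dangling: *const u8 = {
        let x = 42u8;
        &x as *const u8
        // `x` goes out of scope here; `dangling` now points at dead stack memory
    };
    assert!(!dangling.is_null());

    // Only dereferencing needs `unsafe`, and dereferencing either of
    // these would be undefined behavior:
    //     unsafe { *null }      // UB
    //     unsafe { *dangling }  // UB
}
```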