[Pre-RFC] Another take at clarifying `unsafe` semantics

List of major changes since this was first published:

  • 2018-03-14 @ 22:06 CET -- Integrated feedback from @mark-i-m and @eternaleye
  • 2018-03-15 @ 21:37 CET -- Integrated some of the feedback from @CAD97.

The many meanings of the "unsafe" keyword in Rust unnecessarily reduce its ergonomics and learnability. That point is regularly brought up, two recent examples on this forum being "Ambiguity of “unsafe” causes confusion in understanding" and "What does unsafe mean". Other threads of discussion also exist in other avenues, including RFC discussions and users forum posts.

So far, most discussions of this problem have stalled, and my understanding is that it is because they ended up with proposals which introduce excessive complexity or churn in the Rust ecosystem compared to their perceived usefulness. For this reason, I would like to propose here a purposely minimal and iterative syntax evolution, which I think would better fit Rust's stability-focused evolution process, while still fulfilling the intended goals of a clarification of unsafe abstraction semantics.

Current uses of the unsafe keyword

The unsafe keyword is used in several different circumstances, which I believe can be all narrowed down to two fundamental use cases:

  • Unsafe blocks are regions of code where one can take actions that violate memory/type/thread safety, and becomes responsible for maintaining these properties.
  • Unsafe abstractions are abstractions that rely on and/or provide safety-critical guarantees which are not (and usually cannot be) verified by the Rust compiler.

I believe that of those, unsafe blocks are by far the best-understood use case of unsafe in Rust, and can safely (pun intended) remain the way they are, whereas unsafe abstractions is an area that could use improvements. In the remainder of this post, I will thus focus the discussion on unsafe abstractions.

Much has been written about unsafe abstractions in the past, by illustrious minds such as @nikomatsakis and @withoutboats for example. I will not fully repeat these discussions here, but it is sufficient to state that they all revolve around the idea of an unsafe contract which is essential to memory, type and/or thread safety but cannot be checked by the compiler.

More specifically, I think that two ingredients from the theory of design-by-contract are critical to the correct building and understanding of unsafe abstractions:

  • Preconditions are properties which the user of an abstraction must guarantee in order for the abstraction to behave as intended.
  • Postconditions are properties which are guaranteed by the abstraction, and can be relied upon by the user of said abstraction.
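Both notions already exist in today's Rust, although only preconditions get dedicated syntax; postconditions live purely in documentation. A minimal sketch (function names are illustrative, not from any existing API):

```rust
/// # Safety (precondition): `idx` must be strictly less than `v.len()`.
unsafe fn read_at(v: &[u8], idx: usize) -> u8 {
    unsafe { *v.get_unchecked(idx) }
}

/// Postcondition (documentation-only today): for a non-empty `v`, the
/// returned index is in bounds, so unsafe callers may rely on it.
fn clamp_index(v: &[u8], idx: usize) -> usize {
    idx.min(v.len().saturating_sub(1))
}

fn main() {
    let v = [10u8, 20, 30];
    let i = clamp_index(&v, 99);
    // SAFETY: v is non-empty, so clamp_index's postcondition applies.
    let x = unsafe { read_at(&v, i) };
    assert_eq!((i, x), (2, 30));
}
```

Note how the safety of the `unsafe` block hinges on `clamp_index`'s postcondition, yet nothing in the language records that dependency.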

The theory of design-by-contract also introduces the third notion of an invariant, which is not currently covered by this proposal, but can be introduced in the future if needed. This notion, and the motivation for not including it in this proposal, will be discussed further in the "Possible extensions" section.

High-level syntax proposal and transition plan

Coming from the above world view about the dual semantics of unsafe, two shortcomings of the existing unsafe abstraction syntax emerge immediately:

  • It fails to strongly expose the notion of an underlying software contract, by being syntactically too close to an unsafe block, whose semantics are merely "trust me".
  • It does not mark a clear distinction between preconditions and postconditions, whose role in an unsafe codebase is however fundamentally different.

The way I propose to address these two shortcomings is to add two modifiers to the "unsafe" keyword, "pre" and "post", which can be used to clarify that the use of "unsafe" is about introducing unsafe preconditions and postconditions in abstractions. Using functions as an example, these modifiers could use the following syntax:

// This function is only safe if some input preconditions are met
unsafe(pre) fn i_trust_you(x: In) { /* ... */ }

// This function guarantees some safety-critical output postconditions
unsafe(post) fn you_trust_me() -> Out { /* ... */ }

// This function has both unsafe preconditions and unsafe postconditions
unsafe(pre, post) fn we_are_bound_contractually(x: In) -> Out { /* ... */ }

As I will detail further down this post, these modified versions of unsafe aimed at abstraction design will have slightly different semantics with respect to the current "unsafe-for-abstractions" syntax, leveraging the added pre-/postcondition clarification in order to more closely reflect the abstraction author's intent.

If that functionality is found to be useful, accepted and implemented, then a progressive rollout of the new syntax can easily be envisioned:

  • Start by allowing usage of the modified forms of unsafe in abstractions
  • In the next Rust edition, lint against usage of un-modified "unsafe" in abstractions
  • In the Rust edition after that, make it an error to use un-modified "unsafe" in abstractions

Detailed design

Unsafe functions

As mentioned above, unsafe functions and methods can have unsafe preconditions, postconditions, or both. Adding these modifiers alters their semantics as follows:

  • unsafe(pre) functions can cause unsafety if some manually specified preconditions are not met. Therefore, they can only be called from unsafe code blocks. This is the most widespread use of unsafe functions today.
  • unsafe(post) functions provide guarantees which can be relied upon by unsafe code. Therefore, they can only return to their caller inside of an unsafe code block. These functions can safely be used when collecting information that is to be passed to an unsafe(pre) function, such as [T]::get_unchecked().
  • unsafe(pre, post) functions combine both of these semantics: they can cause unsafety if mishandled, and provide output which can be relied upon by unsafe code. A possible example would be a function which receives as a parameter a *const Vec<T> which is assumed to be valid and returns its length, which is then used for unchecked indexing.
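The pre/post pairing described above can be approximated in today's Rust, where `[T]::len()` informally plays the unsafe(post) role feeding `[T]::get_unchecked()`'s unsafe(pre) contract. A sketch (the function name is illustrative):

```rust
fn last_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        return 0;
    }
    // v.len() plays the unsafe(post) role: its result is trusted...
    let idx = v.len() - 1;
    // ...when discharging get_unchecked's unsafe(pre) contract.
    // SAFETY: idx < v.len() by construction, since v is non-empty.
    unsafe { *v.get_unchecked(idx) }
}

fn main() {
    assert_eq!(last_or_zero(&[1, 2, 7]), 7);
    assert_eq!(last_or_zero(&[]), 0);
}
```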

Contrary to current unsafe fns, these modified forms of unsafe do not implicitly make the body of the function an unsafe block. The rationale for this is that only the author of the function knows which parts of the function, if any, are unsafe.

In fact, since it is an API-breaking change to add preconditions, we could well imagine abstraction designers proactively adding unsafe preconditions to their functions, but not using them right away, starting with a safe implementation with the intent of moving to an unsafe one later on. No "accidental unsafety" should result from this API choice, and removing the unsafe precondition later on if it does not prove useful would not be a breaking change. Conversely, adding unsafe postconditions to an existing function should not be a breaking change for users either, as it only makes the function safer to use.
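This "reserve a precondition now, use it later" pattern already works with today's unmodified unsafe, precisely because unsafe fns do not have to contain any unsafe operations. A sketch (names are illustrative):

```rust
/// # Safety: no precondition is relied upon yet; the `unsafe` marker
/// reserves the right to add one later (e.g. for a future unchecked
/// implementation) without breaking the API.
unsafe fn sum_first(v: &[u64], n: usize) -> u64 {
    // The current body is entirely safe code.
    v.iter().take(n).sum()
}

fn main() {
    // Callers must already opt in with an unsafe block today,
    // so tightening the contract later breaks no call sites.
    let s = unsafe { sum_first(&[1, 2, 3, 4], 2) };
    assert_eq!(s, 3);
}
```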

These last two statements reflect an important rule of design-by-contract, which is the Liskov Substitution Principle. That principle can be spelled out in natural language as follows:

It is okay for an abstraction's implementation to require weaker preconditions than advertised on the outside, or to guarantee stronger postconditions than advertised on the outside. But requiring stronger preconditions or guaranteeing weaker postconditions than advertised is a violation of the abstraction's contract, and changing a published abstraction's contract in that direction is thus a breaking API change.

Unsafe traits

Unsafe traits go one step beyond unsafe functions by defining a contract which any implementation of that trait must uphold. Unsafe traits are matched with unsafe impls using the same unsafe modifiers, which are essentially a legally binding statement from the person that is implementing the trait:

I have read and understood the unsafe preconditions and postconditions of this trait. In accordance with the Liskov Substitution Principle, I promise not to rely on any further unsafe precondition, nor to violate any expected unsafe postcondition. I am allowed, however, to have weaker preconditions or stronger postconditions.

Like unsafe functions, implementations of an unsafe trait follow an unsafe contract: anyone who uses an implementation of an unsafe trait that has preconditions must uphold those preconditions or face memory/type/thread unsafety. And anyone who uses an implementation of an unsafe trait that has postconditions can rely on them in safety-critical code.

However, this raises the immediate question of what "using a trait" means. As will be discussed in the "Unresolved questions" section, this nut actually proved harder to crack than expected. But after multiple iterations around this concept, the author of this Pre-RFC is now leaning towards the following definition:

A trait is used by adding that trait's name to a list of trait bounds.

This definition matches the three existing examples of stabilized unsafe traits in the Rust standard library, namely TrustedLen, Send, and Sync, along with a fourth trait which also has safety-critical postconditions, but is not unsafe because a Rust programmer is not allowed by the compiler to implement it incorrectly: Copy.

All of these traits have the following properties in common:

  • They are all marker traits, which do not provide any methods, and are used by being added to a set of trait bounds in generic code.
  • They all guarantee properties which unsafe code is allowed to rely upon, and which could be violated by a careless implementor, resulting in unsafety.
  • It is always safe to use them, they do not have safety-critical preconditions.
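These three properties can be reproduced with a user-defined marker trait in today's Rust. A sketch, with Zeroable as an illustrative name (not a std trait):

```rust
use std::mem;

// Contract (an unsafe(post) in this proposal's terms): the all-zeroes
// bit pattern is a valid value of the implementing type.
unsafe trait Zeroable: Sized {}

// SAFETY: all-zeroes is a valid u32 / byte array.
unsafe impl Zeroable for u32 {}
unsafe impl Zeroable for [u8; 16] {}

// Generic unsafe code relying on the trait's postcondition: the only
// "use" of the trait is its appearance in the bound list.
fn zeroed<T: Zeroable>() -> T {
    // SAFETY: Zeroable guarantees all-zeroes is a valid T.
    unsafe { mem::zeroed() }
}

fn main() {
    assert_eq!(zeroed::<u32>(), 0);
    assert_eq!(zeroed::<[u8; 16]>(), [0u8; 16]);
}
```

A careless `unsafe impl Zeroable for SomeRefType` would make the safe function `zeroed` unsound, which is exactly why the impl, not the bound, carries the unsafe marker.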

From these observations, I would tentatively reach the following conclusions:

  • All of these traits should be marked as unsafe(post).
  • Implementing them should require an unsafe(post) or unsafe marker (see below), as a statement by the implementor that the trait's postconditions have been upheld.
  • We do not need unsafe(pre) traits at this point in time, only unsafe(post) ones, and the first implementation of this pre-RFC should thus only provide the latter.

Here are some syntax mockups of what unsafe(post) traits could look like:

unsafe(post) trait SafetyCriticalProperty {}

// Syntax option 1: Impls also use the "post" modifier. The rationale
//                  for this syntax is that it is consistent with the
//                  trait's declaration.
unsafe(post) impl SafetyCriticalProperty for MySafeType {}

// Syntax option 2: Impls do not use the "post" modifier. The rationale
//                  for this syntax is that it is consistent with unsafe
//                  blocks, which are about _doing_ something unsafe, and
//                  we are indeed doing something unsafe by implementing
//                  an unsafe(post) trait for a type.
unsafe impl SafetyCriticalProperty for MySafeType {}

Finally, like any trait, unsafe(post) traits can have methods. These methods can rely on the host trait's postconditions, but if they play a critical role in ensuring said postconditions, then they should be marked as unsafe(post) as well to clarify this point to implementors and users. Therefore, and in contrast with previous versions of this proposal, I am no longer proposing that methods of an unsafe(post) trait be implicitly marked as unsafe(post).

Unsafe associated methods

Like all methods, a trait's associated methods can be unsafe(pre) or unsafe(post). In the context of a trait definition, this sets a precondition which every implementation of that method can rely on, or a postcondition which every implementation of that method must guarantee.

For example, let us assume that we lived in an alternate universe where the way in which TrustedLen modifies the semantics of Iterator::size_hint() by adding an unsafe postcondition to it would have been frowned upon as "spooky unsafe action at a distance". How could we have handled that differently? One possibility would have been to explicitly add a new Iterator method which upholds the required unsafe contract:

// There is really nothing unsafe about implementing this trait...
trait TrustedLen: Iterator {
    // ...as long as this dangerous method is correctly implemented.
    // (Because it is dangerous, callers need an unsafe block to rely on it)
    unsafe(post) fn len(&self) -> usize;

    // Unlike len(), this method promises nothing and is thus not unsafe.
    // Reimplementing it as (self.len() % 3) == 0, although evil, should
    // thus not mislead TrustedLen users into performing unsafe actions.
    fn len_is_even(&self) -> bool { (unsafe { self.len() } % 2) == 0 }
}
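Since unsafe(post) fn does not exist today, the closest current-Rust encoding hoists the contract onto an unsafe trait whose safety documentation covers the method. A sketch with illustrative names (TrustedCount and UpTo3 are not real std items):

```rust
// Contract: count_exact() returns exactly the number of remaining items.
// Implementing this trait is the unsafe act; calling count_exact() is safe.
unsafe trait TrustedCount: Iterator + Sized {
    fn count_exact(&self) -> usize;
}

struct UpTo3(u32);
impl Iterator for UpTo3 {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.0 < 3 { self.0 += 1; Some(self.0) } else { None }
    }
}
// SAFETY: UpTo3 yields exactly 3 - self.0 further items.
unsafe impl TrustedCount for UpTo3 {
    fn count_exact(&self) -> usize { (3 - self.0) as usize }
}

// A consumer that could, in principle, use the exact count for
// unchecked pre-allocation; here it merely checks the contract.
fn collect_exact<I: TrustedCount>(it: I) -> Vec<I::Item> {
    let n = it.count_exact();
    let v: Vec<_> = it.collect();
    assert_eq!(v.len(), n);
    v
}

fn main() {
    assert_eq!(collect_exact(UpTo3(0)), vec![1, 2, 3]);
}
```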

Possible extensions

Unsafe invariants

In addition to preconditions and postconditions, design-by-contract introduces a third concept called invariants. Invariants are properties which are expected to remain true throughout the entire duration for which an abstraction is used.

Contrary to what was stated by a previous version of this proposal, invariants cannot be fully reduced to a precondition/postcondition pair which applies to each abstraction entry point, because such a reduction only makes a statement about properties which must hold true before and after the time where an abstraction is used, and not during that time. The difference between an invariant and a precondition/postcondition pair can be observed if an abstraction is concurrently used by multiple pieces of code, for example in the case where a Sync type with unsafe methods is shared by multiple threads.

I do not currently see a benefit in writing down the notion of an unsafe invariant in Rust code. It does not seem to introduce new constraints on the way rustc should process unsafe abstractions with respect to unsafe(pre, post), because it makes a statement about properties which must be true during the time where an abstraction is used, and Rust has no syntax for that. So in my view, this can probably remain a concept that is solely exposed through manually written documentation, at least initially.

However, should I be proven wrong, the proposed syntax could naturally be extended to encompass this concept by adding an unsafe(invariant) modifier, which would imply unsafe(pre, post) and additionally require the unsafe property to remain valid throughout the entire duration for which the unsafe abstraction is used.

Unsafe types

Much like unsafe functions can exist in isolation, one could envision allowing for "unsafe types", which follow an unsafe contract like an implementation of an unsafe trait.

// No matter how the definition of this type changes in the future,
// we guarantee that it will be forever possible to transmute pointers
// to Sized types into this type.
unsafe(post) struct OpaqueSizedPtr(usize);

In this case, every method of an unsafe(pre|post) type should probably be implicitly declared unsafe(pre|post), because it is subject to the type-wide preconditions and postconditions.

However, I would advise against requiring unsafe(pre|post) on the impl blocks of unsafe types for two reasons:

  • Not every type definition comes with an impl block. For example, type A = B doesn't.
  • Impl blocks are allowed to target multiple types in Rust, and authors of blanket impls should not need to consider whether their impls may cover unsafe types or not.

Because of this required syntax inconsistency, and because a need for unsafe types has not emerged so far in Rust's history, I would personally be against allowing for unsafe types unless a strong rationale for them is provided.

At the same time, however, I can see how they would "close the loop" of unsafe abstractions by allowing every Rust abstraction to have an unsafe contract attached to it. So I could be convinced that they are a worthy addition. Should they turn out to be wanted, the above mockup should be enough proof that the proposed syntax is flexible enough to allow for them.

unsafe(pre) traits

Although that could be considered an overall improvement to its consistency, this proposal does not suggest allowing for unsafe(pre) traits initially, for the following reasons:

  • The need for them has never arisen so far.
  • They would have remarkably confusing syntax and semantics.
  • Their simplest use cases seem to be fulfilled by unsafe(post) trait bounds.

According to the definition that was given above, an unsafe(pre) trait would be a trait for which the very action of adding that trait as a trait bound to generic code could cause memory/type/thread-unsafety if some safety-critical preconditions are not met.

It is very hard to envision a situation in which this would be the case. And in fact, we do not even have syntax for this as of today (unsafe markers in trait bounds, anyone?). What we do have today, however, is a way for a trait to rely on safety-critical preconditions from the underlying type, and that is actually simple: just put a trait bound on an unsafe(post) trait which guarantees said preconditions as a postcondition:

trait ImplMustBeSync: Sync {}
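Expanding that one-liner into a worked example in today's Rust (trait and type names are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// The Sync supertrait bound acts as the trait's "unsafe precondition",
// discharged once and for all by Sync's own unsafe(post) contract:
// a type cannot implement SharedCounter without being Sync.
trait SharedCounter: Sync {
    fn get(&self) -> u64;
}

struct Counter(AtomicU64);
impl SharedCounter for Counter {
    fn get(&self) -> u64 {
        self.0.load(Ordering::Relaxed)
    }
}

// Generic code may share any SharedCounter across threads, relying
// only on the trait bound for thread-safety.
fn read_concurrently<T: SharedCounter + Send + 'static>(c: Arc<T>) -> u64 {
    let c2 = Arc::clone(&c);
    let handle = thread::spawn(move || c2.get());
    handle.join().unwrap() + c.get()
}

fn main() {
    let c = Arc::new(Counter(AtomicU64::new(21)));
    assert_eq!(read_concurrently(c), 42);
}
```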

Given these considerations, I cannot think of any circumstance in which we would want to add unsafe(pre) traits to our unsafe abstraction vocabulary. But should they turn out to be actually needed someday, the syntax for them can again easily be made available.

Extended static analysis of unsafe contracts

The clarification of unsafe contracts that is proposed by this pre-RFC could be leveraged to build static code analyzers which lint some suspicious constructs in unsafe code.

The simplest form of analysis that could be done would be to assert that the documentation of an unsafe abstraction features a "Safety" section, in which abstraction authors are expected to precisely state the unsafe preconditions and postconditions which the abstraction builds upon.

Going one step further, more elaborate static analysis could also be used to assert that any input which is fed into an unsafe(pre) abstraction emerges from an unsafe block or from an unsafe(post) abstraction. Following this discipline would arguably smooth out the interaction between unsafe blocks and unsafe(post) contracts. However, there are some roadblocks that would need to be overcome before this dream may come true:

  • There should be a way to annotate individual inputs of unsafe abstractions in order to clarify which of these inputs are concerned by the unsafe preconditions and which aren't. Otherwise, false-positives would emerge from incorrect linting of information that is passed into non-safety-critical inputs.
  • Unsafe traits like TrustedLen which turn a non-unsafe(post) function into an unsafe(post) one would need to be deprecated, as these would require manual rule injections in the static analyzer in order to be properly handled.

Unresolved questions

Exact meaning of "using a trait"

During the development of this pre-RFC, pinning a precise meaning on the expression "using a trait" has turned out to be surprisingly difficult. Successive versions of this RFC started by associating the use of a trait with the use of its public interface (associated fns/consts/types/fields...), then began to acknowledge the importance of trait bounds, before the current definition of "using a trait is adding it to a set of trait bounds" was proposed.

The difficulty of pinpointing this meaning should be considered by reviewers of this pre-RFC as a warning sign that this part of the pre-RFC is not very solid yet, and reviewers are thus strongly encouraged to investigate this matter on their own, come up with alternate definitions of this sentence, and propose them to the author. A more refined understanding of traits as an unsafe abstraction will likely result from this process.

Prior art

Design-by-contract has a long history. Its notions were first popularized by Eiffel, and then remained with us throughout the subsequent history of programming, being for example used in many modern formulations of the Liskov Substitution Principle.

One particular use of design-by-contract that was strongly influential on the author of this proposal was the integration of contracts in the Ada language as part of its Ada 2012 revision, and its justification in the accompanying design rationale document from John Barnes.

Formalizing Rust's unsafe code in terms of software contracts also has a long and fruitful history, of which the links above only identify the most recent events. Previous discussions have already led to proposals introducing more contract-centric semantics to unsafe abstraction building blocks, though none of these made it to the implementation stage so far.


Alternatives

  1. Do nothing. Unsafe abstractions will remain confusing to implement and review, making unsafe Rust hard to learn and causing memory, type and thread safety issues that could have been avoided by improving unsafe code ergonomics.
  2. Turn this minimal proposal into something bigger, such as a fully generic effects system or an in-depth integration of design-by-contract into the Rust language. Prior discussions have shown that this is not perceived to be a reasonable price for a clarification of Rust's unsafe abstraction semantics, and is likely to be rejected. Therefore, this path seems to eventually lead to the "do nothing" alternative.


Conclusion

This pre-RFC proposes some syntax and semantics additions which have, I think, the potential to make the "unsafe" keyword much less confusing when used for the purpose of building unsafe abstractions.

The proposed changes are designed to be minimal, and to allow for coexistence with existing code and iterative migration of said code. No existing semantic of "unsafe" is changed, all changes are purely additive and only meant to be ultimately enforced via the progressive linting process that was also discussed in the context of dyn Trait.

By implementing these changes, we can cleanly separate the currently confusing use of unsafe into three orthogonal concepts:

  • Unsafe code, which is allowed to break safety and responsible for not doing so
  • Unsafe preconditions, which must be upheld to guarantee the safety of an abstraction
  • Unsafe postconditions, which unsafe code is allowed to rely upon from an abstraction

Only the two latter uses of unsafe require (long-term) syntax and semantics changes in this proposal, and I believe that they are the minority when it comes to uses of unsafe in current code, with unsafe blocks dominating by far. Moreover, the proposed semantics changes go towards expressing concepts which unsafe abstraction authors could previously only state in more confusing ways. So I think that they should be received positively overall.

By clarifying the meaning of "unsafe" in an abstraction context, new forms of unsafe abstractions are also naturally enabled (such as freestanding unsafe(post) functions), which should enrich the abstraction vocabulary of the unsafe Rust programmer.

So, what do you all think about this idea? Is there something important which I overlooked? A problem which I did not envision? Something that I should change? Or should I proceed towards turning this idea into an actual RFC?


I am pretty sure this is the same mistake I was making and is addressed by @eternaleye here (see the code sample in that post). The key is that an invariant should be true throughout the whole execution, whereas a pre/post condition does not have to be.

While I agree with the overall motivation, I respectfully disagree that this is the right approach... Rather, I think that the right approach is to do a better job at teaching about unsafe. When I was learning Rust, I was never unclear about when to use an unsafe block. In fact, I was never confused about unsafe until I had to start designing interfaces that use unsafe. At that point the question arises: "Should interface X be unsafe?". Perhaps there is a syntactic change that would make that more clear, but I don't know what it is.


You missed a beautiful opportunity here:

// This function has both unsafe preconditions and unsafe postconditions
unsafe(pre, post) fn we_are_bound_contractually(x: Input) -> Output { /* ... */ }

But also, yes, invariants cannot be built from preconditions and postconditions alone in the presence of any form of concurrency over state, including "calling other functions".

EDIT: I feel I better explained why this is the case in this Reddit thread - I'm /u/vitnokanla there, as eternaleye was already taken.


Ah yes, you are both right! That’s an especially silly mistake for me to make, since I’m a heavy writer of concurrent data structures, and keeping invariants upheld at all times is a big part of that. I will update the text accordingly later today :slight_smile:

One other thing to note - the Sync trait has no methods. As a result, your equivalence relation between an unsafe(pre) trait and a safe trait with all methods marked unsafe(pre) falls apart, as it would allow a safe impl of Sync.

In addition, your equivalence relation is already broken: you describe modeling "additional" preconditions by using both markers, but if desugared using the equivalence, this loses information.

1 Like

That is certainly a possible approach, and if I end up translating this to an actual RFC, I will obviously add a "do nothing" alternative.

At the same time, I can't help noticing how you tripped up on the exact same thing which I would like to improve, namely "unsafe" having an unclear meaning in interfaces. With this proposal, the role of unsafe in interfaces would be clearer, as it would be spelled out in terms of a software contract. You need unsafe(pre) if your safety relies on manually checked properties, and you need unsafe(post) if you are making a statement that others can rely on you for safety.

Teaching can always be used as a band-aid for low-usability language areas, but sometimes improving the language's clarity can be a better long-term investment, if it does not come at too high a cost. I see a strong parallel with dyn Trait here: people often get tripped up by types and trait objects behaving differently even though they have extremely similar syntax. Through syntactic clarifications, we may be able to eliminate this common confusion entirely in the long run.

As a minor nitpick, Sync is unsafe(post) (it's the implementors who are doing something unsafe by implementing it, not the users who are doing something unsafe by using it), but I see what you are heading towards here.

Indeed, there should probably not be an equivalence here, but only an implication. If a trait is unsafe(pre/post), then it implies that every interface of that trait is unsafe(pre/post). But stronger semantics are attached by making the trait itself unsafe, which is what allows marker traits to work. Another thing that I should clarify in a future version. Gotta run for now :slight_smile:
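The distinction between the implication and the stronger trait-level unsafety is visible in today's Rust, where a safe trait with unsafe methods and an unsafe trait behave differently at the impl site (names are illustrative):

```rust
// A *safe* trait with an unsafe method: any type may implement it
// without an unsafe impl; only calling the method requires unsafe.
trait WithUnsafeMethod {
    /// # Safety: callers must uphold some documented precondition.
    unsafe fn dangerous(&self) -> u32;
}

// An unsafe trait: here the impl itself is the unsafe act, which is
// what lets a *marker* trait (no methods at all) carry a contract.
unsafe trait MarkerContract {}

struct S;

impl WithUnsafeMethod for S {
    unsafe fn dangerous(&self) -> u32 { 42 }
}
// Would not compile without the `unsafe` keyword:
unsafe impl MarkerContract for S {}

fn relies_on_contract<T: MarkerContract>(_: &T) -> bool { true }

fn main() {
    assert!(relies_on_contract(&S));
    assert_eq!(unsafe { S.dangerous() }, 42);
}
```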

1 Like

@mark-i-m @eternaleye I have now integrated your feedback in the proposal. Compared to the version which you have read, the new revision includes:

  • An improved and deeper discussion of unsafe invariants.
  • A fully rewritten discussion of unsafe traits, which puts more focus on what implementing an unsafe trait entails and how these differ from traits with unsafe methods.
  • A dedicated section on traits with unsafe methods, what this means from a contractual point of view, how it differs from unsafe traits and unsafe functions.
  • A brand new section on possible extensions of this design, which can be envisioned but are not currently proposed as part of this Pre-RFC
  • A brand new section on some prior art which the author found inspiring (this takes a lot of time to collect, better start early).
  • Better code mockups (less redundancy, more jokes).
  • Many wording and stylistic fixes.

Can you have a look and tell me if you have any other feedback?

EDIT: More minor additions made since then:

  • Further wording tweaks (e.g. trait method -> associated methods)
  • More details on allowed unsafe fn contract changes (Liskov strikes again!)
  • Added a first take on an “Alternatives” section.

Can a trait even have an invariant that it, as a trait, requires to hold? I suppose maybe we could envision a world where Copy is an unsafe(invariant) that the type is always able to be cloned by copying its stack representation, or Send/Sync are because they always need to have that property, there’s no pre- or post- involved.

(Along those lines, is there any case for unsafe(pre) trait? I agree that it is conceptually different from unsafe(pre) on every fn, but how exactly can the trait as a whole have any preconditions?)

unsafe(invariant) might make sense for a method on a concrete type, where a publicly-settable member of the structure needs to be within some bounds for the method to be safe. pre is too weak: you can imagine a situation where the method is invoked with a safe state, then immediately the context switches to another thread which invalidates that precondition (if it’s based on interior mutability). But in practice, is there any case where this publicly-settable member being outside its safe range is not a logic error?

With that said, here’s a half-baked counter-proposal. Some of it could be dialed back to a more incremental approach; however, it all is included to paint what I see as the full picture. I’ve also always used unsafe(condition) for everything but unsafe { } for maximum explicitness but you could also theoretically not require the (condition) where only one condition may ever be present.

  • unsafe { /* code */ }: allows fulfillment of unsafe contracts, as today.

  • unsafe(pre) fn as described in the pRFC: the parameters meet some contract when the fn is called (beware interior mutability!)
  • unsafe(post) fn as described in the pRFC: the return value meets some contract that unsafe code may rely on for memory safety.
  • unsafe(invariant) fn: the parameters (this includes self) meet some contract for the entire execution duration of the fn. This is equivalent to unsafe(pre) modulo interior mutability.

  • unsafe(pre/post) trait: doesn’t seem to make sense: before and after what, exactly? They could imply unsafe(pre/post) on all inherent associated fn, but then I think that’s what they’d mainly be used for
  • unsafe(invariant) trait: the implementation of the trait itself means something about the concrete type. This is (I think) any unsafe marker trait that exists in the standard library today.
  • Samely-qualified unsafe must be present on the implementation for the trait.

  • unsafe(pre/post) struct: ??? what would this mean ???
  • unsafe(invariant) struct: members of this struct must meet some invariant. This means that modifying them is unsafe. (You could also potentially apply unsafe(invariant) to individual members for better granularity.) impl blocks are still safe.
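The unsafe(invariant) struct idea can be partially emulated in today's Rust through field privacy, which is how safety-critical invariants are usually protected; a sketch with illustrative names:

```rust
// Emulating an unsafe(invariant) struct in today's Rust: the invariant
// (denom != 0) is protected by field privacy, so safe code outside the
// module cannot break it, and code inside the module may rely on it.
mod ratio {
    pub struct Ratio {
        num: u32,
        denom: u32, // invariant: denom != 0
    }
    impl Ratio {
        pub fn new(num: u32, denom: u32) -> Option<Ratio> {
            if denom == 0 { None } else { Some(Ratio { num, denom }) }
        }
        pub fn floor(&self) -> u32 {
            // Relies on the invariant: this division cannot panic.
            self.num / self.denom
        }
    }
}

fn main() {
    assert_eq!(ratio::Ratio::new(7, 2).unwrap().floor(), 3);
    assert!(ratio::Ratio::new(1, 0).is_none());
}
```

What an unsafe(invariant) struct would add over this pattern is the ability to make the members public while still recording that mutating them is an unsafe act.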

A dialed back version of this would use unsafe(pre/post) for fn, and use bare unsafe for trait (and optionally struct) as unsafe(invariant). You could emulate unsafe(invariant) in unsafe(pre) in the face of interior mutability by taking by value or unique reference, or making one of the preconditions further stronger non-aliasing guarantees.

EDIT(like five seconds after posting): I realize that this fake unsafe(invariant) doesn’t work in the face of mutable globals, because I like to pretend those don’t exist :upside_down_face:

One challenge which I have with unsafe traits is to put a precise definition on the notion of using a trait from the point of view of unsafe contracts. I could definitely use some help in this area. Here is where I got so far:

  • Invoking one of the trait's associated methods? That clearly gets close to invoking an unsafe function in my eyes, and is the way I understand it in that RFC.
  • Somehow touching the trait's other associated items, like associated types, consts, or maybe fields someday? I do not currently discuss that at all. I'm not sure if there is a use of these that can directly make or break an unsafe contract. Review from a more experienced language lawyer would definitely be much welcome here!
  • Using the trait bound itself as a marker of an unsafe contract? This is how existing examples of unsafe marker traits like Sync and TrustedLen work. All current examples are about setting postconditions. If someone could think about a case where the mere act of adding a trait bound relies on unsafe preconditions, then we would somehow need to put trait bounds in an unsafe block, and that would easily get very messy, so I'm not sure if we want to allow for this.
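As a concrete sketch of that third case (all names hypothetical), the unsafe impl is the step that asserts a postcondition, and generic unsafe code is licensed by the mere presence of the trait bound:

```rust
// Sketch: an unsafe marker-style trait carrying a postcondition.
// `SmallPayload`, `Tag`, and `into_buf` are hypothetical names.
unsafe trait SmallPayload {
    fn payload(&self) -> &[u8]; // safe to call
}

struct Tag([u8; 4]);

// SAFETY (postcondition): payload() never returns more than 8 bytes.
unsafe impl SmallPayload for Tag {
    fn payload(&self) -> &[u8] {
        &self.0
    }
}

// Generic unsafe code may copy into a fixed 8-byte buffer without
// re-checking the length: the *trait bound* is what licenses this.
fn into_buf<T: SmallPayload>(t: &T) -> [u8; 8] {
    let mut buf = [0u8; 8];
    let p = t.payload();
    // SAFETY: relies on SmallPayload's postcondition `p.len() <= 8`.
    unsafe {
        std::ptr::copy_nonoverlapping(p.as_ptr(), buf.as_mut_ptr(), p.len());
    }
    buf
}
```

Note that nothing in the function signatures is unsafe to call; the entire contract lives on the bound and the impl, which is what makes this case hard to classify as pre or post at the method level.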

That last realization about what unsafe(pre) trait could mean to unsafe abstraction authors is disturbing. I'm not sure if I want to allow for it anymore. More generally, I should probably add an "unresolved questions" section to the proposal which spells out the troubles which I have with defining "using a trait" precisely.

From this mental perspective, I think Copy is again an unsafe postcondition. There is nothing unsafe about adding a Copy trait bound to your generic code, or about using it by "moving" values to copy them. What is unsafe is the act of implementing Copy for a type, because we are stating a property which unsafe code can rely on working as expected.
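To illustrate (a minimal sketch; `duplicate` is a hypothetical helper, not a standard API): unsafe code may duplicate a value through a raw read precisely because the Copy bound promises that a bitwise copy is a valid, independent value.

```rust
// Sketch: unsafe code relying on the postcondition carried by a Copy bound.
// `duplicate` is a hypothetical helper.
fn duplicate<T: Copy>(x: &T) -> (T, T) {
    // SAFETY: T: Copy guarantees that a bitwise copy yields a valid,
    // independent value, and Copy types cannot implement Drop, so no
    // double-drop can occur.
    let a = unsafe { std::ptr::read(x) };
    (a, *x)
}
```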

I would have a lot more to reply to your comments, but for now I'm out of time :expressionless:

I have published a new version of this proposal which integrates some of the above discussion. Here are the main changes:

  • Moved discussion of the Liskov Substitution Principle earlier, in the “Unsafe functions” section.
  • Rewrote the discussion of unsafe traits again, this time by understanding “using a trait” as “adding that trait to a list of trait bounds”. This turns out to dramatically simplify the discussion, while still correctly reflecting all current usage of unsafe traits in the stable portion of the standard library.
  • Removed unsafe(pre) traits from the proposal, and explained why in the “Possible extensions” section.
  • Added an alternate syntax proposal for impls of unsafe(post), and clarified what are the respective merits of both proposed syntaxes.
  • Removed requirement that methods of unsafe(post) traits be implicitly unsafe, which does not match the new definition of “using a trait” above.
  • Deeply rewrote the discussion of unsafe invariants in an attempt to clarify it.
  • Described a possible application of the proposed syntax clarifications to more in-depth static analysis of unsafe code, which would be able to lint against suspicious use of unsafe which is likely to be incorrect.
  • Added a discussion of the “using a trait” definition problem.

And now, back to the discussion with @CAD97 :slight_smile:

This does not match the definitions which are used in the current proposal.

The current proposal defines an unsafe abstraction to be an abstraction which introduces an unsafe contract between its implementor and its user. In the context of a trait, the implementor is the one who writes an impl block for that trait, and the user is the one who adds the trait to a list of trait bound.

Building on top of this, the proposal states that an unsafe invariant is a property which is guaranteed to be true for the entire duration during which the abstraction is used. This implies, in particular, that the invariant is both a precondition and a postcondition of the abstraction (though as was discussed before, the reverse property does not hold).

This means, in turn, that it is incorrect to say that Copy and Send/Sync have unsafe invariants, because these marker traits do not have unsafe preconditions. There is no circumstance under which it would be unsafe to rely on a correct implementation of these traits. All they do is to provide extra guarantees which unsafe code can rely on, which matches my definition of an unsafe postcondition.

If you believe that the definitions of the current proposal are incorrect or surprising, please share your ideas of terminology improvements with me! :slight_smile:

You are right that this notion is fishy after all. I have removed it from the latest version of the proposal, and added a subsection which goes into more details of what it could mean in the "Possible extensions" section.

As I state in the "unsafe invariants" section of the proposal, I certainly agree that there is such a thing as a notion of an unsafe invariant, which is a superset of both unsafe preconditions and postconditions (we expect the invariant to be true in the beginning, to still be true at the end, and then some).

My rationale for not initially adding invariants to the proposal (but leaving room for adding them in the future) is the following question: do we need to spell out the notion of an unsafe invariant in code? Or, from an alternative perspective, can rustc's behaviour be influenced in a beneficial way by making it aware of the existence of an unsafe invariant?

The motivation for this pre-RFC is that there is a clear benefit in exposing notions of unsafe preconditions and postconditions in code, and marking the distinction between those at the compiler level. Using functions as an example, unsafe preconditions mean that users need to be careful, which translates at the language level into a need to wrap the function calls in unsafe blocks, whereas unsafe postconditions mean that implementors need to be careful in order to preserve safety, which translates at the language level into a need to wrap the return statements in unsafe blocks. This is a useful distinction, which allows usage of unsafe blocks in code to be dialed down to exactly as little as needed, compared with current usage.
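Using today's Rust, the distinction can be sketched as follows (the helper functions are hypothetical; since `unsafe(post)` does not exist yet, that half of the contract lives in a comment):

```rust
// An unsafe *precondition*: the caller must ensure `i < bytes.len()`.
// In the proposed syntax this would be an `unsafe(pre) fn`.
unsafe fn byte_unchecked(bytes: &[u8], i: usize) -> u8 {
    // SAFETY: upheld by the caller per this function's contract.
    unsafe { *bytes.get_unchecked(i) }
}

// An unsafe *postcondition*: callers may rely on the returned index being
// in bounds for non-empty input, so the *implementor* must be careful.
// Today's Rust has no marker for this; the proposal would write it as
// `unsafe(post) fn` and require the return path to sit in an unsafe block.
fn clamped_index(bytes: &[u8], i: usize) -> usize {
    i.min(bytes.len().saturating_sub(1))
}
```

Here `byte_unchecked` demands an unsafe block at every call site, while `clamped_index` is freely callable but would, under the proposal, carry a visible marker that unsafe code downstream depends on its correctness.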

In contrast, it is less clear to me at this point in time how exposing the notion of unsafe invariants at the language level could be used in order to alter the way rustc processes code in a way that clarifies the semantics of unsafe abstractions and makes them easier to implement correctly. One obvious consequence would be that the entire implementation of a function with unsafe invariants should be wrapped in an unsafe block (as with today's unsafe fns), but should that really be considered beneficial?

As discussed above, adding an unsafe function invariant probably means that the entire function body should be considered unsafe, as any action which the function's code takes may accidentally break the unsafe invariant. In that sense, any code construct inside of the unsafe(invariant) function's body becomes unsafe, including constructs which would normally be considered safe.

As discussed above, I now agree with you concerning unsafe(pre) trait, but understand existing uses of unsafe marker traits in the (stable) standard library to be about unsafe postconditions, not invariants. If you disagree about the underlying definitions from the proposal or about the rationale which together lead me to reach these conclusions, please tell me more about why.

As with traits, there is a definition problem here: what does "using a type" mean? Instantiating it, perhaps? Or perhaps manipulating the type or its members, directly or indirectly via methods? And assuming that we wanted unsafe types, how should we annotate the associated impl blocks? (see proposal)

Since unlike with traits, we have never had nor felt a need for unsafe types so far, I have decided that instead of banging my head on this question, I should probably just refrain from adding a notion of unsafe type to the language. This keeps the proposal as minimal as possible, and thus increases its odds of being eventually accepted.

Nevertheless, the current version of the proposal features some tentative discussion of what an unsafe type would be, and how it could be added in the future as an extension of this pre-RFC. See the "unsafe types" section.

Can we even attach nontrivial unsafe contracts to types whose members are all public? I believe there is a reason why current attempts at unsafe coding styles strongly suggest putting unsafe abstractions in a dedicated code module: there are not a lot of unsafe (or even safe) abstractions which one can build on top of a type with public members...

I believe this comes back to the discussion of unsafe(invariant) vs unsafe(post), so I will wait for you to reply to this one.

It’s been about two weeks since this pre-RFC has been around, the topic has been quiet for a while, and I now have two questions for you :slight_smile:

The first one is, do you have any other question, comment, concern… which is not currently listed in this thread or adequately answered by it? If so, please speak up!

The second one is, do you think I should turn this into RFC form? If so, I could use some help from more experienced RFC writers in order to tune the wording, add missing parts, etc. I put this one in poll form:

  • No
  • Yes
  • I could actually mentor that



While the current syntax is poor, the semantics are fine.

In particular currently unsafe functions are what you call “unsafe(pre)”, while unsafe traits are what you call “unsafe(post)”.

An “unsafe(pre) trait” is in current syntax simply a trait with all unsafe methods.

“unsafe(post) function” and “unsafe(post) types” exist as methods and associated types of unsafe traits.

“unsafe(post)” is meaningless for freestanding items since their implementation can’t change due to backwards compatibility.

However, it’s true that functions that are called by unsafe code can cause memory-unsafety if their functionality is non-compatibly broken, but that’s more of a property of the calling unsafe code. Requiring “unsafe(post)” on the callee seems inappropriate, since unsafe code may in theory want to rely on any behavior of safe code, so there is no way to figure out whether a function should be marked “unsafe(post)”

Having the syntax specify “unsafe(pre)” and “unsafe(post)” would have been better, but changing it now might not be really worth it.

Regarding “design-by-contract”, the proper solution is to add dependent types to Rust, so that pre and post conditions can be encoded as values of dependent types, and they can be statically checked.


Any contribution to the syntax bikeshed is most welcome :slight_smile: If you do not like the proposed syntax, feel free to use the unabbreviated "unsafe preconditions" and "unsafe postconditions" expressions, which can hopefully be more readily agreed upon.

An “unsafe(pre) trait” is in current syntax simply a trait with all unsafe methods.

While I started with this definition too (along with a variant for unsafe(post) traits), because it looked like a very convenient and sensible shorthand, I backtracked from it after I came to the realization that traits are more than the sum of their public interfaces. If that were not the case, there would be no marker traits.

It all boils down to this question: what does "using a trait" mean? Does it mean using the interfaces provided by that trait? Or does it mean adding that trait to a list of trait bounds in a generic construct?

You can only describe marker traits using the latter definition. And you can achieve the former semantics, albeit in a more laborious way, by marking all functions in the trait as unsafe. So I think that a trait's safety contract should be defined by what the trait guarantees about a type when used in generic bounds.

Which, in turn, means that I cannot think of an example in which a trait would be unsafe(pre), in the sense that adding this trait bound to generic code would introduce a risk of memory/type/thread unsafety if some conditions are not met.

“unsafe(post) function” and “unsafe(post) types” exist as methods and associated types of unsafe traits.

In your opinion, what would an unsafe(post) type be?

“unsafe(post)” is meaningless for freestanding items since their implementation can’t change due to backwards compatibility.

However, it’s true that functions that are called by unsafe code can cause memory-unsafety if their functionality is non-compatibly broken, but that’s more of a property of the calling unsafe code. Requiring “unsafe(post)” on the callee seems inappropriate, since unsafe code may in theory want to rely on any behavior of safe code, so there is no way to figure out whether a function should be marked “unsafe(post)”

This is, in my opinion at least, a failure of the current unsafe semantics which this proposal aims to address.

It is frequently stated in the Rust community that the safety of unsafe code cannot rely upon the correctness of safe code. For example, the safety of a HashTable cannot rely on the key type's implementations of Hash and Eq being correct. However, what this negative statement fails to accurately convey is that as a result there is actually remarkably little that unsafe code is allowed to rely upon.

By the above standards, it is nearly impossible to safely use any kind of abstraction (from functions to generics) in unsafe code. In doing so, one must painfully attempt to mentally step through all the ways the abstractions could be incorrect (an exercise which the human mind is notoriously bad at), before hopefully convincing oneself that they have been relatively well accounted for. That is, until a sufficiently broken implementation of the abstraction surprises us by showing that yes, we have still not accounted for every possible bug.

Because unsafe code cannot rely on any abstraction, except possibly the ones that were defined in the same crate and by the same author (and even then...), it is very difficult to "scale up" unsafe codebases to complex tasks. Some will argue that this is generally a good thing, as unsafe code is dangerous and should be used very sparingly and in extremely localized regions of code. However, in practice, crossing the boundary between safe and unsafe code often comes with run-time validation costs, so whenever unsafe is used for performance reasons (such as in the process of building containers or high-performance algorithmic primitives), keeping its usage at a very fine granularity may not be viable.

The notion of unsafe(post) aims to address this lack of scalability of unsafe codebases by introducing the notion of safety-critical contracts. Because Rust's abstraction vocabulary is not (yet?) rich enough to express such a contract in code, it is to be expressed in comments, like all other contracts that pertain to unsafe code in Rust. But by formulating this contract, an abstraction author states "I recognize that a certain property is critical to memory safety, and I promise to guarantee this property for the entire lifetime of this abstraction, subjecting my code to the same level of scrutiny as any other safety-critical ("unsafe") code".

unsafe(post) is therefore a commitment that restricts an abstraction's implementation in order to allow it to be used in more contexts, much like const fn is.

Having the syntax specify “unsafe(pre)” and “unsafe(post)” would have been better, but changing it now might not be really worth it.

Well, this is the question which I am trying to answer in this pre-RFC thread. If we were in the pre-rust 1.0 days, then it would be clear to me that clarifying unsafe semantics is called for. Today, where it involves some amount of churn for the Rust ecosystem, I need to prove that the benefits outweigh the cost. Since this is a trade-off which, as the person proposing a change, I come ill-prepared to analyze alone, I am looking for external points of view on this matter.

Regarding “design-by-contract”, the proper solution is to add dependent types to Rust, so that pre and post conditions can be encoded as values of dependent types, and they can be statically checked.

While I would love stronger support for design-by-contract in Rust, allowing desired properties to be expressed in code, I think that it would not fulfill all of the use cases of this feature, for the following reasons:

  • Some safety-critical properties cannot be easily expressed in code. Think about the "Do not create two &mut-references to the contents of an UnsafeCell" rule, for example: considering that the client gets a *mut pointer and is allowed to do whatever it likes with it, how should this postcondition be expressed?
  • Some safety-critical properties cannot be checked statically and are too expensive to check dynamically. Think about "Integers used to index slices with get_unchecked() are in range", for example: the very reason why get_unchecked() exists is that dynamic bound checks are sometimes necessary to guarantee safety, but too expensive for indexing-intensive code.

Conversely, I understand that dependent types would be a much larger addition to the language than what this proposal suggests, which would need much stronger motivation, much deeper analysis of the problem and use cases, and would in general have much weaker chances of being accepted. In contrast, this proposal solves a well-defined and known problem in a minimal way, which I think has been an important characteristic of a good RFC ever since Rust has stabilized.


While I tend to agree that clarifying the safety semantics of Rust code is a worthwhile goal, I'm not convinced that any of this discussion or these proposals actually makes meaningful progress towards that goal. I'm really not certain what is missing from the discussion, but I just don't feel like anything that has been said does anything to illuminate the issue for the average developer who might be coming to Rust from another language.

I will say, there seems to be a number of issues being conflated. The idea of “Design by Contract” is not necessarily applicable only to the notion of “Safety” (in the Rust, “Avoid UB at all costs” sense). “Design by Contract” seems to me to have a much larger scope and discussing it too much within the confines of clarifying “Safety” (again, in the sense of “UB”) is too much “Scope Creep” (for an RFC like this).

I think how this whole subject should be approached (first) is instead of trying to nail down what “Safe/Unsafe” means, why not try enumerating all the things you can do in “unsafe code” and listing things that would be appropriate to check by a wrapper and things that would be more appropriate to have enforced elsewhere. Then, try to organize them into categories, and only then, see if we can’t nail down a better definition of Safe vs. Unsafe and how it relates to the categories identified.

Again, the whole thing just seems kind of misty. I feel about unsafe the way I feel about porn, I can’t really define it, but, I know it when I see it.

EDIT: To clarify further:

Regarding enumerating of examples. What I mean by this is to get specific. For example, let’s talk about and list all the ways that pointer dereferencing can result in UB and then let’s try to enumerate strategies for preventing that sort of abuse. Same for calling FFI. Same for calling “unsafe” functions. Same for implementing “unsafe” traits. Same for accessing static mutable variables. Let’s try to enumerate all the ways each of these things can result in bad things happening, and then try to list mitigations for each of those. Then try classifying/categorizing the kinds of bad things and the categories of mitigation.

Perhaps this has already been done in the literature and that is how Rust came to some of the solutions it has come to. If that is the case, chalk (see what I did there) it up to my ignorance.

If nothing else such a list and categorization might be helpful as an advance chapter in the Nomicon that would be better than any sort of definition in helping those new to Rust on how to understand what “unsafe” really means.


Well, that's quite an exaggeration. What's hard for one person might be trivial for another. The unsafe keyword is, in my opinion, the smallest ergonomic problem in today's Rust. Adding pre and post won't suddenly enlighten all the programmers coming from high-level languages. It seems to me that you are significantly overestimating the problem.

If there is a problem at all — I don't feel at all that the current unsafe design has any clarity issues. Again, inevitably, there will always be newcomers who find certain concepts hard. This cannot possibly be fixed by just piling more syntax on top of what we already have — they will have to invest significant effort into learning new concepts such as unsafe code either way, and I would argue that adding more complexity to the language wouldn't help, conversely, it would impede their learning experience.


I think part of our opinion divergence here lies in the fact that we are not looking at the same side of the issue.

From my understanding, your comment asks for a guide to all the existing uses of unsafe constructs in the Rust language and standard library. What can go wrong, how to avoid it, together with some attempts at organizing these problem/solution pairs.

In contrast, this pre-RFC is about the tools used by Rust developers to create new unsafe abstractions. Basically, any time the unsafe keyword is used in someone’s code for any other purpose than introducing an unsafe block.

The former issue is, as you say, largely a documentation issue, which I believe is targeted by many existing projects. The latter reaches beyond that, in the sense that Rust’s current vocabulary for building unsafe abstractions is unnecessarily ambiguous and confusing even when you are familiar with it, due to the way it mixes together unrelated notions such as…

  • Unsafe blocks (“where bad things can happen”)
  • Unsafe preconditions (“what you need to do to prevent it”)
  • Unsafe postconditions (“how we help you at the task”)

Addressing this language-side issue is what this pre-RFC is about. From this perspective, I believe that what you are discussing is not in conflict with what I am proposing, but rather an important orthogonal issue.

A limited form of dependent types are already being added to the language, see const generics.

Dependent types have great power / complexity ratio.


I would not want to engage in a detailed comparison of Rust's ergonomic issues, but considering the amount of time and discussions it takes even highly experienced Rust developers to figure out the basics of what can and cannot be done with pointers and references in unsafe code, I think we can safely (no pun intended) say that unsafe semantics are not trivial for anybody :slight_smile:

This is why we have many projects which aim at clarifying these semantics, to which this pre-RFC is only one contribution in an area that was IMO insufficiently explored so far: the role of language semantics and ergonomics in unsafe abstraction design.

Adding pre and post won’t suddenly enlighten all the programmers coming from high-level languages.

...and thankfully, this is not the goal of this pre-RFC. Which is an attempt at clarifying the semantics of the tools that are used to build unsafe abstractions, a rather advanced Rust task which I do not expect complete beginners to engage into.

It seems to me that you are significantly overestimating the problem.

That might be the case. However, the amount of people who are constantly confused about what is perhaps the most dangerous feature of the Rust language worries me.

In the world of life-threatening systems, the more dangerous a tool is, the harder you make it to use it incorrectly. Knives have a handle shaped in such a way that it is pretty clear how you are supposed to pick them up. And power tools are similarly designed in such a way that you are unlikely to have your hand on the cutting bit when you turn them on.

Now, contrast this with Rust's unsafe abstraction vocabulary:

  • Unsafe Rust should never trust Safe Rust's abstractions, yet there is no safeguard in place to lint against reaching for these, in spite of this being a likely habit of the typical Rust programmer (who writes a lot more safe code than unsafe code).
  • A trait's unsafe associated methods are, in current usage, solely about preconditions. All this use of "unsafe" states is that the method's implementation can rely on a certain property being true for memory safety, but is not obliged to. Yet the method's entire implementation is automatically turned into an unsafe block, silently enabling all kinds of unsafe operations without any compiler warning/lint.
  • TrustedLen is another example of unsafe abstraction design gone wrong. This trait silently changes the semantics of another trait's methods in a way that affects memory safety. Concretely, what this means is that the only thing keeping your unsafe code from doing something dangerous is a single trait bound. If you forget that "TrustedLen" bound anywhere, then you will open a memory safety hole without any warning. This would not have happened if, in an example of more principled abstraction design, the trusted length was provided as a method that is specific to the TrustedLen trait, which in turn would have required a notion akin to the proposed unsafe(post) fn.
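The alternative design hinted at in the last bullet could be sketched as follows (all names are hypothetical; `count` here stands in for the proposed unsafe(post) fn, whose postcondition today can only live in a comment):

```rust
// Sketch: the trusted count is a dedicated method of the unsafe trait, so
// unsafe code must name the trait to obtain it; forgetting the bound becomes
// a compile error rather than a silent safety hole.
unsafe trait TrustedCount {
    // Unsafe postcondition: returns exactly the number of elements that
    // fill_ptr() writes. Under the proposal: `unsafe(post) fn count(...)`.
    fn count(&self) -> usize;

    // Unsafe precondition: `out` must point to at least count() writable u32s.
    unsafe fn fill_ptr(&self, out: *mut u32);
}

struct Three;

// SAFETY: count() matches exactly what fill_ptr() writes.
unsafe impl TrustedCount for Three {
    fn count(&self) -> usize {
        3
    }
    unsafe fn fill_ptr(&self, out: *mut u32) {
        for (i, x) in [7u32, 8, 9].iter().enumerate() {
            unsafe { out.add(i).write(*x) };
        }
    }
}

fn collect<T: TrustedCount>(src: &T) -> Vec<u32> {
    let n = src.count();
    let mut v: Vec<u32> = Vec::with_capacity(n);
    // SAFETY: the buffer has capacity for n elements, satisfying fill_ptr's
    // precondition; set_len(n) relies on count()'s postcondition that exactly
    // n elements were initialized.
    unsafe {
        src.fill_ptr(v.as_mut_ptr());
        v.set_len(n);
    }
    v
}
```

In this shape, the safety-critical number flows through an explicitly marked channel instead of silently re-interpreting another trait's method.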

It seems to me that there is definitely quite a lot to be done to make unsafe Rust abstractions as easy to use correctly as real-world sharp tools are :slight_smile: And this pre-RFC proposes to start by clarifying what an unsafe abstraction is, and separating its inner components cleanly instead of the ambiguous soup that we currently have where the "unsafe" word is used to mean so many different things.

See above for some examples of highly perfectible unsafe design that you can play with today :slight_smile: As with @gbutler, I think I should again clarify that I personally see two "layers" of unsafe users:

  • People who use the unsafe constructs provided by the Rust language and its standard library. This happens relatively early in the process of learning Rust, for example the first time one tries to optimize array manipulations or do FFI. Doing this only requires unsafe blocks. I think this is the most mature part of the unsafe ecosystem today, though as @gbutler points out it could certainly use better documentation.
  • People who, in addition, try to build unsafe abstractions which other developers (including future maintainers of the same crate) can rely upon. This is a more advanced topic, where one gets to invoke the other meanings of the unsafe keyword (unsafe fns, unsafe traits, unsafe impls...). These are the uses of the unsafe keyword which I think have unnecessarily unclear semantics, and could benefit from some language-level clarification.

I probably need to reword the pre-RFC in a way which makes it much clearer that I am catering to the latter population here. My wording is obviously not yet clear enough on this front.
