[Pre-RFC] Another take at clarifying `unsafe` semantics


List of major changes since this was first published:

  • 2018-03-14 @ 22:06 CET – Integrated feedback from @mark-i-m and @eternaleye
  • 2018-03-15 @ 21:37 CET – Integrated some of the feedback from @CAD97.

The many meanings of the “unsafe” keyword in Rust unnecessarily reduce its ergonomics and learnability. That point is regularly brought up, two recent examples on this forum being “Ambiguity of “unsafe” causes confusion in understanding” and “What does unsafe mean”. Other threads of discussion also exist in other avenues, including RFC discussions and users forum posts.

So far, most discussions of this problem have stalled, and my understanding is that it is because they ended up with proposals which introduce excessive complexity or churn in the Rust ecosystem compared to their perceived usefulness. For this reason, I would like to propose here a purposely minimal and iterative syntax evolution, which I think would better fit Rust’s stability-focused evolution process, while still fulfilling the intended goals of a clarification of unsafe abstraction semantics.

Current uses of the unsafe keyword

The unsafe keyword is used in several different circumstances, which I believe can all be narrowed down to two fundamental use cases:

  • Unsafe blocks are regions of code where one can take actions that violate memory/type/thread safety, and becomes responsible for maintaining these properties.
  • Unsafe abstractions are abstractions that rely on and/or provide safety-critical guarantees which are not (and usually cannot be) verified by the Rust compiler.

I believe that of those, unsafe blocks are by far the best-understood use case of unsafe in Rust, and can safely (pun intended) remain the way they are, whereas unsafe abstractions is an area that could use improvements. In the remainder of this post, I will thus focus the discussion on unsafe abstractions.

Much has been written about unsafe abstractions in the past, by illustrious minds such as @nikomatsakis and @withoutboats for example. I will not fully repeat these discussions here, but it is sufficient to state that they all revolve around the idea of an unsafe contract which is essential to memory, type and/or thread safety but cannot be checked by the compiler.

More specifically, I think that two ingredients from the theory of design-by-contract are critical to the correct building and understanding of unsafe abstractions:

  • Preconditions are properties which the user of an abstraction must guarantee in order for the abstraction to behave as intended.
  • Postconditions are properties which are guaranteed by the abstraction, and can be relied upon by the user of said abstraction.

The theory of design-by-contract also introduces the third notion of an invariant, which is not currently covered by this proposal, but can be introduced in the future if needed. This notion, and the motivation for not including it in this proposal, will be discussed further in the “Possible extensions” section.
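
To make the distinction concrete, here is a small sketch in today’s Rust (the function names are made up for this illustration, not proposed API): `read_unchecked` states an unsafe precondition which the caller must uphold, while `last_element` relies internally on an in-bounds guarantee of the kind that unsafe(post) would let us surface in a signature.

```rust
/// Precondition: the caller must guarantee `index < slice.len()`.
unsafe fn read_unchecked(slice: &[u8], index: usize) -> u8 {
    // Sound only thanks to the caller-supplied precondition.
    unsafe { *slice.get_unchecked(index) }
}

/// Postcondition (informal): if `Some(x)` is returned, `x` really was
/// the last element; the in-bounds reasoning is checked here, once.
fn last_element(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        None
    } else {
        // Safety: `v.len() - 1` is in bounds because `v` is non-empty.
        Some(unsafe { *v.get_unchecked(v.len() - 1) })
    }
}
```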

High-level syntax proposal and transition plan

Coming from the above world view about the dual semantics of unsafe, two shortcomings of the existing unsafe abstraction syntax emerge immediately:

  • It fails to strongly expose the notion of an underlying software contract, by being syntactically too close to an unsafe block, whose semantics are merely “trust me”.
  • It does not mark a clear distinction between preconditions and postconditions, whose roles in an unsafe codebase are, however, fundamentally different.

The way I propose to address these two shortcomings is to add two modifiers to the “unsafe” keyword, “pre” and “post”, which can be used to clarify that the use of “unsafe” is about introducing unsafe preconditions and postconditions in abstractions. Using functions as an example, these modifiers could use the following syntax:

// This function is only safe if some input preconditions are met
unsafe(pre) fn i_trust_you(x: In) { /* ... */ }

// This function guarantees some safety-critical output postconditions
unsafe(post) fn you_trust_me() -> Out { /* ... */ }

// This function has both unsafe preconditions and unsafe postconditions
unsafe(pre, post) fn we_are_bound_contractually(x: In) -> Out { /* ... */ }

As I will detail further down this post, these modified versions of unsafe aimed at abstraction design will have slightly different semantics with respect to the current “unsafe-for-abstractions” syntax, leveraging the added pre-/postcondition clarification in order to more closely reflect the abstraction author’s intent.

If that functionality is found to be useful, accepted and implemented, then a progressive rollout of the new syntax can easily be envisioned:

  • Start by allowing usage of the modified forms of unsafe in abstractions
  • In the next Rust edition, lint against usage of unmodified “unsafe” in abstractions
  • In the Rust edition after that, make it an error to use unmodified “unsafe”

Detailed design

Unsafe functions

As mentioned above, unsafe functions and methods can have unsafe preconditions, postconditions, or both. Adding these modifiers alters their semantics as follows:

  • unsafe(pre) functions can cause unsafety if some manually specified preconditions are not met. Therefore, they can only be called from unsafe code blocks. This is the most widespread use of unsafe functions today.
  • unsafe(post) functions provide guarantees which can be relied upon by unsafe code. Therefore, they can only return to their caller inside of an unsafe code block. These functions can safely be used when collecting information that is to be passed to an unsafe(pre) function, such as [T]::get_unchecked().
  • unsafe(pre, post) functions combine both of these semantics: they can cause unsafety if mishandled, and provide output which can be relied upon by unsafe code. A possible example would be a function which receives as a parameter a *const Vec<T> which is assumed to be valid and returns its length, which is then used for unchecked indexing.
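
As an illustration, the last example could look as follows in today’s Rust (the function name is made up for this sketch); under this proposal its signature would read unsafe(pre, post) fn instead.

```rust
// Sketch only: relies on the caller for pointer validity (the unsafe
// precondition) and promises a length which unchecked indexing may
// then trust (the unsafe postcondition).
unsafe fn vec_len_unchecked<T>(v: *const Vec<T>) -> usize {
    // Safety: the caller guarantees `v` points to a live, valid Vec.
    unsafe { (*v).len() }
}
```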

Contrary to current unsafe fns, these modified forms of unsafe do not implicitly make the body of the function an unsafe block. The rationale for this is that only the author of the function knows which parts of the function, if any, are unsafe.

In fact, since it is an API-breaking change to add preconditions, we could well imagine abstraction designers proactively adding unsafe preconditions to their functions, but not using them right away, starting with a safe implementation with the intent of moving to an unsafe one later on. No “accidental unsafety” should result from this API choice, and removing the unsafe precondition later on if it does not prove useful should not be a breaking change. Conversely, adding unsafe postconditions to an existing function should not be a breaking change for users either, as it only makes the function safer to use.

These last two statements reflect an important rule of design-by-contract, which is the Liskov Substitution Principle. That principle can be spelled out in natural language as follows:

It is okay for an abstraction’s implementation to require weaker preconditions than advertised on the outside, or to guarantee stronger postconditions than advertised on the outside. But requiring stronger preconditions or guaranteeing weaker postconditions than advertised is a violation of the abstraction’s contract, and changing a published abstraction’s contract in that direction is thus a breaking API change.

Unsafe traits

Unsafe traits go one step beyond unsafe functions by defining a contract which any implementation of that trait must uphold. Unsafe traits are matched with unsafe impls using the same unsafe modifiers, which are essentially a legally binding statement from the person that is implementing the trait:

I have read and understood the unsafe preconditions and postconditions of this trait. In accordance with the Liskov Substitution Principle, I promise not to rely on any further unsafe precondition, nor to violate any expected unsafe postcondition. I am allowed, however, to have weaker preconditions or stronger postconditions.

Like unsafe functions, implementations of an unsafe trait follow an unsafe contract: anyone who uses an implementation of an unsafe trait that has preconditions must follow those preconditions or face memory/type/thread unsafety. And anyone who uses an implementation of an unsafe trait that has postconditions can rely on them in safety-critical code.

However, this raises the immediate question of what “using a trait” means. As will be discussed in the “Unresolved questions” section, this nut actually proved harder to crack than expected. But after multiple iterations around this concept, the author of this Pre-RFC is now leaning towards the following definition:

A trait is used by adding that trait’s name to a list of trait bounds.

This definition matches the three existing examples of stabilized unsafe traits in the Rust standard library, namely TrustedLen, Send, and Sync, along with a fourth trait which also has safety-critical postconditions, but is not unsafe because a Rust programmer is not allowed by the compiler to implement it incorrectly: Copy.

All of these traits have the following properties in common:

  • They are all marker traits, which do not provide any methods, and are used by being added to a set of trait bounds in generic code.
  • They all guarantee properties which unsafe code is allowed to rely upon, and which could be violated by a careless implementor, resulting in unsafety.
  • It is always safe to use them, they do not have safety-critical preconditions.

From these observations, I would tentatively reach the following conclusions:

  • All of these traits should be marked as unsafe(post).
  • Implementing them should require an unsafe(post) or unsafe marker (see below), as a statement by the implementor that the trait’s postconditions have been upheld.
  • We do not need unsafe(pre) traits at this point in time, only unsafe(post) ones, and the first implementation of this pre-RFC should thus only provide the latter.
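
For reference, here is what this contract looks like with today’s syntax. The type and its claimed locking discipline are made up for the sketch, and the actual locking logic is omitted; in real code, that logic is what makes the claim true.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::AtomicBool;

// Skeleton of a type whose author vouches, via `unsafe impl`, for
// Sync's safety-critical postcondition (cross-thread sharing is sound).
struct SpinLocked<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

// Safety (claimed by the author, not checked by the compiler):
// every access to `value` is gated by the `locked` flag.
unsafe impl<T: Send> Sync for SpinLocked<T> {}

// Generic code "uses the trait" by adding it to a set of trait bounds.
fn assert_shareable<T: Sync>(_: &T) {}
```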

Here are some syntax mockups of what unsafe(post) traits could look like:

unsafe(post) trait SafetyCriticalProperty {}

// Syntax option 1: Impls also use the "post" modifier. The rationale
//                  for this syntax is that it is consistent with the
//                  trait's declaration.
unsafe(post) impl SafetyCriticalProperty for MySafeType {}

// Syntax option 2: Impls do not use the "post" modifier. The rationale
//                  for this syntax is that it is consistent with unsafe
//                  blocks, which are about _doing_ something unsafe, and
//                  we are indeed doing something unsafe by implementing
//                  an unsafe(post) trait for a type.
unsafe impl SafetyCriticalProperty for MySafeType {}

Finally, like any trait, unsafe(post) traits can have methods. These methods can rely on the host trait’s postconditions, but if they play a critical role in ensuring said postconditions, then they should be marked as unsafe(post) as well to clarify this point to implementors and users. Therefore, and in contrast with previous versions of this proposal, I am no longer proposing that methods of an unsafe(post) trait be implicitly marked as unsafe(post).

Unsafe associated methods

Like all methods, a trait’s associated methods can be unsafe(pre) or unsafe(post). In the context of trait definitions, this sets a precondition which every implementation of that method can rely on, or a postcondition which every implementation of that method must guarantee.

For example, let us imagine an alternate universe in which the way TrustedLen modifies the semantics of Iterator::size_hint(), by adding an unsafe postcondition to it, had been frowned upon as “spooky unsafe action at a distance”. How could we have handled that differently? One possibility would have been to explicitly add a new Iterator method which upholds the required unsafe contract:

// There is really nothing unsafe about implementing this trait...
trait TrustedLen: Iterator {
    // ...as long as this dangerous method is correctly implemented.
    // (Because it is dangerous, we need an unsafe block to do so)
    unsafe(post) fn len(&self) -> usize;

    // Unlike len(), this method promises nothing and is thus not unsafe.
    // Reimplementing it as (len % 3) == 0, although evil, should thus not
    // mislead TrustedLen users into performing unsafe actions.
    fn len_is_even(&self) -> bool { (self.len() % 2) == 0 }
}

Possible extensions

Unsafe invariants

In addition to preconditions and postconditions, design-by-contract introduces a third concept called invariants. Invariants are properties which are expected to remain true throughout the entire duration for which an abstraction is used.

Contrary to what was stated by a previous version of this proposal, invariants cannot be fully reduced to a precondition/postcondition pair which applies to each abstraction entry point, because such a reduction only makes a statement about properties which must hold true before and after the time where an abstraction is used, and not during that time. The difference between an invariant and a precondition/postcondition pair can be observed if an abstraction is concurrently used by multiple pieces of code, for example in the case where a Sync type with unsafe methods is shared by multiple threads.
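
A small sketch of that difference (all names made up for illustration): an entry-time check on interior-mutable state is only a snapshot, not an invariant, so it can be invalidated while a method is still running if the state is reachable from elsewhere.

```rust
use std::cell::Cell;

// `idx` is reachable through `&self` via interior mutability, so a
// check at method entry is a precondition check, not an invariant.
struct Cursor<'a> {
    data: &'a [u8],
    idx: Cell<usize>,
}

impl<'a> Cursor<'a> {
    fn current(&self) -> u8 {
        let i = self.idx.get(); // snapshot taken once, at entry
        assert!(i < self.data.len());
        // Single-threaded, the snapshot saves us; with sharing (e.g. a
        // hypothetical Sync variant), only a true invariant would.
        self.data[i]
    }
}
```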

I do not currently see a benefit in writing down the notion of an unsafe invariant in Rust code. It does not seem to introduce new constraints on the way rustc should process unsafe abstractions with respect to unsafe(pre, post), because it makes a statement about properties which must be true during the time where an abstraction is used, and Rust has no syntax for that. So in my view, this can probably remain a concept that is solely exposed through manually written documentation, at least initially.

However, should I be proven wrong, the proposed syntax could naturally be extended to encompass this concept by adding an unsafe(invariant) modifier, which implies unsafe(pre, post) and extends it with a third notion of the unsafe property’s validity extending throughout the entire duration for which the unsafe abstraction is used.

Unsafe types

Much like unsafe functions can exist in isolation, one could envision allowing for “unsafe types”, which follow an unsafe contract like an implementation of an unsafe trait.

// No matter how the definition of this type changes in the future,
// we guarantee that it will be forever possible to transmute pointers
// to Sized types into this type.
unsafe(post) struct OpaqueSizedPtr(usize);

In this case, every method of an unsafe(pre|post) type should probably be implicitly declared unsafe(pre|post), because it is subject to the type-wide preconditions and postconditions.

However, I would advise against requiring unsafe(pre|post) on the impl blocks of unsafe types for two reasons:

  • Not every type definition comes with an impl block. For example, type A = B doesn’t.
  • Impl blocks are allowed to target multiple types in Rust, and authors of blanket impls should not need to consider whether their impls may cover unsafe types or not.

Because of this required syntax inconsistency, and because a need for unsafe types has not emerged so far in Rust’s history, I would personally be against allowing for unsafe types unless a strong rationale for them is provided.

At the same time, however, I can see how they would “close the loop” of unsafe abstractions by allowing every Rust abstraction to have an unsafe contract attached to it. So I could be convinced that they are a worthy addition. Should they turn out to be wanted, the above mockup should be enough proof that the proposed syntax is flexible enough to allow for them.

unsafe(pre) traits

Although that could be considered an overall improvement to its consistency, this proposal does not suggest allowing for unsafe(pre) traits initially, for the following reasons:

  • The need for them has never arisen so far.
  • They would have remarkably confusing syntax and semantics.
  • Their simplest use cases seem to be fulfilled by unsafe(post) trait bounds.

According to the definition that was given above, an unsafe(pre) trait would be a trait for which the very action of adding that trait as a trait bound to generic code could cause memory/type/thread-unsafety if some safety-critical preconditions are not met.

It is very hard to envision a situation in which this would be the case. And in fact, we do not even have syntax for this as of today (unsafe markers in trait bounds, anyone?). What we do have today, however, is a way for a trait to rely on safety-critical preconditions from the underlying type, and that is actually simple: just put a trait bound on an unsafe(post) trait which guarantees said preconditions as a postcondition:

trait ImplMustBeSync: Sync {}

Given these considerations, I cannot think of any circumstance in which we would want to add unsafe(pre) traits to our unsafe abstraction vocabulary. But should they turn out to be actually needed someday, the syntax for them can again easily be made available.
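
In today’s Rust, this pattern is already expressible; the trait below is made up for illustration. The Sync supertrait bound delivers the safety-critical guarantee as an unsafe postcondition, so the dependent trait itself never needs to be unsafe.

```rust
// The compiler rejects any impl of this trait for a type that is not
// Sync, so implementors get Sync's postcondition "for free".
trait ShareAcrossThreads: Sync {
    fn tag(&self) -> &'static str {
        "shareable"
    }
}

// Fine: u32 is Sync. A type holding e.g. a raw Rc would be rejected.
impl ShareAcrossThreads for u32 {}
```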

Extended static analysis of unsafe contracts

The clarification of unsafe contracts that is proposed by this pre-RFC could be leveraged to build static code analyzers which lint some suspicious constructs in unsafe code.

The simplest form of analysis that could be done would be to assert that the documentation of an unsafe abstraction features a “Safety” section, in which abstraction authors are expected to precisely state the unsafe preconditions and postconditions which the abstraction builds upon.

Going one step further, more elaborate static analysis could also be used to assert that any input which is fed into an unsafe(pre) abstraction emerges from an unsafe block or from an unsafe(post) abstraction. Following this discipline would arguably smooth out the interaction between unsafe blocks and unsafe(post) contracts. However, there are some roadblocks that would need to be overcome before this dream may come true:

  • There should be a way to annotate individual inputs of unsafe abstractions in order to clarify which of these inputs are concerned by the unsafe preconditions and which aren’t. Otherwise, false positives would emerge from incorrect linting of information that is passed into non-safety-critical inputs.
  • Unsafe traits like TrustedLen which turn a non-unsafe(post) function into an unsafe(post) one would need to be deprecated, as these would require manual rule injections in the static analyzer in order to be properly handled.

Unresolved questions

Exact meaning of “using a trait”

During the development of this pre-RFC, pinning a precise meaning on the expression “using a trait” has turned out to be surprisingly difficult. Successive versions of this RFC started by associating using a trait with using its public interfaces (associated fns/consts/types/fields…), then began to acknowledge the importance of trait bounds, before the current definition of “using a trait is adding it to a set of trait bounds” was proposed.

The difficulty of pinpointing this meaning should be considered by reviewers of this pre-RFC as a warning sign that this part of the pre-RFC is not very solid yet, and reviewers are thus strongly encouraged to investigate this matter on their own, come up with alternate definitions of this sentence, and propose them to the author. A more refined understanding of traits as an unsafe abstraction will likely result from this process.

Prior art

Design-by-contract has a long history. Its notions were first popularized by Eiffel, and then remained with us throughout the subsequent history of programming, being for example used in many modern formulations of the Liskov Substitution Principle.

One particular use of design-by-contract that was strongly influential on the author of this proposal was the integration of contracts in the Ada language as part of its Ada 2012 revision, and its justification in the accompanying design rationale document from John Barnes.

Formalizing Rust’s unsafe code in terms of software contracts also has a long and fruitful history, of which the links above only identify the most recent events. Previous discussions have already led to proposals introducing more contract-centric semantics to unsafe abstraction building blocks, though none of these made it to the implementation stage so far.


Alternatives

  1. Do nothing. Unsafe abstractions will remain confusing to implement and review, making unsafe Rust hard to learn and causing memory, type and thread safety issues that could have been avoided by improving unsafe code ergonomics.
  2. Turn this minimal proposal into something bigger, such as a fully generic effects system or an in-depth integration of design-by-contract into the Rust language. Prior discussions have shown that this is not perceived to be a reasonable price for a clarification of Rust’s unsafe abstraction semantics, and is likely to be rejected. Therefore, this path seems to eventually lead to the “do nothing” alternative.


Conclusions

This pre-RFC proposes some syntax and semantics additions which have, I think, the potential to make the “unsafe” keyword much less confusing when used for the purpose of building unsafe abstractions.

The proposed changes are designed to be minimal, and to allow for coexistence with existing code and iterative migration of said code. No existing semantics of “unsafe” are changed; all changes are purely additive and only meant to be ultimately enforced via the progressive linting process that was also discussed in the context of dyn Trait.

By implementing these changes, we can cleanly separate the currently confusing use of unsafe into three orthogonal concepts:

  • Unsafe code, which is allowed to break safety and responsible for not doing so
  • Unsafe preconditions, which must be upheld to guarantee the safety of an abstraction
  • Unsafe postconditions, which unsafe code is allowed to rely upon from an abstraction

Only the latter two uses of unsafe require (long-term) syntax and semantics changes in this proposal, and I believe that they are the minority of unsafe usage in current code, with unsafe blocks dominating by far. Moreover, the proposed semantics changes generally go towards expressing concepts which unsafe abstraction authors could previously only state in more confusing ways. So I think that they should be received positively overall.

By clarifying the meaning of “unsafe” in an abstraction context, new forms of unsafe abstractions are also naturally enabled (such as freestanding unsafe(post) functions), which should enrich the abstraction vocabulary of the unsafe Rust programmer.

So, what do you all think about this idea? Is there something important which I overlooked? A problem which I did not envision? Something that I should change? Or should I proceed towards turning this idea into an actual RFC?


I am pretty sure this is the same mistake I was making and is addressed by @eternaleye here (see the code sample in that post). The key is that an invariant should be true throughout the whole execution, whereas a pre/post condition does not have to be.

While I agree with the overall motivation, I respectfully disagree that this is the right approach… Rather, I think that the right approach is to do a better job at teaching about unsafe. When I was learning Rust, I was never unclear about when to use an unsafe block. In fact, I was never confused about unsafe until I had to start designing interfaces that use unsafe. At that point the question arises: “Should interface X be unsafe?”. Perhaps there is a syntactic change that would make that more clear, but I don’t know what it is.


You missed a beautiful opportunity here:

// This function has both unsafe preconditions and unsafe postconditions
unsafe(pre, post) fn we_are_bound_contractually(x: Input) -> Output { /* ... */ }

But also, yes, invariants cannot be built from preconditions and postconditions alone in the presence of any form of concurrency over state, including “calling other functions”.

EDIT: I feel I better explained why this is the case in this Reddit thread - I’m /u/vitnokanla there, as eternaleye was already taken.


Ah yes, you are both right! That’s an especially silly mistake for me to make, since I’m a heavy writer of concurrent data structures, and keeping invariants upheld at all times is a big part of that. I will update the text accordingly later today :slight_smile:


One other thing to note - the Sync trait has no methods. As a result, your equivalence relation between an unsafe(pre) trait and a safe trait with all methods marked unsafe(pre) falls apart, as it would allow a safe impl of Sync.

In addition, your equivalence relation is already broken: you describe modeling “additional” preconditions with using both markers, but if desugared using the equivalence, this loses information.


That is certainly a possible approach, and if I end up translating this to an actual RFC, I will obviously add a “do nothing” alternative.

At the same time, I can’t help noticing how you tripped up on the exact same thing which I would like to improve, namely “unsafe” having an unclear meaning in interfaces. With this proposal, the role of unsafe in interfaces would be clearer, as it would be spelled out in terms of a software contract. You need unsafe(pre) if your safety relies on manually checked properties, and you need unsafe(post) if you are making a statement that others can rely on you for safety.

Teaching can always be used as a band-aid for low-usability language areas, but sometimes improving the language’s clarity can be a better long-term investment, if it does not come at too high a cost. I see a strong parallel with dyn Trait here: people often get tripped up by types and trait objects behaving differently since they have extremely similar syntax. Through syntactic clarifications, we may be able to eliminate this common confusion entirely in the long run.


As a minor nitpick, Sync is unsafe(post) (it’s implementors which are doing something unsafe by implementing it, not users which are doing something unsafe by using it), but I see what you are heading towards here.

Indeed, there should probably not be an equivalence here, but only an implication. If a trait is unsafe(pre/post), then it implies that every interface of that trait is unsafe(pre/post). But stronger semantics are attached by making the trait itself unsafe, which is what allows marker traits to work. Another thing that I should clarify in a future version. Gotta run for now :slight_smile:


@mark-i-m @eternaleye I have now integrated your feedback in the proposal. Compared to the version which you have read, the new revision includes:

  • An improved and deeper discussion of unsafe invariants.
  • A fully rewritten discussion of unsafe traits, which puts more focus on what implementing an unsafe trait entails and how these differ from traits with unsafe methods.
  • A dedicated section on traits with unsafe methods, what this means from a contractual point of view, how it differs from unsafe traits and unsafe functions.
  • A brand new section on possible extensions of this design, which can be envisioned but are not currently proposed as part of this Pre-RFC
  • A brand new section on some prior art which the author found inspiring (this takes a lot of time to collect, better start early).
  • Better code mockups (less redundancy, more jokes).
  • Many wording and stylistic fixes.

Can you have a look and tell me if you have any other feedback?

EDIT: More minor additions made since then:

  • Further wording tweaks (e.g. trait method -> associated methods)
  • More details on allowed unsafe fn contract changes (Liskov strikes again!)
  • Added a first take on an “Alternatives” section.


Can a trait even have an invariant that it, as a trait, requires to hold? I suppose maybe we could envision a world where Copy is an unsafe(invariant) that the type is always able to be cloned by copying its stack representation, or Send/Sync are because they always need to have that property, there’s no pre- or post- involved.

(Along those lines, is there any case for unsafe(pre) trait? I agree that it is conceptually different from unsafe(pre) on every fn, but how exactly can the trait as a whole have any preconditions?)

unsafe(invariant) might make sense for a method on a concrete type, where a publicly-settable member of the structure needs to be within some bounds for the method to be safe. pre is too weak: you can imagine a situation where the method is invoked with a safe state, then immediately the context switches to another thread which invalidates that precondition (if it’s based on interior mutability). But in practice, is there any case where this publicly-settable member being outside its safe range is not a logic error?

With that said, here’s a half-baked counter-proposal. Some of it could be dialed back to a more incremental approach; however, it all is included to paint what I see as the full picture. I’ve also always used unsafe(condition) for everything but unsafe { } for maximum explicitness but you could also theoretically not require the (condition) where only one condition may ever be present.

  • unsafe { /* code */ }: allows fulfillment of unsafe contracts, as today.

  • unsafe(pre) fn as described in the pRFC: the parameters meet some contract when the fn is called (beware interior mutability!)
  • unsafe(post) fn as described in the pRFC: the return value meets some contract that unsafe code may rely on for memory safety.
  • unsafe(invariant) fn: the parameters (this includes self) meet some contract for the entire execution duration of the fn. This is equivalent to unsafe(pre) modulo interior mutability.

  • unsafe(pre/post) trait: doesn’t seem to make sense: before and after what, exactly? They could imply unsafe(pre/post) on all inherent associated fn, but then I think that’s what they’d mainly be used for
  • unsafe(invariant) trait: the implementation of the trait itself means something about the concrete type. This is (I think) any unsafe marker trait that exists in the standard library today.
  • The same unsafe qualifier must be present on the implementation of the trait.

  • unsafe(pre/post) struct: ??? what would this mean ???
  • unsafe(invariant) struct: members of this struct must meet some invariant. This means that modifying them is unsafe. (You could also potentially apply unsafe(invariant) to individual members for better granularity.) impl blocks are still safe.

A dialed-back version of this would use unsafe(pre/post) for fn, and use bare unsafe for trait (and optionally struct) as unsafe(invariant). You could emulate unsafe(invariant) with unsafe(pre) in the face of interior mutability by taking by value or unique reference, or by making stronger non-aliasing guarantees one of the preconditions.
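
A sketch of that emulation in today’s Rust (names made up): taking `&mut` gives exclusive access, so an entry-time bounds check keeps holding for the whole call, mutable globals aside.

```rust
// `&mut` rules out concurrent invalidation of the checked property, so
// a precondition check at entry behaves like an invariant for the call.
fn with_exclusive(buf: &mut Vec<u8>, idx: usize) -> Option<u8> {
    if idx < buf.len() {
        // Safety: bound checked above, and exclusive access means no
        // one else can shrink `buf` while this call is running.
        Some(unsafe { *buf.get_unchecked(idx) })
    } else {
        None
    }
}
```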

EDIT(like five seconds after posting): I realize that this fake unsafe(invariant) doesn’t work in the face of mutable globals, because I like to pretend those don’t exist :upside_down_face:


One challenge which I have with unsafe traits is to put a precise definition on the notion of using a trait from the point of view of unsafe contracts. I could definitely use some help in this area. Here is where I got so far:

  • Invoking one of the trait’s associated methods? That clearly gets close to invoking an unsafe function in my eyes, and is the way I understand it in that RFC.
  • Somehow touching the trait’s other associated items, like associated types, consts, or maybe fields someday? I do not currently discuss that at all. I’m not sure if there is a use of these that can directly make or break an unsafe contract. Review from a more experienced language lawyer would definitely be much welcome here!
  • Using the trait bound itself as a marker of an unsafe contract? This is how existing unsafe marker traits like Sync and TrustedLen work, and all current examples are about setting postconditions. If someone can think of a case where the mere act of adding a trait bound relies on unsafe preconditions, then we would somehow need to put trait bounds in an unsafe block. That would easily get very messy, so I’m not sure we want to allow for it.
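Today's syntax already treats marker traits this way: implementing is unsafe, while adding the bound is not. Here is a small illustrative example; `ExactlyOne` is a made-up marker trait (not a std item, but in the spirit of TrustedLen) that promises the iterator yields exactly one item, which reads naturally as an unsafe postcondition.

```rust
// Marker trait carrying an unchecked promise: the iterator yields
// exactly one item. Implementing it is the unsafe step.
unsafe trait ExactlyOne: Iterator {}

struct Single(Option<u32>);

impl Iterator for Single {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.0.take()
    }
}

// The `unsafe` here marks the postcondition the implementor vouches for.
unsafe impl ExactlyOne for Single {}

// Safe generic code adds the bound without any unsafe block at all.
fn take_the_one<I: ExactlyOne<Item = u32>>(mut it: I) -> u32 {
    it.next().expect("ExactlyOne guarantees one item")
}

fn main() {
    println!("{}", take_the_one(Single(Some(42)))); // prints 42
}
```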

That last realization about what unsafe(pre) trait could mean to unsafe abstraction authors is disturbing. I’m not sure if I want to allow for it anymore. More generally, I should probably add an “unresolved questions” section to the proposal which spells out the troubles which I have with defining “using a trait” precisely.

From this mental perspective, I think Copy is again an unsafe postcondition. There is nothing unsafe about adding a Copy trait bound to your generic code, or about using it by “moving” values to copy them. What is unsafe is the act of implementing Copy for a type, because we are then stating a property which unsafe code can rely on.
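A small sketch of unsafe code relying on Copy's postcondition: duplicating a value with `ptr::read` is only sound because `T: Copy` guarantees the type has no destructor, so the two resulting copies cannot cause a double-free. The `duplicate` helper is illustrative, not a std function.

```rust
use std::ptr;

// Bitwise-duplicate a value. Sound only because of Copy's postcondition:
// Copy types have no Drop impl, so two independent copies are harmless.
fn duplicate<T: Copy>(x: &T) -> (T, T) {
    // Safety: T is Copy, so a bitwise read yields a valid, independent value.
    unsafe { (ptr::read(x), ptr::read(x)) }
}

fn main() {
    let (a, b) = duplicate(&7u8);
    println!("{} {}", a, b); // prints 7 7
}
```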

I would have a lot more to reply to your comments, but for now I’m out of time :expressionless:


I have published a new version of this proposal which integrates some of the above discussion. Here are the main changes:

  • Moved discussion of the Liskov Substitution Principle earlier, in the “Unsafe functions” section.
  • Rewrote the discussion of unsafe traits again, this time by understanding “using a trait” as “adding that trait to a list of trait bounds”. This turns out to dramatically simplify the discussion, while still correctly reflecting all current usage of unsafe traits in the stable portion of the standard library.
  • Removed unsafe(pre) traits from the proposal, and explained why in the “Possible extensions” section.
  • Added an alternate syntax proposal for impls of unsafe(post), and clarified what are the respective merits of both proposed syntaxes.
  • Removed requirement that methods of unsafe(post) traits be implicitly unsafe, which does not match the new definition of “using a trait” above.
  • Deeply rewrote the discussion of unsafe invariants in an attempt to clarify it.
  • Described a possible application of the proposed syntax clarifications to more in-depth static analysis of unsafe code, which would be able to lint against suspicious use of unsafe which is likely to be incorrect.
  • Added a discussion of the “using a trait” definition problem.


And now, back to the discussion with @CAD97 :slight_smile:

This does not match the definitions which are used in the current proposal.

The current proposal defines an unsafe abstraction to be an abstraction which introduces an unsafe contract between its implementor and its user. In the context of a trait, the implementor is the one who writes an impl block for that trait, and the user is the one who adds the trait to a list of trait bounds.

Building on top of this, the proposal states that an unsafe invariant is a property which is guaranteed to be true for the entire duration during which the abstraction is used. Which implies, in particular, that the invariant is both a precondition and a postcondition of the abstraction (though, as was discussed before, the reverse property does not hold).

This means, in turn, that it is incorrect to say that Copy and Send/Sync have unsafe invariants, because these marker traits do not have unsafe preconditions. There is no circumstance under which it would be unsafe to rely on a correct implementation of these traits. All they do is to provide extra guarantees which unsafe code can rely on, which matches my definition of an unsafe postcondition.
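A sketch of why Send reads as a postcondition under these definitions: the unsafe step is the `unsafe impl`, where the implementor vouches that moving the value across threads is sound; relying on the bound afterwards is entirely safe. `Handle` is an illustrative type, not a std item.

```rust
use std::marker::PhantomData;
use std::thread;

// A wrapper around a raw pointer. Raw pointers are !Send by default.
struct Handle(*const u8, PhantomData<u8>);

// The implementor takes responsibility for the postcondition: moving a
// Handle to another thread is sound. This is the only unsafe step.
unsafe impl Send for Handle {}

fn main() {
    let h = Handle(std::ptr::null(), PhantomData);
    // Safe code: thread::spawn requires Send, and the bound is satisfied
    // without any unsafe block on the user's side.
    let r = thread::spawn(move || {
        let _ = h;
        "moved"
    })
    .join()
    .unwrap();
    println!("{}", r); // prints moved
}
```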

If you believe that the definitions of the current proposal are incorrect or surprising, please share your ideas of terminology improvements with me! :slight_smile:

You are right that this notion is fishy after all. I have removed it from the latest version of the proposal, and added a subsection which goes into more details of what it could mean in the “Possible extensions” section.

As I state in the “unsafe invariants” section of the proposal, I certainly agree that there is such a thing as a notion of an unsafe invariant, which is a superset of both unsafe preconditions and postconditions (we expect the invariant to be true in the beginning, to still be true at the end, and then some).

My rationale for not initially adding invariants to the proposal (but leaving room for adding them in the future) is the following question: do we need to spell out the notion of an unsafe invariant in code? Or, from an alternative perspective, can rustc’s behaviour be influenced in a beneficial way by making it aware of the existence of an unsafe invariant?

The motivation for this pre-RFC is that there is a clear benefit in exposing the notions of unsafe preconditions and postconditions in code, and in marking the distinction between them at the compiler level. Using functions as an example: unsafe preconditions mean that users need to be careful, which translates at the language level into a need to wrap the function calls in unsafe blocks. Unsafe postconditions, on the other hand, mean that implementors need to be careful in order to preserve safety, which translates into a need to wrap the return statements in unsafe blocks. This is a useful distinction, which allows usage of unsafe blocks in code to be dialed down to exactly as little as needed when compared with current usage.
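Where the unsafe blocks would land can be sketched in the proposed syntax. This is hypothetical syntax, not valid Rust today, and the function names are illustrative:

```rust
// HYPOTHETICAL proposed syntax, not valid Rust today.

// Precondition: the *caller* must be careful, so the call site
// needs the unsafe block.
unsafe(pre) fn get_unchecked(&self, i: usize) -> &T;

let x = unsafe { v.get_unchecked(0) };  // unsafe block wraps the call

// Postcondition: the *implementor* must be careful, so the value
// the function vouches for is what gets wrapped in unsafe.
unsafe(post) fn exact_len(&self) -> usize {
    unsafe { self.len }  // unsafe block wraps the returned value
}
```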

In contrast, it is less clear to me at this point in time how exposing the notion of unsafe invariants at the language level could be used in order to alter the way rustc processes code in a way that clarifies the semantics of unsafe abstractions and makes them easier to implement correctly. One obvious consequence would be that the entire implementation of a function with unsafe invariants should be wrapped in an unsafe block (as with today’s unsafe fns), but should that really be considered beneficial?

As discussed above, adding an unsafe function invariant probably means that the entire function body should be considered unsafe, as any action which the function’s code takes may accidentally break the unsafe invariant. In that sense, any code construct inside of the unsafe(invariant) function’s body becomes unsafe, including constructs which would normally be considered safe.

As discussed above, I now agree with you concerning unsafe(pre) trait, but understand existing uses of unsafe marker traits in the (stable) standard library to be about unsafe postconditions, not invariants. If you disagree about the underlying definitions from the proposal or about the rationale which together lead me to reach these conclusions, please tell me more about why.

As with traits, there is a definition problem here: what does “using a type” mean? Instantiating it, perhaps? Or perhaps manipulating the type or its members, directly or indirectly via methods? And assuming that we wanted unsafe types, how should we annotate the associated impl blocks? (see proposal)

Since, unlike with traits, we have never had nor felt a need for unsafe types so far, I have decided that instead of banging my head on this question, I should probably just refrain from adding a notion of unsafe types to the language. This keeps the proposal as minimal as possible, and thus increases its odds of being eventually accepted.

Nevertheless, the current version of the proposal features some tentative discussion of what an unsafe type would be, and how it could be added in the future as an extension of this pre-RFC. See the “unsafe types” section.

Can we even attach nontrivial unsafe contracts to types whose members are all public? I believe there is a reason why current attempts at unsafe coding styles strongly suggest putting unsafe abstractions in a dedicated code module: there are not a lot of unsafe (or even safe) abstractions which one can build on top of a type with public members…
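An illustrative sketch of why public fields defeat unsafe invariants: any downstream code can assign to the field directly, so the module can never vouch for the invariant. `Bounded` is a made-up example type.

```rust
mod bounded {
    // Intended invariant: `value <= 9`. With a public field, the module
    // cannot enforce it, so no unsafe code may ever rely on it.
    pub struct Bounded {
        pub value: u8,
    }

    impl Bounded {
        pub fn new(v: u8) -> Bounded {
            Bounded { value: v.min(9) } // the constructor upholds the invariant...
        }
    }
}

fn main() {
    let mut b = bounded::Bounded::new(3);
    b.value = 200; // ...but outside code silently breaks it, with no unsafe needed
    println!("{}", b.value); // prints 200
}
```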

I believe this comes back to the discussion of unsafe(invariant) vs unsafe(post), so I will wait for you to reply to this one.