What does `unsafe` mean?


To continue in this vein, I believe the right way to handle `unsafe` blocks is to put debug assertions directly inside the unsafe block that verify the pre-conditions necessary to avoid UB when invoking the unsafe operation/function (for example, if you are about to dereference a pointer in an unsafe block, add a debug assertion that it isn’t null), plus debug assertions after the unsafe operation/function that verify any necessary post-conditions. This way, it is 100% clear that the programmer who wrote the `unsafe` at least thought about the necessary pre/post-conditions, AND those conditions can be verified in debug mode. In release mode the assertions go away and have no overhead. If safety is absolutely critical, use release assertions instead (and accept the small performance hit).

When deciding whether to use release assertions for additional safety, first try to formulate an explanation/proof of why they aren’t necessary, and document it in a comment alongside the debug assertion. Only if you can’t construct such a satisfactory proof should you fall back to release assertions.

That being said, if you can’t formulate such a proof/explanation, you probably have other issues in your design/implementation that need to be addressed, because it is the caller’s responsibility to ensure that all pre-conditions of an unsafe operation/function are met prior to invocation. If your code, as the caller, isn’t clearly meeting those expectations, that’s a bug.

I think this is a reasonable way to approach the issue.


I’m not sure that what is needed is design by contract, but I definitely think that it is the logical conclusion of trying to push unsafe to the limits. We should implement whatever is in the right nexus of safety, power, and usability, and it’s far from clear to me that a “menu of contracts” is such a thing.

While some unsafe operations would lend themselves well to debug-mode validity checks, others would not. Never dereferencing a null pointer, for instance, is such a simple precondition that the compiler could even insert the check without the user having to write it.

Unfortunately, manually verifying the memory model is nowhere near so easy: we would need complex, sanitizer-level instrumentation, for instance, to ensure that, when casting `*mut T` to `&mut T`, the pointer is not aliased by any other safe reference anywhere in the program. This is certainly far beyond anything an author could, in general, write into a debug assertion.

FFI is even worse. It involves calling code about which Rust has absolutely no information. That code could always be buggy and corrupt memory in some way that isn’t visible even to the developer calling it. So there will generally be no way for the caller to ensure that, in context, the call is safe.

Encouraging authors of unsafe code to use debug assertions about invariants that they expect within their own code is probably a fine idea, though.


Likewise, every time Design by Contract comes up, I try to ask “what does that actually mean besides some conventions for assertions, or attributes that produce assertions?” and I never seem to get a clear answer. Do you happen to know what people mean by that?

If that’s all it is, then it seems easy to do in a library (at least once we get proc macro attributes for maximum ergonomics), which is why I’m confused whenever people talk about needing to “add DbC” to the language or change the language to “support” DbC. Even the C++20 “contracts” proposal that was recently accepted is basically attributes as assertions.


I don’t know that I know what other people mean by it, but I would take the stance that it means exactly that: assertions, conventions for assertions, and/or keywords that become assertions, with the added caveat that the compiler can, in some cases, omit the assertions from the compiled code (probably when lowering from MIR to LLVM IR) because it can decide through flow analysis that the assertion will always be true. That seems like an extremely difficult thing for the compiler to do, though.


It’s already allowed to do that with normal assertions.


Here is my understanding of what people usually refer to when they invoke design by contract:

  • Being able to state preconditions on functions, i.e. properties which must be true at the time the function is called.
  • Being able to state postconditions on functions, i.e. properties which the function guarantees to be true at the time it returns.
  • Being able to state invariants on functions or objects, i.e. properties which must remain true for the entire duration during which a function executes or an object exists.

Preconditions can be checked by inserting an assertion at the call site or at the beginning of the function itself. However, such assertions will only trigger at run-time. More advanced contract-based systems are able to prove that most preconditions will be true/false at compile-time instead, which allows (1) more static program checking and (2) optimizing the assertion out.

Postconditions can be checked by inserting an assertion at the point where a function returns, or on the call site right after the call, with the same caveats as for assertion-based precondition checking.

Invariants cannot be checked by assertions, because they model a property which is continuously true, which in a concurrent world of multi-threaded execution and recursive calls is stronger than being true e.g. before or after a function is called. Automated invariant checking basically requires a powerful type system that can do compile-time proofs, as checking invariants at run-time would be prohibitively expensive.

Another nuance between contracts and assertions is that assertions may only be applied to properties which can be checked programmatically (e.g. a number is odd), whereas contracts are a more general notion which also applies to manually verified properties that cannot be checked programmatically (e.g. a C pointer is not dangling).