What does `unsafe` mean?


Sorry for the spam… I fat-fingered the post-button. I edited the previous post some more.


Well, sure? If you hide the unsafety then obviously you don’t have to think about it. I was focusing on the part where you do have to think about it because you’re interacting directly with the unsafe elements.



Maybe I’m just being pedantic, but what I mean is that I would summarize unsafe as follows:

Any time unsafe is on a trait, function, or method, the implementer must verify that the invariants are upheld, and the user must also promise to uphold them by using an unsafe block or impl.

In other words, there is no distinction between unsafe on a trait and unsafe on a fn; the implementor and the user both need to uphold invariants, and if they do, the unsafe-ness is hidden from the “user of the user”.
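A minimal sketch of both sides of that contract (the function and its names are illustrative, not from the thread):

```rust
/// # Safety
/// `ptr` must be non-null, aligned, and point to a valid `i32`.
unsafe fn read_raw(ptr: *const i32) -> i32 {
    // Implementer side: this body must actually be safe
    // *given* the invariant the caller promised.
    unsafe { *ptr }
}

fn main() {
    let x = 7;
    // User side: the unsafe block is the caller's promise that the
    // documented invariant holds at this call site.
    let y = unsafe { read_raw(&x) };
    println!("{}", y); // prints 7
}
```

Callers of `main` never see any unsafety: both obligations were discharged inside.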


I’m not sure that having a compiler-tracked trusted effect is a good idea.

First, it is 100% OK for trusted code to call untrusted (but safe) Rust code, as long as it doesn’t actually trust any non-type invariants.

Second, trusted means what the documentation says it means, so tagging random functions as trusted does not actually mean anything (contrast with safe, which means that the function follows the Rust type-system).


There already is the possibility of putting unsafe methods into a trait, which has exactly the effect you describe.
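A small sketch of that existing possibility — an unsafe method on an otherwise ordinary trait (names are made up for illustration):

```rust
trait Storage {
    /// # Safety
    /// `i` must be less than the number of stored elements.
    unsafe fn get_unchecked(&self, i: usize) -> i32;
}

struct VecStorage(Vec<i32>);

impl Storage for VecStorage {
    // Note: a plain `impl` — only the method itself is unsafe to call.
    unsafe fn get_unchecked(&self, i: usize) -> i32 {
        unsafe { *self.0.get_unchecked(i) }
    }
}

fn main() {
    let s = VecStorage(vec![10, 20, 30]);
    // Every call site must discharge the obligation with an unsafe block.
    let v = unsafe { s.get_unchecked(1) };
    println!("{}", v); // prints 20
}
```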

So, first of all, you already said yourself above that only unsafe blocks and impls are places where someone promises to uphold invariants. Writing unsafe trait Foo is not a promise to uphold anything, making it very different from e.g. unsafe {...}.

Secondly, unsafe fn happens to permit unsafe code in its body – so there is something to verify for the implementor – but that’s because the body of an unsafe fn is considered an unsafe block. Which actually is not always desirable, and is also the source of this confusion. Imagine a world in which unsafe fn does not make the body of the function an unsafe block. Now we may write:

```rust
// Safe to call if x is 0
unsafe fn foo(x: i32) {
  println!("{}", x); // safe, benign action. Nothing to verify here.
  if x != 0 {
    unsafe { *(0 as *const i32) = 0; }
  }
}
```

The only place there is anything to verify here is the unsafe block! We have a proof obligation there to show that we will never deref a null-pointer. This is the role of unsafe blocks. Moreover, the unsafe at the function grants us the additional assumption that x is 0. This is the role of the unsafe fn. Together, these two make the function safe. In particular, since the println is outside the unsafe block, we don’t have any obligation to show anything there.

We could also have the even more nonsensical

```rust
// Safe to call if x is 0
unsafe fn bar(x: i32) {
  println!("{}", x); // safe, benign action. Nothing to verify here.
}
```

where we have nothing to prove at all but still get to make the assumption that x is 0.

I hope this clarifies why we see unsafe fn as introducing assumptions in the callee and obligations in the caller, and not vice versa. The implementor of an unsafe fn does not have to verify it for upholding anything, unless they use unsafe features themselves. Practically speaking, that will usually be the case (otherwise the function may just as well be safe), but I still find it useful to separate the places where assumptions arise (unsafe fn) and the places where invariants have to be verified manually (unsafe {}). Also, many unsafe fn will (somewhat like foo above) contain some code that actually wouldn’t need an unsafe block as the compiler can trivially verify its safety; that code does not need verification.


Yep, I think that’s basically the conclusion I’m coming to.

I wonder if it’s worthwhile to change the syntax in a future epoch. As @eternaleye points out, the fact that unsafe has multiple meanings is really confusing.


Of course, you can do that. The effect of T: unsafe Trait is that it takes an existing Trait and makes all of its methods unsafe, so that you can say T: unsafe Clone just as you could (hypothetically) say T: total Clone.


Ok, going to jump into this. I think the double meaning of unsafe creates additional confusion in this discussion, because of the implications of propagation. Right now, the rule is, “Any function which calls an unsafe function must mask it with an unsafe block or itself be unsafe,” and this looks an awful lot like an effect. But if we separate out the two meanings, and dropped the implicit unsafe { } around unsafe functions, the rule becomes, “Any call to an unsafe function must be in an unsafe block,” which doesn’t seem very much like an effect.


I don’t understand why this line of reasoning is OK tho. Why do you get to drop unsafe { .. }? If you don’t have the masking of unsafe fn, what is the point of unsafe fn, if it does not allow you to build safe interfaces atop it?


I meant dropping the feature where unsafe fn implies unsafe { }, and requiring it to be explicit in all cases. This would mean that you must always mask a call to an unsafe fn.


We discussed this further at #rust-lang.


I want to summarize my thoughts from that conversation, but there is one clarification I’d like to get first because I feel like I’m missing something. @eternaleye talked here and on IRC about how the unsafe effect is a real effect because it allows access to unsafe primitives, but that would actually mean that you can’t mask unsafety in the general case, as I understand it. Masking requires that the side effects take place effectively in an isolated environment, so that the surrounding code cannot witness the results. This is true for something like Vec, but it’s not true for places where unsafe code has extra mutability powers like mutable statics or interior mutability. Safe code can never cause these mutations, but unsafe code can, and even if you hide the unsafety behind a safe interface as RefCell and Mutex do, the side effects are not actually masked since they can be observed by safe code subsequently.
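A concrete sketch of that last point (illustrative names): even when the unsafe access is hidden behind a safe interface, the mutation of a mutable static remains observable to safe code afterwards, so the effect is not masked.

```rust
static mut COUNTER: u32 = 0;

// A safe interface that hides the unsafe access to a mutable static...
fn bump() {
    unsafe { COUNTER += 1 }
}

fn read() -> u32 {
    unsafe { COUNTER }
}

fn main() {
    bump();
    bump();
    // ...but the side effect is not masked: safe code observes it here.
    println!("{}", read()); // prints 2
}
```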


I wouldn’t say I know precisely what an “effect” means theoretically, but I noticed something different between the “async” and “try” effects and the “unsafe” one: for the former two, you can call a function that returns impl Future or Result then “apply the effect” later with await or ?, but one cannot call something unsafe from something safe then “apply the unsafety” later.

Is that a meaningful difference?
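The “try” half of that observation can be made concrete: the fallibility is reified as an ordinary Result value, which can be held and passed around before the effect is “applied” with ? (illustrative code, not from the thread):

```rust
use std::num::ParseIntError;

fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse() // the "try effect" is reified as a plain value
}

fn add_one(s: &str) -> Result<i32, ParseIntError> {
    let deferred = parse(s); // hold the effect without applying it...
    let n = deferred?;       // ...and apply it later with `?`
    Ok(n + 1)
}

fn main() {
    println!("{:?}", add_one("41")); // prints Ok(42)
}
```

There is no analogous value one can hold that defers unsafety in this way.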


If you were to construct a trait impl that invoked deferred unsafety, then you’d have a similar situation to the “async” and “try” effects. unsafe simply means that the compiler can’t deduce all of the conditions that Rust requires for safe code, so the code author must ensure that Rust’s required conditions hold. I don’t see that the relative timing at which the potential UB occurs is generally an important consideration (though it might be in some circumstances).


You can, by marking the whole function unsafe. The orthogonality of unsafe breaks down for unsafe Traits, which aren’t “invoked”, so there’s no effect. unsafe Traits are almost more compiler-enforced documentation than effects.
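A sketch of that “compiler-enforced documentation” reading (the trait and its invariant are invented for illustration, in the spirit of std’s Send/Sync):

```rust
/// # Safety
/// Implementors promise the type is valid for the all-zero bit pattern.
unsafe trait Zeroable {}

// The `unsafe impl` is the promise; there is nothing to "invoke".
unsafe impl Zeroable for u32 {}

// Downstream code relies on the documented promise: the unsafe block
// here is justified by the trait bound, not by anything the caller does.
fn zeroed<T: Zeroable>() -> T {
    unsafe { std::mem::zeroed() }
}

fn main() {
    let x: u32 = zeroed();
    println!("{}", x); // prints 0
}
```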