What does `unsafe` mean?


#21

Sorry for the spam… I fat-fingered the post-button. I edited the previous post some more.


#22

Well, sure? If you hide the unsafety then obviously you don’t have to think about it. I was focusing on the part where you do have to think about it because you’re interacting directly with the unsafe elements.


#23

@Lokathor

Maybe I’m just being pedantic, but what I mean is that I would summarize unsafe as follows:

Any time unsafe is on a trait, function, or method, the implementor must verify that it upholds the invariants, and the user must also promise to uphold those invariants by using an unsafe block or impl.

In other words, there is no real distinction between unsafe on a trait and unsafe on a fn; the implementor and the user both need to uphold invariants, and if they do, the unsafe-ness is hidden from the “user of the user”.
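
Rough sketch of what I mean (all names here are made up):

// Invariant, per this hypothetical trait's docs: len() never exceeds
// the length of the slice returned by as_slice().
unsafe trait ByteSource {
  fn as_slice(&self) -> &[u8];
  fn len(&self) -> usize;
}

struct Buf(Vec<u8>);

// The implementor promises the invariant here, in the unsafe impl.
unsafe impl ByteSource for Buf {
  fn as_slice(&self) -> &[u8] { &self.0 }
  fn len(&self) -> usize { self.0.len() }
}

// The user relies on that promise inside an unsafe block, but exposes a
// safe function, so the unsafety is hidden from the "user of the user".
fn last_byte<T: ByteSource>(t: &T) -> Option<u8> {
  let n = t.len();
  if n == 0 {
    return None;
  }
  // SAFETY: n <= t.as_slice().len() by the ByteSource contract, and
  // n > 0, so n - 1 is in bounds.
  Some(unsafe { *t.as_slice().get_unchecked(n - 1) })
}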


#24

I’m not sure that having a compiler-tracked trusted effect is a good idea.

First, it is 100% OK for trusted code to call untrusted (but safe) Rust code, as long as it doesn’t actually trust any non-type invariants.

Second, trusted means whatever the documentation says it means, so tagging random functions as trusted does not actually mean anything by itself (contrast with safe, which means that the function follows the Rust type system).
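
To illustrate the first point with a quick sketch (the function and names are made up): the unsafe block below relies only on the bounds check the function performs itself, never on any behavioural promise from the untrusted closure.

// "Trusted" code calling an untrusted (but safe) predicate.
fn position<T, F: Fn(&T) -> bool>(items: &[T], pred: F) -> Option<usize> {
  for i in 0..items.len() {
    // SAFETY: i < items.len() by the loop bound. Nothing about `pred`,
    // which may be arbitrary safe code, is trusted here.
    let item = unsafe { items.get_unchecked(i) };
    if pred(item) {
      return Some(i);
    }
  }
  None
}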


#25

There already is the possibility of putting unsafe methods into a trait, which has exactly the effect you describe.
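
For example (a sketch, names made up): an unsafe method in an otherwise ordinary trait puts the proof obligation on every caller of that method; the implementor only has something to verify if the body itself uses unsafe code.

trait Storage {
  // Contract, per the docs: callers must ensure `i` is in bounds.
  unsafe fn get_unchecked(&self, i: usize) -> u8;
}

struct VecStorage(Vec<u8>);

impl Storage for VecStorage {
  unsafe fn get_unchecked(&self, i: usize) -> u8 {
    // Relies on the caller's promise that `i` is in bounds.
    unsafe { *self.0.get_unchecked(i) }
  }
}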

So, first of all, you already said yourself above that only unsafe blocks and impls are places where someone promises to uphold invariants. Writing unsafe trait Foo is not a promise to uphold anything, making it very different from e.g. unsafe {...}.

Secondly, unsafe fn happens to permit unsafe code in its body, so there is something for the implementor to verify, but that is only because the body of an unsafe fn is considered an unsafe block. That is not always desirable, and it is also the source of this confusion. Imagine a world in which unsafe fn does not make the body of the function an unsafe block. Now we may write:

// Safe to call if x is 0
unsafe fn foo(x: i32) {
  println!("{}", x); // safe, benign action. Nothing to verify here.
  if x != 0 {
    unsafe { *(0 as *mut i32) = 0; }
  }
}

The only place there is anything to verify here is the unsafe block! We have a proof obligation there to show that we will never deref a null-pointer. This is the role of unsafe blocks. Moreover, the unsafe at the function grants us the additional assumption that x is 0. This is the role of the unsafe fn. Together, these two make the function safe. In particular, since the println is outside the unsafe block, we don’t have any obligation to show anything there.

We could also have the even more nonsensical

// Safe to call if x is 0
unsafe fn bar(x: i32) {
  println!("{}", x); // safe, benign action. Nothing to verify here.
}

where we have nothing to prove at all but still get to make the assumption that x is 0.

I hope this clarifies why we see unsafe fn as introducing assumptions in the callee and obligations in the caller, and not vice versa. The implementor of an unsafe fn does not have to verify anything, unless they use unsafe features themselves. Practically speaking, that will usually be the case (otherwise the function may just as well be safe), but I still find it useful to separate the places where assumptions arise (unsafe fn) from the places where invariants have to be verified manually (unsafe {}). Also, many unsafe fn will (somewhat like foo above) contain some code that actually wouldn’t need an unsafe block because the compiler can trivially verify its safety; that code does not need manual verification.
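
To make the caller’s side concrete (just a sketch): the proof obligation to establish x == 0 shows up exactly at the unsafe block in the caller, not anywhere inside foo.

fn calls_foo() {
  // SAFETY: we pass 0, which is exactly the condition foo documents.
  unsafe { foo(0); }
}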


#26

Yep, I think that’s basically the conclusion I’m coming to.

I wonder if it’s worthwhile to change the syntax in a future epoch. As @eternaleye points out, the fact that unsafe has multiple meanings is really confusing.