What does `unsafe` mean?


@scottmcm: Thanks for the links!

@alercah: I was only using capacity as an example – naturally, all 3 fields would be marked unsafe.

No, this is exactly the opposite of my idea – marking them unsafe would be exactly what allows you to not verify the invariants at runtime.


> No, this is exactly the opposite of my idea – marking them unsafe would be exactly what allows you to not verify the invariants at runtime.

Err, yes, I was referring to the safe fields. Any interaction between safe and unsafe fields would need to have invariants checked. Only invariants relating to exclusively unsafe fields would be required to be checked.

Ultimately, I don’t think that there’s in principle a reason that we couldn’t introduce a proof obligation like this here. It’s just that experience has shown it probably wouldn’t be that valuable.


Why not?

To me, most of what you’re saying is the standard argument why the privacy boundary must be a key part of enforcing memory safety. Whether a module with unsafe code is sound or not depends partially on what APIs it chooses to expose, and afaik that’s a fundamental inevitable fact, not something we could change by repositioning the unsafe keyword.

My understanding of past proposals for variations on unsafe fields or other unsafety syntax was that their motivation was either ergonomics (the easier it is to read/write unsafe code, the easier it is to get it right) or enabling the compiler to check for some subsets of unsound behavior, not to replace or modify the “base case” of privacy enforcing soundness. So I find it surprising and unusual that you think the idea of the privacy boundary enforcing unsafety is itself questionable, and I’d really like to dig into that if you can elaborate on what you meant by it.


@alercah, @Ixrec: What both of you are saying makes sense. And the “unsafe fields” concept doesn’t take care of everything, like the Cell example from Pre-RFC: Unsafe Types or the fact that Vec doesn’t depend only on its fields, but also what memory has been allocated (although I suppose freeing the memory is also unsafe?).

I think I have 2 main reasons to look askance at using module boundaries for safety:

  • It means anything in the same module is also unsafe (in the sense of promising to uphold guarantees), even if it’s not marked that way – I would be more comfortable reading the code if it was all marked explicitly, even if that was more verbose.
  • Ergonomics – in order to limit what code falls in the same module as the unsafe stuff, I might want to make more submodules than usual, which could involve writing a lot of boilerplate (e.g. re-exports and accessor methods). I’ve always found splitting Rust code into submodules to be a bit inconvenient for my taste.

Thinking about it, neither of these problems seem insurmountable, but I’m not sure how I would make the overall system feel clean enough to me.


I still think unsafe fields/structs are a good idea, but they should not replace the privacy boundary. They should put additional spotlight on the places where those fields are mutated. So I actually think only private fields should be allowed to be unsafe.

This could also have effects like, e.g., not deriving auto-traits once there is an unsafe field. See this post for why that might be a good idea. On the other hand this might make auto traits much less useful, so I am somewhat torn here – generally, the interaction between auto traits and private invariants is not nice.


As a small point, it seems to me that public unsafe fields are entirely compatible with keeping the privacy boundary. If the privacy concept is “a module may not publicize a way to cause memory errors without using an unsafe block”, and modifying an unsafe field requires an unsafe block, there’s no contradiction. I don’t see a strong reason to allow public unsafe fields when you can just implement safe getter methods and unsafe setter methods, but I don’t see a reason not to, either.
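For concreteness, here is a minimal sketch of the safe-getter/unsafe-setter pattern mentioned above, in today's Rust (the `Length` type and its invariant are purely illustrative):

```rust
pub struct Length {
    // Invariant: `len` never exceeds `cap`.
    len: usize,
    cap: usize,
}

impl Length {
    pub fn new(cap: usize) -> Self {
        Length { len: 0, cap }
    }

    // Safe getter: reading cannot violate the invariant.
    pub fn len(&self) -> usize {
        self.len
    }

    // Unsafe setter: the caller must promise `new_len <= self.cap`.
    pub unsafe fn set_len(&mut self, new_len: usize) {
        self.len = new_len;
    }
}

fn main() {
    let mut l = Length::new(8);
    // SAFETY: 4 <= 8, so the invariant holds.
    unsafe { l.set_len(4) };
    assert_eq!(l.len(), 4);
}
```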


Ok, given some discussion on Discord, I think it’s time to look at the idea of a trusted impl Ord for T.

This idea is that a trusted implementation of Ord fulfills the extra invariants of Ord, but to the point where unsafe code can rely on it. Now, when writing your Ord implementation, you must be careful, but the result is that unsafe code is free to optimize in such a way that you get nose demons if your implementation breaks the invariants. This is in contrast to right now, where unsafe code has to assume that trait implementations can do absolutely anything safe Rust can do, which is basically anything other than cause memory unsafety directly. Notably it includes not following any of the invariants, not even the ones that default implementations meet (since they can always be overridden).
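A rough approximation of this idea in today's Rust is an `unsafe` marker trait; `TrustedOrd` and `max_of` below are made-up names, and the real proposal differs in that the compiler would tie the promise to `Ord`'s own invariants rather than leaving everything to documentation:

```rust
use std::cmp::Ordering;

// Implementors promise, on pain of UB for code that relies on it,
// that their `Ord` really is a consistent total order.
unsafe trait TrustedOrd: Ord {}

// i32's Ord is a genuine total order, so this promise is sound.
unsafe impl TrustedOrd for i32 {}

// Generic code over `TrustedOrd` may rely on the promise; code generic
// over plain `Ord` must tolerate arbitrary (safe) misbehavior instead.
fn max_of<T: TrustedOrd>(a: T, b: T) -> T {
    match a.cmp(&b) {
        Ordering::Less => b,
        _ => a,
    }
}

fn main() {
    assert_eq!(max_of(3, 7), 7);
}
```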

In principle, there is no reason this needs to be related to generic code. trusted fn in general could simply mean “there are additional invariants which this function promises to uphold, on pain of UB”. But on a simple standalone function, this isn’t actually very meaningful: it doesn’t inherently document which invariants are promised, so the caller trying to rely on them needs to look at the documentation to see what the invariants are. But the actual trust situation ends up being no different than for a regular function, because the author of the unsafe code still knows exactly what they’re calling and is responsible for it; trusted might as well just be documentation.

On a trait, however, there can be invariants defined by the trait itself, and then a trusted implementation is saying “I meet my trait’s invariants on pain of UB”. This includes ensuring that it behaves exactly as described, including every method and the relationship between them (e.g. for trusted PartialOrd, that the various operator functions correspond exactly with cmp). The key difference is that, in this case, the unsafe code does not know anything about the implementation by default, and trusted tells it that there are things it can rely on.

This made me think about higher-order functions: what if, rather than traits, we had the idea of a trusted closure, when a trait is too unwieldy. And this in turn made me realize that trusted isn’t really about code at all, because functions are just data, and there’s no real reason that you couldn’t apply trusted to other data as well, by encoding any invariants in a type that isn’t used in a bound (imagine trusted SortedVec<T>). The difference is merely whether the invariant is necessarily upheld for an entire type (the Ord case) or merely for individual values (the SortedVec case), but note that dyn trusted Ord already blurs these lines. And, importantly, both of these cases allow a proof obligation to be moved because safe code can talk about values meeting certain invariants without necessarily being unsafe itself.
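A minimal sketch of what `trusted SortedVec<T>` might buy, approximated in today's Rust with a newtype (`SortedVec` here is hypothetical, not a real library type): the unsafe constructor is where the proof obligation is discharged, and safe code may then rely on the invariant without re-checking it.

```rust
pub struct SortedVec<T: Ord>(Vec<T>);

impl<T: Ord> SortedVec<T> {
    // Safe constructor: establishes the invariant itself.
    pub fn from_vec(mut v: Vec<T>) -> Self {
        v.sort();
        SortedVec(v)
    }

    // Unsafe constructor: the caller promises `v` is already sorted.
    pub unsafe fn from_sorted_unchecked(v: Vec<T>) -> Self {
        SortedVec(v)
    }

    // Safe code relies on the invariant: binary search is only
    // correct because the elements are known to be sorted.
    pub fn contains(&self, x: &T) -> bool {
        self.0.binary_search(x).is_ok()
    }
}

fn main() {
    let s = SortedVec::from_vec(vec![3, 1, 2]);
    assert!(s.contains(&2));
    assert!(!s.contains(&5));
}
```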

While that example encodes the invariant into the type, it could apply to data meeting constraints specified by the function as well. For example, if a function wants to be passed a valid index or risk UB, we could write fn foo<T>(v: Vec<T>, ix: trusted usize) rather than unsafe fn foo<T>(v: Vec<T>, ix: usize). This doesn’t provide much benefit in terms of moving proof obligations, but it does reduce their scope: it is clear directly from the first foo’s type that it can accept any value of v, any T, and any global state, and that the only special thing a caller needs to do is promise that ix is valid for whatever foo's definition of valid is. The second foo, on the other hand, could have arbitrary restrictions on any part of the program.

There’s more to come later, but this seems like a good stopping point and I need to get going for now. I will say that my instinct tells me that trusted SortedVec<T> is perhaps useful, but trusted usize is not likely to be.


If I strawman that differently as impl unsafe Ord for T (not unsafe impl Ord for T, which is different), it makes me think of impl const Ord for T, as another case of (uncarried) effect polymorphism for traits.

Though I think this would probably want a more precise way of saying exactly what those requirements are than we have today.


@alercah Dependent types?


@scottmcm That’s a fair point; I don’t feel that I yet understand things well enough to decide on effect-ness yet, though. :slight_smile:

@Soni no, they’re not dependent types, because the proofs are not checked by the compiler. trusted SortedVec promises to be sorted, but an unsorted one can still exist. It just might cause UB.

The problems with trusted values

Hopping back, I realize I made a mistake above with fn foo<T>(v: Vec<T>, ix: trusted usize): usize has no invariants so this doesn’t mean much. But you could make the parameter trusted ix: usize or something for the intended effect.

Anyway, where does this put us?

There’s a big gap between trusted on a type or trait, and on a parameter, because the former imply guarantees that exist independently of context. It makes sense to pass around a trusted SortedVec<T>, for instance, because the obligation is continuous: it must always be met for as long as the value exists. And because other code could do anything (even unsafe things) while the value exists, there is very little that can be promised that relates to the outside world. Likewise, trusted Trait can’t promise anything about program state or even assume that values of the type meet any invariants (unless we had something like impl trusted Ord for trusted T).

On the other hand, the trusted parameter is quite different. Its obligation is instantaneous and entirely contextual. It isn’t something that you can pass around, because other functions may not impose the same obligations. Moreover, the obligation may extend outside the parameter; in the above example, for instance, v plays a role in determining whether ix is valid. You could just as easily mark v instead of ix. So this idea isn’t super useful. I thought of some ways to stretch it further and try to encapsulate the rest of unsafety, but they rely on pedantic notions of, say, whether global state counts as a parameter, or whether you have to consider an unsafe fn to be a trusted parameter to the function call operation, and it gets messy quick.

Interestingly, this also tells us that trusted fn is not very meaningful, per our above intuition, because fn has no invariants. So it is just a trusted value if used alone.

Loose ends on trusted types/traits

A few remaining points on trusted types.

How do we create a value of type trusted T? This is a runtime operation which asserts a proof obligation, so it must be unsafe. If we actually want to implement this in the language, then &T -> &trusted T, T -> trusted T, and &mut T -> &mut trusted T are all valid conversions, but are unsafe because they require that you prove that the invariants hold. Once created, you can safely go from &trusted T to &T, and trusted T to T (possibly as coercions). You cannot, however, go &mut trusted T to &mut T, since otherwise you could do:

```rust
let mut trusted_t = unsafe { T{} as trusted T };
(&mut trusted_t as &mut T).break_invariant();
// now trusted_t is a trusted T but its invariants are broken
```

Likewise, any mutation to a trusted T is also unsafe. A type could provide methods that operate on a &mut trusted T and mutate it in a way that preserves invariants, however (more on this in a sec).
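These conversion rules can be approximated in today's Rust with a wrapper type; `Trusted` below is purely illustrative. Creation is unsafe, `Deref` provides the safe `&trusted T -> &T` direction, and deliberately omitting `DerefMut` enforces the prohibition on `&mut trusted T -> &mut T`:

```rust
use std::ops::Deref;

pub struct Trusted<T>(T);

impl<T> Trusted<T> {
    // Unsafe: the caller proves the (documented) invariants hold for `value`.
    pub unsafe fn new_unchecked(value: T) -> Self {
        Trusted(value)
    }

    // Safe: `trusted T -> T` simply discards the guarantee.
    pub fn into_inner(self) -> T {
        self.0
    }
}

// Safe: `&trusted T -> &T`. No `DerefMut`, since handing out a `&mut T`
// would let safe code break the invariant behind our back.
impl<T> Deref for Trusted<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

fn main() {
    // SAFETY: pretend we have checked whatever invariant applies to 42.
    let t = unsafe { Trusted::new_unchecked(42u32) };
    assert_eq!(*t, 42);
    assert_eq!(t.into_inner(), 42);
}
```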

It may be worth prohibiting trusted T where T is built-in, and adding an attribute indicating that a type cannot be trusted, or conversely requiring a type to opt in to accepting trusted. This would cut down on mistakes since, for a large number of types that do not have invariants, trusted T is basically a T with unsafe mutation.

For trusted Trait types, the impl is the only place an implementation can be created, and impls are immutable, so this becomes trusted impl Ord for T or what have you. Note, however, that this is not actually too different from unsafe impl. This makes sense, because unsafe trait today is, under this model, a trait which does not admit non-trusted implementations and is otherwise the same. The only differences are the bikeshed over the keyword, and the fact that some traits will always require this (or, alternatively, we would need to replace Sync with trusted Sync literally everywhere, to no benefit). Strictly speaking, we might want unsafe impl trusted Trait for ... to emphasize that trusted is a property of the trait, and unsafe is the proof-discharge indicator; this seems mostly like bikeshedding, except that I’ll now refer to Send as a trusted trait as well.

There should clearly be a relationship between trusted Trait and Trait as types; probably a subtyping one. Doing this naively could run into problems, but I think the technology that @centril et al came up with for effect polymorphism works here (not going to comment on whether this is or isn’t an effect right yet, though :wink: ).

It also serves to emphasize that trusted is not a property of any associated items, but rather for the implementation as a whole.


So now we’re back at three “basic” concepts:

  • trusted traits/implementations (including today’s unsafe trait/unsafe impl)
  • trusted types/values
  • unsafe functions/operations/block

I’ll try to think next about what kind of polymorphism we want each of these to support.


I think that in order to be useful, the trusted annotation should be checked by the compiler, and usage of non-trusted traits should be linted on.

If a keyword is only used for human documentation, then it should be a comment.


Okay, back to this.


Do we need unsafe-polymorphism here? I argue that no, we do not. An unsafe operation (or function; I’ll just say operation in the future) may carry with it an arbitrary proof obligation, defined by the operation itself. Because it’s a completely unbounded obligation, there’s no way for polymorphic code to understand what this obligation means. One example I’ve heard cited is map.

Unsafe-polymorphic map, as a method on Iterator, would be something like:

```rust
?unsafe fn map<R>(self, f: ?unsafe impl FnMut(Self::Item) -> R) -> impl Iterator<Item=R>;
```

But what does this mean in the unsafe case? map can’t really sensibly call f at all: map has no idea what the guarantees are. So map's implementor has no more information about what they can do with f than they would have otherwise. The caller is still responsible for ensuring that, based on map's documentation, f won’t be called unsafely. This is just trusted fn map, then, which I explained above I don’t think is useful.

So the caller should just write || unsafe { ... }, discharging their proof obligations in the closure. Or they can still write unsafe around the map call, since unsafe penetrates syntactically into the closure. That convenience is the only possible reason I could think of that we might actually have a use for unsafe polymorphism here. In fact, it’s not even clear to me that unsafe fn types are inherently meaningful and useful, because they can only be used with out-of-band information about the proof obligation their value carries. I’d think that in principle, there ought to be an unsafe unsafe fn -> fn conversion. But it can be implemented via a closure, so it needs no special support.

It also doesn’t make sense for trusted impls/values to be polymorphic over unsafe, for similar reasons. The proof obligation is flipped here: a trusted impl or value is promising stronger guarantees than “never commits UB”. unsafe relaxes the guarantee to “might cause UB”. So the polymorphism here would be “if the parameter is unsafe, I am not trusted.” But the parameter might as well be nose_demons(), so that just means “if the parameter is unsafe, I promise never to call it”. This rather defeats the point of passing it as a parameter.

Incidentally, this gives me my answer: unsafe is not an effect.


I can see four ideas for trusted-polymorphism at the type/value level:

  1. Functions whose results are trusted iff their inputs are.
  2. Applying trusted mutations regardless of whether the value is trusted.
  3. Allowing a field to be marked trusted independent of its parent type.
  4. Making a type’s trusted-ness contingent on that of a field.

The first is the most straightforward, and in my view the most natural case of polymorphism (#t is an effect variable here, even though I’m still not sure this is an effect):

```rust
fn into_bar<#t : ?trusted>(b: #t Bar) -> #t Foo { ... }
```

This seems important to have so I’m not going to dwell on it.

Trusted mutations are, in principle, similar:

```rust
fn rotate(&mut ?trusted self) { ... }
```

The idea here is that I promise to only do mutations that preserve trustedness. Given trusted starting state, I promise trusted ending state (I do not need to trust that I preserve trustedness throughout, because I have the unique reference so I can break invariants in the middle, though I need to watch out for panic safety), but I can operate perfectly safely on non-trusted values too; it just may be garbage in, garbage out.

I can’t just use &mut, because then I’m not making any promise about the operations I perform. I can’t take &mut trusted, because the conversion from &mut in the caller would be unsafe. So I end up needing polymorphism. But after a lot of thought I don’t think there’s any fundamental technology difference here, other than being forced to follow the restrictions of trusted mutations without the benefit of the assumptions possibly leading to annoying code.

The third and fourth relate to fields. The third is that I might have struct Foo { b: Bar } and want to have a type denoting "Foo with trusted b". I’m not sure we really have the technology for this, but we could invent it with a compelling use case. It’s also not really polymorphism but it feels like it fits here. The fourth is marking a field in such a way that f: trusted Foo would imply that f.b: trusted Bar. Being able to do this is merely convenience; Foo could promise this as an invariant and let unsafe code anywhere do the conversion.

Aside: do we actually want trusted types/values???

Ok, I’ll admit I got kind of lost in the weeds here. I don’t think that this is a useful feature in Rust as it exists, but this is an interesting direction to think about. It doesn’t really make sense to have a data structure with only one set of invariants that are safe, because the untrusted version would be kind of… bad. Imagine !trusted str, which would basically just be a [u8] with different methods. But we don’t want to say that trusted [u8] is always UTF-8, since that makes no sense. Polymorphism may also be useful when I look at traits below.

Newtypes right now are pretty heavyweight, because you have to implement all the methods and traits that you want to inherit. If they were more lightweight, we could have something like struct DefinitelyANumber(f32) with an invariant that it isn’t NaN. Then we could have impl ?trusted Ord for ?trusted DefinitelyANumber { ... }, and that in turn would enable polymorphism over whether a list of f32s promises not to have a NaN in it.
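As a concrete sketch of that newtype in today's Rust (with a checked constructor standing in for the trusted machinery), the non-NaN invariant is exactly what makes a total `Ord` implementable on top of `f32`'s partial order:

```rust
use std::cmp::Ordering;

// Invariant: the inner f32 is never NaN.
#[derive(PartialEq, PartialOrd, Clone, Copy)]
pub struct DefinitelyANumber(f32);

impl DefinitelyANumber {
    // Checked constructor: establishes the invariant at runtime.
    pub fn new(x: f32) -> Option<Self> {
        if x.is_nan() { None } else { Some(DefinitelyANumber(x)) }
    }
}

impl Eq for DefinitelyANumber {}

impl Ord for DefinitelyANumber {
    fn cmp(&self, other: &Self) -> Ordering {
        // Never panics: the invariant rules out NaN, the only source
        // of incomparability for f32.
        self.0.partial_cmp(&other.0).unwrap()
    }
}

fn main() {
    let a = DefinitelyANumber::new(1.5).unwrap();
    let b = DefinitelyANumber::new(-0.5).unwrap();
    assert!(b < a);
    assert!(DefinitelyANumber::new(f32::NAN).is_none());
}
```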

Of course, it’s a bit unfortunate that this ends up requiring new types, because there’s no difference in the untrusted case between f32 and DefinitelyANumber, and there isn’t really such a thing as trusted f32. So perhaps a direction would be to have named invariants, so that we could do something like impl ?trusted Ord for ?trusted[DefinitelyANumber] f32. This would also allow expressing multiple invariants simultaneously, without creating an n^2 problem.

So maybe this would be useful at some point.

And that took me way longer than expected, so I’ll handle polymorphism in the last case tomorrow.


@alercah Your “trusted types” to me sound a lot like what the literature calls “refinement types”, do you think there is a relation?


How is that different from the Trust* trait pattern we are already using, e.g. for TrustedLen? It seems to be the same to me, i.e. trusted impl could be understood as syntactic sugar.


Nice write-up on unsafe-polymorphism. One use case I’ve heard people wanting is ergonomic unchecked indexing with arr[idx]. Ostensibly, you could do this with some sort of unsafe Index (putting aside inference issues for a moment…). As you point out however, it is unclear what the proof obligations are and who defines them. I see 3 possibilities:

  1. the proof obligation is defined by the implementing type (i.e: Vec<T>). This is mostly useless because we can’t handle Idx: unsafe Index generically.

  2. the proof obligation is defined by the caller of Index::index. This becomes very strange because then different callers of Index::index have different proof obligations but implementors of unsafe Index have no way to know about these.

  3. the proof obligation is defined by the trait (Index). For example, this could mean that the caller of Index::index must uphold that the element is within bounds. I think this is the only sane interpretation of unsafe Index that is also useful and makes me conclude that unsafe is an effect.

What I think could be useful is to qualify the proof obligations as in unsafe(sort_actually_sorts) MyTrait as you did in a previous comment.


I’m particularly interested in this idea of named invariants. I had the same thought while reading your first big post, for both the reason of “why is SortedVec a different type than Vec?” and “it bugs me that trusted/unsafe imposes very important requirements while the compiler has no information about what those requirements are, or even whether they’re the same requirements between one object and another”.

Named invariants could also (sometimes, but not always?) define a way to check them at runtime, which could help with debugging. (Obviously, the idea is that you wouldn’t check them at runtime in a release build, but it could help detect errors in debug builds the same way as our bounds-checked arithmetic. Some invariants would be too slow to check all the time even then, but there could be compiler flags for how aggressive the checks are.)
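A rough approximation of that debug-only checking with today's tools, where `ValidIndex` is the hypothetical named invariant from the discussion: `debug_assert!` compiles away in release builds, much like overflow checks are on by default only in debug builds.

```rust
// The caller promises the (named) invariant ValidIndex(v, ix): ix < v.len().
fn use_valid_index<T>(v: &[T], ix: usize) -> &T {
    // Checked only in debug builds; compiled out in release.
    debug_assert!(ix < v.len(), "ValidIndex violated: {} >= {}", ix, v.len());
    // SAFETY: the caller promised ValidIndex(v, ix).
    unsafe { v.get_unchecked(ix) }
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(*use_valid_index(&v, 1), 20);
}
```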

Might it be worth revisiting the idea of trusted parameters if their invariants could be named? Speaking of which:

Why not define the rules so that you must mark v and ix if the relationship between them matters? Or… Just spitballing here, but fn foo<T>(v: Vec<T>, ix: usize) where (v, ix): trusted[Vec::ValidIndex]


@RalfJung It’s similar to refinement types, except I didn’t propose any automatic (i.e. compiler-assisted) verification of predicates. It’s possible that this is a useful direction to take things in the future, though.

As for Trust* traits, I think that these are roughly the same idea, in a similar dimension. In particular, trusted[Len] impl for T seems to be very much the same idea (and since traits are bounds, we can already compose them together, like we could have T : TrustedLen + TrustedFused; we don’t need new technology like we would for types). Ord stands out, I think, because we are in a situation where we have backwards-compatibility issues (we have to support existing untrusted implementations) and there is potentially value in a non-trusted implementation, for things like sorting where we don’t need to risk unsafety.

@centril I agree that the third option is the only one that makes sense, but I’m not sure how it leads to the conclusion that unsafe is an effect. If we have Index : unsafe Index, then the callee has to assume that it’s being called through a safe interface. This would only be useful for types implementing unsafe Index but not Index. For Vec<T>, say, the implementation wants to actually be able to condition on whether the caller makes the in-bounds guarantee, and that means it requires overloading, which isn’t effect-like.

@elidupree Yeah, named invariants (and refinement types, which I thank @RalfJung for reminding me exist) are a very neat space. Your idea for expressing multi-valued trusted relationships is an interesting one that’s definitely worth considering, but I think that it still suffers the same basic flaw that it doesn’t pass through the type system. In your example, what happens to v and ix inside the function? Are they treated as trusted values that suffer the same restrictions on mutation? Can the function pass the trusted (v, ix) relationship on to a future callee?

If we could actually make these pass along usefully, and error with a diagnostic like "(v, ix) is no longer trusted because v could have been modified here", then perhaps there’s an actually useful language feature down this path. I’m not sure. :slight_smile:


Let me think about this! It would definitely be cool if the compiler would be able to understand more about trusted values. For instance, imagine this (don’t mind the awkward syntax for now):

```rust
// Function with a postcondition
fn get_valid_index<T>(vec: &Vec<T>) -> usize
    guaranteeing (vec, returnvalue): trusted[ValidIndex]
{
    assert!(vec.len() > 0);
    unsafe { 0 } // unsafe block needed to promise we uphold the postcondition
}

// Function with a precondition, like slice::get_unchecked()
fn use_valid_index<T>(vec: &Vec<T>, index: usize)
    where (vec, index): trusted[ValidIndex] {…}

fn do_something<T>(vec: &Vec<T>) {
    let index = get_valid_index(vec);
    // unsafe block NOT needed because the precondition is met by the postcondition
    // of get_valid_index, so if any memory errors result from breaking it, the
    // fault can be attributed to get_valid_index rather than do_something
    use_valid_index(vec, index);
}
```

To go a step further, slice::get_unchecked() wouldn’t have to be a separate function – slice::get() could simply be polymorphic on whether the index was trusted!

Now, I think we might run into a lot of limitations, especially related to interior mutability. If index was a Rc<Cell<usize>> instead of just a usize, you wouldn’t be able to guarantee that it was still valid after some seemingly unrelated operations.

For now, I’m just going to throw together a bunch of ideas without making them rigorous, because I don’t feel like I understand the system well enough to make it rigorous yet anyway. Let’s start with how to handle ordinary mutation of trusted data:

```rust
// option 1: mutation removes trusted status
let mut index = get_valid_index(vec);
use_valid_index(vec, index); // ok
index += 1;
use_valid_index(vec, index); // compile error
// but other code can restore it, such as functions with postconditions
change_to_valid_index(vec, &mut index);
use_valid_index(vec, index); // ok
// and you can explicitly make a value trusted using an unsafe block
index += 1;
unsafe { guarantee (vec, index): trusted[ValidIndex]; }
use_valid_index(vec, index); // ok
```

```rust
// option 2: mutated values are still trusted
let mut index = get_valid_index(vec);
unsafe { index += 1; } // unsafe block required: this isn't normally guaranteed to preserve the invariant
use_valid_index(vec, index); // ok
let mut temporary_index = index; // move a trusted value to implicitly convert it to not-trusted
temporary_index += 1; // safe to mutate values that aren't trusted
change_to_valid_index(vec, &mut temporary_index);
index = temporary_index; // no unsafe required because of the postcondition?
use_valid_index(vec, index); // ok
```

I suppose the above code would have to know that Vec doesn’t have any interior mutability.

Things get a bit more complicated when you want to take a &mut to one of the values in the trusted relationship. You might want to do this:

```rust
let index = get_valid_index(vec);
vec.sort();
use_valid_index(vec, index);
```

But, help! vec.sort() implicitly takes &mut vec! Even though we know that the sort function can never change the length of the vector, the compiler doesn’t. (For now, never mind that we could make this simpler by using a slice instead of Vec – this is an example situation that might need to be dealt with for other things.) Intuitively, we might want some sort of &trusted[LengthPreserving] mut vec which is only safe to mutate in limited ways, but can be taken safely even when there are certain trusted relationships. But there are 2 challenges: first, the ValidIndex invariant has to permit creating such a reference, and second, the sorting function has to be able to accept such a reference. But maybe there’s a way to do those things. What if…

```rust
invariant ValidIndex<T>(vec: trusted[LengthPreserving] mut Vec<T>, index: usize);
```

This indicates that the invariant is implicitly maintained even when you do arbitrary length-preserving mutations on the vector. And although this invariant is written as applying to a mutable vector, it can implicitly be applied to a &Vec the way I was doing above, because &trusted[LengthPreserving] mut Vec coerces to &Vec.

The syntax for indicating permissible mutations might give some (disappointing) insight on the interior mutability issue – with interior mutability, you couldn’t define an invariant that was maintained even with a &T.

Now, what about the sorting function? Well, any slice operation is length preserving, so…

```rust
impl<T> ops::DerefMut<?trusted[LengthPreserving]> for Vec<T> {
    fn deref_mut(&?trusted[LengthPreserving] mut self) -> &mut [T] {
        unsafe { /* … */ }
    }
}
```

which states that you can get a &mut [T] regardless of whether you need to preserve the length of the original vector. I’m also noticing that this is a concrete use case for, as you put it, “being forced to follow the restrictions of trusted mutations without the benefit of the assumptions”.

Well, that was sure some thinking. In summary, I haven’t gotten a system really pinned down, but I feel like this could go somewhere…


This is starting to also make me think about design by contract features, like in Eiffel.


I feel obliged to quote my last attempt at attacking this problem if we’re reaching that point.

I got exhausted from arguing with obviously unconvinced people at the end, so the top post is not fully up-to-date w.r.t the final state of the discussion. There were in particular some interesting points about unsafe values at the end which got close to what you are currently discussing.


I find it interesting/telling that every time (at least seemingly) the topic of “What does unsafe mean?” or “How to make ‘unsafe’ more useful?” etc. comes up, those new to the topic almost always end up getting to something like, “Well, what is needed is some kind of ‘Design by Contract’ system”.

It seems to be a pretty common theme/idea/conclusion that everyone eventually arrives at.

I think it is because that is really what “unsafe” is about. It marks places where there is an implicit contract that cannot be expressed through the type-system and syntax that must be upheld by callers. If only Rust had “design by contract” pre and post predicates then all would be well. This overlooks the fact that many of the kinds of implicit contracts that are in unsafe code could never be expressed as a meaningful predicate.

One way to tackle this, I think, is to take every instance of “unsafe” and try to formulate predicates (pre/post) such that, if the compiler could verify them, “unsafe” wouldn’t be necessary. In many or most cases I think you’ll find that the kinds of predicates needed are simply unverifiable at compile-time and so would have to become effectively run-time assertions (which we already have) that panic.

At least that is the conclusion I came to in my own journey to understand “unsafe”.