I fully admit that I may be being a bit aggressive here, and may be over-applying a stretched conclusion. But I hope that, with the other notes here, I can argue why this wouldn’t be a horrible place to end up.
The direction I pulled this in, namely where unsafe {} blocks should end, is one I’ve seen discussed in a few places: no one seems to be entirely certain where an unsafe block should end. The nomicon discusses this: changing safe code can break the invariants that unsafe code relies on, because safety is a global concern.
I don’t see any reason why this would require unsafe blocks around safe code. Safe code, by definition, cannot put values into an unsafe state. I guess, in the overly-aggressive-sanitizer specification, between every line of code is an assertion that every place is valid, and between every safe line of code is an assertion that every place is safe. (In my example I’m using field assignment as a proxy for writing out unsafe setters.)
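To make the sanitizer reading concrete, here’s a minimal sketch, where the `Even` type and the `check_safe` function are hypothetical stand-ins for a safety invariant and the conceptually-inserted assertion:

```rust
struct Even(u32); // invariant: the wrapped value is even

// Stand-in for the assertion conceptually inserted between safe statements.
fn check_safe(e: &Even) {
    debug_assert!(e.0 % 2 == 0, "safety invariant violated");
}

fn double_then_add(e: &mut Even) {
    check_safe(e); // the place is safe here
    e.0 *= 2;      // safe code: cannot break the evenness invariant
    check_safe(e); // ...so this still holds
    e.0 += 2;      // still even
    check_safe(e); // ...and so does this
}
```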
This means that it’s still fine for safe code to sit between the unsafe operations that temporarily violate safety; the unsafe {} block just delimits the scope in which values may not be in a safe state.
I don’t think it’s an unreasonable goal to say that structs should minimize the amount of time they’re in a not-safe state; after all, being in such a state means you have to be really picky about panic points while the type is inconsistent. It seems useful to be able to say “this isn’t in an unsafe block, thus the types are in a safe state”, which also implies panic safety.
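As a sketch of that discipline, assume a hypothetical `Buf` type whose safety invariant is that `data[..len]` is initialized; the invariant is broken and repaired entirely inside a single unsafe {} block, with no panic point in between:

```rust
use std::mem::MaybeUninit;

struct Buf {
    data: [MaybeUninit<u8>; 16],
    len: usize, // safety invariant: `data[..len]` is initialized
}

impl Buf {
    fn new() -> Self {
        Buf { data: [MaybeUninit::uninit(); 16], len: 0 }
    }

    fn push(&mut self, byte: u8) {
        // Panic point, but the invariant still holds here.
        assert!(self.len < 16, "buffer full");
        unsafe {
            // Invariant violated: `len` briefly claims one more initialized
            // byte than actually exists...
            self.len += 1;
            // ...and is repaired before the block ends, with no panic point
            // in between.
            self.data.get_unchecked_mut(self.len - 1).as_mut_ptr().write(byte);
        }
        // At the closing brace, `Buf` is back in a safe state.
    }
}
```

(Writing the byte first and bumping `len` second would avoid the violation entirely; the point is only that the unsafe block is exactly the region where the invariant may not hold.)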
My interpretation boils down to whether this is the desired state of affairs. If we say that a reference is only safe if it points to a safe T, then it falls out of that definition that wherever safety holds, dereferenceable-n-times holds as well, and thus the deref optimization is valid.
I don’t think it’s much of a burden to require arguments to a safe function to be in a safe state. And the deref optimization shows that it’s useful in at least one case. I don’t disagree that a singular model covering both safe and unsafe would be simpler, though.
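For illustration, this is the shape of code the deref optimization cares about, as a sketch rather than what the optimizer literally sees:

```rust
// A safe `&i32` must point to a valid i32 for the entire call (Rust emits
// LLVM's `dereferenceable` attribute for it), so the compiler may load `*r`
// speculatively before knowing which branch runs, e.g. to generate
// branchless code. That hoist would be unsound if a safe `&i32` were
// allowed to dangle or point at invalid memory.
fn pick(r: &i32, cond: bool) -> i32 {
    if cond { *r } else { 0 }
}
```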
For my own code, however, I know I have a strong preference for the former blocking of the two, as the unsafe {} block should be the unit of unsafety where possible.
I realize that safe code being able to put a same-module type into an unsafe state complicates things a lot, and that this musing is likely more restrictive than the eventual model wants to be. But I think it’s an interesting way of considering it, at the very least.
I tend to make and use unsafe setters rather than doing direct field access for most of my types anyway, so sometimes I forget that field access is always safe.
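For what that pattern looks like, here’s a minimal sketch with a hypothetical `Counter` type:

```rust
struct Counter {
    value: u32, // invariant: always even
}

impl Counter {
    /// Unsafe setter used instead of direct field assignment (which Rust
    /// always treats as safe, even when it can break the invariant).
    ///
    /// # Safety
    /// `v` must be even, or code relying on the invariant may misbehave.
    unsafe fn set_value(&mut self, v: u32) {
        self.value = v;
    }
}
```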
The way I’ve always thought of partial initialization is that the compiler treats the fields as distinct places until the full place is initialized. So to use the common example:
let tuple: (T, U);
// do some work
tuple.0 = make_t();
// do some work with tuple.0
tuple.1 = make_u();
// do some work with tuple.1
return tuple;
This would become, effectively:
// do some work
let tuple.0 = make_t();
// do some work with tuple.0
let tuple.1 = make_u();
// do some work with tuple.1
return (tuple.0, tuple.1);
with the additional guarantee that tuple.0 and tuple.1 are arranged in memory such that tuple is just the memory place covering both tuple.0 and tuple.1, and is valid once both tuple.0 and tuple.1 are valid.
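A compiling approximation of that desugaring in today’s Rust, with hypothetical `make_t`/`make_u` parameters, keeps the components in distinct locals and only assembles the tuple once both halves exist:

```rust
fn build<T, U>(make_t: impl FnOnce() -> T, make_u: impl FnOnce() -> U) -> (T, U) {
    let tuple_0 = make_t();
    // do some work with tuple_0
    let tuple_1 = make_u();
    // do some work with tuple_1
    (tuple_0, tuple_1) // the full place exists only once both parts are valid
}
```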
So from that, we can derive that partial initialization of a partially uninhabited[^1] place is allowed, just that it is impossible for the place to be fully initialized. The valid subset of the place can be initialized, but the uninhabited subset cannot be validly initialized.
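As a sketch of that, take a hypothetical `Never` type (an empty enum, like the unstable `!`):

```rust
enum Never {} // uninhabited: no valid members exist

fn demo(x: u8) {
    let _pair: (u8, Never);
    // Hypothetical partial initialization (not accepted by today's Rust):
    // _pair.0 = x;   // fine: the inhabited u8 subset can be initialized
    // _pair.1 = ...; // impossible: there is no value of `Never` to write
    let _ = x;
}
```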
[^1]: where “uninhabited” is defined as lacking any valid members.