I’ve been wondering about this. I think the suitability hinges very much on what you call a type-system invariant.
I’ve been trying to write up some thoughts in response to this thread, and one of the insights I came to is that I was conflating various definitions of “abstraction boundary”. For example, one definition is:
- the scope where you find the justification for why things are safe
This lends itself to “bigger” boundaries. For example, in a function like this one:
fn foo() {
    let mut x: i32 = 22;
    let p: *mut i32 = &mut x; // raw pointer to a local that is clearly live
    let q = unsafe { *p };    // proof obligation: p must be valid to dereference
}
Here, the unsafe block itself is not a boundary in this “logical” sense. Rather, the entire function is, since it is only by looking at the whole function body that I can prove that p is a valid pointer for me to dereference here.
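To push on that a bit: the justification scope can easily grow past a single function, too. Here’s a toy sketch of my own (not something from the thread) where the safety argument lives in the whole impl that maintains an invariant, rather than in any one function:

```rust
struct Buf {
    data: Vec<u8>,
    len: usize, // invariant: len <= data.len()
}

impl Buf {
    fn get(&self, i: usize) -> u8 {
        assert!(i < self.len);
        // SAFETY: i < self.len, and every method of Buf maintains
        // len <= data.len(), so the index is in bounds. The justification
        // spans the whole impl, not this block or even this function.
        unsafe { *self.data.get_unchecked(i) }
    }
}
```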
But the words you used were different. They were:
- the scope within which the compiler cannot trust types
In that version, the code I wrote above is fine as written: the unsafe block itself marks the scope where the compiler cannot trust the types.
This comes back, to some extent, to the age-old debate about whether you should write “big” or “small” unsafe blocks (sketched below). The “big” unsafe blocks people often argue for are more in the “justification” style, while the smaller blocks correspond to “the points where I need to prove something”, which roughly aligns with “the scope where the compiler can’t trust types”. At minimum, the small block actually seems to me to act more like a traditional barrier (that is, code can’t be moved around it), but one that is scoped to the enclosing “justification boundary”.
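For concreteness, here is roughly what the two styles look like on the same toy function (foo_big and foo_small are just names I’m making up for illustration):

```rust
// "Big" block: the unsafe region coincides with the justification scope.
fn foo_big() {
    unsafe {
        let mut x: i32 = 22;
        let p: *mut i32 = &mut x;
        let _q = *p;
    }
}

// "Small" block: only the operation that carries a proof obligation is marked;
// the justification (that p points at the live local x) still lives in the
// surrounding function body.
fn foo_small() {
    let mut x: i32 = 22;
    let p: *mut i32 = &mut x;
    let _q = unsafe { *p };
}
```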
Put another way, I think that the unsafe keyword was designed to act as the “justification boundary” – but that’s not really the boundary that the compiler cares about when optimizing. We can align the two – which is basically what my proposal was saying – but it will result in less optimization than we would otherwise get.
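To make the optimization point a bit more concrete, here’s the kind of situation I have in mind (a hand-wavy sketch of my own, not a description of what rustc actually does today):

```rust
fn read_twice(r: &i32, p: *mut i32) -> (i32, i32) {
    let a = *r;
    // If only this block is the "can't trust types" region, the compiler is
    // still free, outside of it, to lean on the type of r, e.g. that a shared
    // &i32 isn't mutated behind its back.
    unsafe { *p = 0; }
    let b = *r;
    // If the whole function body were the untrusted region instead, it would
    // have to be much more conservative about whether a and b can differ.
    (a, b)
}
```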
Hope that this makes sense. I’m still trying to write this up in a more structured way, so these are just my off-the-cuff thoughts for now.