"The optimizer may further assume that allocation is infallible"

It does currently!

pub fn test() {
    let x = [0u64; 128];  // sub	rsp, 1104
    // let x = [0u64; 8]; // sub	rsp, 152
    println!("{:p}", &x);
    test() // call	qword ptr [rip + playground::test@GOTPCREL]
}

Formatting IO is a bad example to pick; it's extremely opaque to the optimizer and blocks most optimization as if it were a fully unknown extern fn.

Except that's exactly what's under question. The question is how to justify optimizing out resource consumption when code is able to observe states which imply resource exhaustion has occurred.

No practical program is going to exhaust the memory space in a way that the program can both detect and the optimizer could prevent. But if you want your compiler to be correct, "practical" isn't an argument; you need to be correct for all valid programs.

The "solution" is "don't do that." But Rust exists because we dare to believe that "don't do that" isn't a great solution, and there are better options available.

In non-generic code, it is, due to the division of compilation units. For generic/templated code, it's comparable.

malloc semantics in LLVM are actually achieved by putting noalias (restrict) on the return value, so they're present, just invisible; the semantics are inferred rather than written by the user.

Either we say live objects can have the same .addr(), or .addr() becomes side-effectful. Since .addr() semantically returns only part of the pointer, it's much simpler to justify .addr() overlap than .expose_addr() overlap.

Sure, on one hand it's just shifting the problem. But in shifting the problem, it moves it into a paradigm that acknowledges (more of) the problem. After all, what gets more people to acknowledge the difference is CHERI making pointers 128+1 bits while .addr() stays 64 bits, which is obviously a modular reduction of the pointer space.

I agree here, which is why object overlapping / resource (non-)exhaustion are relatively lower-priority "holes" in the model. We would very much prefer to plug them so it's possible to fully justify optimizing transforms as correct, but working in the softer world of pretending it's a non-issue is typically sufficient.