In addition to what @Ixrec wrote, may I point you at this blog post of mine?
The short summary is that UB is a contract between the compiler and the user; it lets the compiler rely, during optimizations, on assumptions that it would have a hard time establishing with its own analyses, so it asks the user to prove them instead.
The term “undefined behavior”, and indeed its treatment in C, seems to sometimes lead to the impression that undefined behavior is about not defining something. That’s not really true. It is important to have a precise definition of which programs have undefined behavior, because compiler authors need to know exactly what they can rely on and programmers need to know exactly what they have to prove! What is “undefined” about UB is what the program does when it exhibits UB. This is very different from defining whether a program exhibits UB.
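To make the “contract” angle concrete, here is a minimal sketch (my own illustration, not from the post): the caller promises that the index is in bounds, and the compiler may rely on that promise instead of establishing it itself.

    // Illustration only: the safety contract shifts the proof obligation to the caller.
    unsafe fn read_elem(data: &[u8], i: usize) -> u8 {
        // Contract: `i < data.len()`. Violating it is UB.
        // Because the compiler may assume the contract holds, no bounds check is emitted.
        unsafe { *data.get_unchecked(i) }
    }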
Good point. This is what I think is usually called “external non-determinism” in the literature.
I agree that this “internal” non-determinism is different, but I disagree that it only happens in theory. It happens in practice, and is very observable, for example with concurrency: The following program may print either 0 or 1, and the compiler is free to optimize it into a program that always prints the same one of the two:
    use std::sync::atomic::{AtomicUsize, Ordering};

    fn main() {
        let x = AtomicUsize::new(0);
        // rayon::join runs the two closures concurrently (rayon is an external crate).
        rayon::join(
            || x.store(1, Ordering::Relaxed),
            || println!("{}", x.load(Ordering::Relaxed)),
        );
    }
This program inherently has several ways in which it can execute (independent of what the environment does). The compiler is at liberty to preserve this non-determinism or reduce it, ruling out some possible executions. Many optimizations the compiler performs around memory accesses have the effect of ruling out some executions.
If any of these possible executions has UB, the compiler can hence eliminate all of them except for the one with UB, and then proceed assuming the program has UB.
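To spell that out with a made-up variant of the program above: suppose one of the two possible outcomes of the racy load leads to UB.

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Hypothetical sketch: the load may observe 0 or 1 (internal non-determinism).
    fn observe(x: &AtomicUsize) {
        if x.load(Ordering::Relaxed) == 1 {
            // This execution has UB. The compiler may resolve the non-determinism
            // to exactly this case, and from then on assume the program has UB.
            unsafe { std::hint::unreachable_unchecked() }
        }
    }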
I am surprised that this is UB. x_inner no longer has anything to do with x; it is a copy (reborrow) of the inner reference inside the Option. Does this happen because taking a borrow out of x means that the borrow of x has to outlive x_inner?
Correct. This is needed, I think, to get the desired behavior from methods like split_at_mut that “reborrow” through a raw pointer: After let y = slice.split_at_mut(2).0, when slice is used again, y should become invalid just like it would if this were a normal reborrow.
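Something like this sketch (my paraphrase of that example) shows what I mean:

    fn demo(slice: &mut [i32]) {
        // `y` reborrows `*slice`; internally, split_at_mut goes through raw pointers.
        let (y, _) = slice.split_at_mut(2);
        y[0] = 1;
        // Using `slice` again should invalidate `y`, just like ending a normal reborrow.
        slice[0] = 2;
    }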
Come again? How are those defined, and how do they behave when casting from and to integers?

Hm, so with Stacked Borrows something like this might work if initiating a reference happens on its first use, not on its creation. (No idea where there is a “cactus” here.^^) But that seems way too permissive to me.
What is the argument for making that code legal? I cannot see the connection to the example in the issue you linked. And I think allowing what you asked for in that issue is in direct contradiction to optimizations we might want to allow, like inserting spurious writes to mutable references when we know that does not change the value (which could introduce a data race if the reference is actually shared).
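As a sketch of the kind of optimization I mean (my own example, assuming &mut T guarantees exclusive access):

    fn bump(r: &mut i32) {
        let v = *r;
        // A compiler could insert a spurious write such as `*r = v;` here without
        // changing the program's behavior, because `&mut i32` promises exclusivity.
        // If `r` were secretly a shared reference used by another thread,
        // that inserted write would be a data race.
        *r = v + 1;
    }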
“mutability polymorphism” should be solved on the type-system level, not by weakening what the reference types mean.