This basically requires formalizing the application requirements in a form the compiler can understand, and then the compiler would have to perform code synthesis, automatic theorem proving, or something similar to ensure that the specification is always upheld. I don't think anything like that is even remotely realistic with the current state of the art.
It isn't necessary to express every possible set of application requirements perfectly, in a manner that avoids mandating anything beyond what is actually needed. The more accurately programmers can express what is and isn't needed, however, the more efficiently an optimizer should be able to meet the actual application requirements.
My complaint is that the developers of compilers, as well as of LLVM itself, seem to have been operating on the assumption that the C Standard sought to fully and accurately describe everything that applications should need to do, and thus modeled the range of concepts expressible in LLVM on those that could be specified in C without UB. This ignores the fact that the authors of the Standard stated back in 1992 that UB was intended to, among other things, "identify areas of conforming language extension", and that many real-world tasks which wouldn't be supportable on all imaginable platforms should be performed using such "popular extensions". The design of LLVM thus suffers from being focused more on what the C Standard mandates than on the varied needs of actual applications, and the designs of languages that target LLVM are severely constrained as a consequence.
The concept of having non-deterministic values, but giving programmers a means of "freezing" them, would allow optimizers a huge amount of flexibility while remaining fairly easy to validate, provided programmers forcibly freeze things at the places necessary to keep the range of possible behaviors from exploding beyond comprehension. If a huge number of objects might end up in non-deterministic states, but their values will ultimately be frozen or ignored, it should often be possible to reason about the overall behavior of the code that generates them ("fill the objects with unspecified values and freeze them") without having to examine the behavior of every individual step therein. Such reasoning will be trivial if none of the individual steps could have any side effects beyond yielding possibly non-deterministic values. It will be massively more difficult, however, if any of the individual steps could produce "regard as impossible any situation that would lead to this" UB.
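To make the idea concrete, here is a minimal sketch in C. The `freeze_int` helper is only a crude stand-in (an empty GCC/Clang asm block that forces the compiler to commit to one value); present-day C does not actually promise these semantics for uninitialized reads, so treat this as an illustration of the concept rather than a working guarantee.

```c
/* Crude stand-in for a "freeze" primitive: the empty asm forces the
   compiler to commit to a single concrete value (GCC/Clang extension). */
static inline int freeze_int(int v) {
    __asm__("" : "+r"(v));
    return v;
}

int clamp_index(void) {
    int x;                  /* never written: notionally non-deterministic */
    int y = freeze_int(x);  /* pin x to one single unspecified int */

    /* Once frozen, y behaves like any other int: the range check and the
       later use observe the same value, so the result is always 0..9.
       Without a freeze, each use of x could in principle act as a
       different value, and the check would guarantee nothing. */
    if (y >= 0 && y < 10)
        return y;
    return 0;
}
```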
Further, there are many situations where a programmer might know that a situation will never arise when the program is given valid data that it would be required to process usefully, but not be certain that it couldn't arise as a consequence of invalid data. Such situations could be most usefully handled by a directive whose meaning would be "in debug mode, trap; in release mode, at the compiler's convenience, either ignore this directive or trap so as to make the following code unreachable". In cases where an "assume unreachable" directive would have yielded big benefits, a compiler could still reap those benefits, minus the cost of the trap; but unlike the former directive, which could arbitrarily alter program behavior, this alternative directive would have only two possible consequences: (1) trap, or (2) do nothing. IMHO, for many purposes, that would offer a much better compromise between safety and performance than an "assume unreachable" directive.
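One way to picture the intended semantics is the sketch below; the macro name is purely illustrative, and a real implementation would live in the compiler, which would also be free to keep the trap in release builds whenever doing so pays for itself.

```c
#include <stdlib.h>

/* Illustrative only: a real version of this directive would be a compiler
   feature rather than a macro. */
#ifndef NDEBUG
  /* Debug build: always trap when the stated condition does not hold. */
  #define CHECK_OR_ASSUME(cond)  ((cond) ? (void)0 : abort())
#else
  /* Release build: shown here as "ignore"; a compiler implementing the
     directive could instead emit the trap whenever that is cheaper than
     compiling the following code without the assumption. */
  #define CHECK_OR_ASSUME(cond)  ((void)0)
#endif
```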
A compiler need not find every possible benefit that could be derived from such directives in order for them to be useful. In many cases, the directives would provide a concise way of documenting preconditions that would need to hold for a piece of code to behave usefully, even if compilers ignored them. In many other cases, it would be fairly easy for a compiler to reap benefits: evaluate the cost of processing the following code when e.g. x is known to be even [which would, among other things, allow x/2 to be replaced with x>>1], compare it with the cost of processing the code without that knowledge, and if the difference exceeds the cost of a bit test and trap, generate the bit test and trap. Note that even if a compiler guessed wrong about which course of action would be most efficient, both behaviors would be correct; proving the correctness of the transformation would thus be trivial.
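Continuing the illustrative CHECK_OR_ASSUME sketch above, the even-divisor case might look like this; whichever lowering the compiler picks is correct.

```c
int halve(int x) {
    CHECK_OR_ASSUME((x & 1) == 0);   /* caller promises x is even */

    /* With the evenness assumption, x / 2 may be lowered to x >> 1; the
       two only differ for odd negative x. A compiler that keeps the
       test-and-trap gets the cheap shift; one that ignores the directive
       simply keeps the ordinary division. Both behaviors are correct. */
    return x / 2;
}
```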
If languages sought to give programmers ways to invite optimizations, there are many things they could do which would be far easier to prove correct than optimizations based upon UB. There may be a few applications for which UB-based optimizations would be a good fit, but for most applications they make things needlessly difficult for programmers and compiler writers alike, compared with what better languages could offer.