The actual problems with a Miri for C have more to do with nondeterminism. C has lots of unsequenced evaluations in it, a sort of local parallelism in the semantics for no particularly good reason (it almost never translates to actual parallelism, but it licenses some operation reorderings by the compiler). To properly catch UB in this situation, you have to analyze all interleavings (and even the non-interleaving executions, where causality is a partial order, not a total order). This is worst-case exponential, because UB can be caused in any one of those interleavings.
Ah, I think the point of contention with the notion of loosely-defined behavior is that efficiently producing reliable diagnostics for erroneous behavior requires partitioning all constructs into those whose behavior is fully deterministic and those which are erroneous. I will certainly grant that in many cases it is more useful to have a program stick to fully deterministic defined behaviors, and thus be compatible with such diagnostic tools, than to have a program which selects in unspecified or nondeterministic fashion from among a range of acceptable behaviors; the latter might run faster but would be incompatible with some of those tools. No single approach can be best for all applications, though. In some cases it may be easier for an application to prevent integer overflows from occurring at all, so that any overflow is a genuine bug and trapping it is useful, while in others it may be better to let overflows occur in situations where multiple acceptable ways of processing the code would yield answers that differ in acceptable ways.
It may often be useful to have a program either offer assurances that it is not producing erroneous results as a consequence of integer overflow, or else simply report that it cannot offer such assurances, because some integer overflows occurred and it cannot prove that none of them could have caused results different from those of arithmetically correct computations. Having the program report the exact circumstances where the first integer overflow occurred, even if that overflow was in a computation whose output would otherwise have been ignored, might be useful for figuring out what went wrong. But reducing the cost of overflow checking to the point that it could be left enabled even in production code processing real data would, for many purposes, be more useful than diagnostics that determine whether any integer overflows could occur when the program is fed particular test data.