That would make whether an overflow error is emitted depend on optimizations. This is going to be very confusing for programmers and will make it really easy for different compiler versions or optimization levels to break programs.
If one is using overflow checking to determine whether a program would run correctly on all platforms, having such checks optimized out would make it less useful. If, however, the purpose of overflow checks is to guard against the possibility of a program producing seemingly-valid-but-wrong results as a consequence of overflow, the fact that a program is able to produce correct results on one system in circumstances that would exceed the abilities of another isn't a problem.
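To make the first objection concrete, here's a minimal sketch (the function and its name are mine, purely illustrative) of the kind of situation being debated: whether an overflow is ever observed can hinge on whether the optimizer simplifies the expression away before it is evaluated.

/* On the abstract machine, x * 1000 overflows for large x; an optimizer
 * that folds (x * 1000) / 1000 down to x never performs the multiplication,
 * so a checker that runs after such folding has nothing to report.  Whether
 * a diagnostic appears thus depends on optimization level, not on the
 * source alone. */
int scale_round_trip(int x)
{
    return (x * 1000) / 1000;
}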
Besides, I'd find any confusion caused by a program working correctly in cases where it wasn't required to do so trifling compared with the confusion caused by optimizers like the one in a certain goofy C compiler, which will use calls to a function like:
unsigned mul_mod_65536(unsigned short x, unsigned short y)
{ return (x*y) & 0xFFFFu; }
to infer that the value of x within the caller will never exceed 0x7FFFFFFF/y (because x and y promote to signed int, the optimizer assumes their product cannot overflow). Note that the authors of the Standard have indicated that they would expect commonplace implementations to process functions like the above in a fashion equivalent to using unsigned math whether or not the Standard compelled them to do so. That implies to me that the reason such treatment isn't mandated is that the Committee thought the only implementations that would do otherwise would be those where unsigned math would be expensive, which hasn't been the case for decades.
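To sketch how that inference could bite, here is a hypothetical caller of my own devising (not code from any compiler's documentation): once the body is inlined, an optimizer that treats signed overflow as impossible may conclude that the range check below can never fire, and delete it.

unsigned mul_mod_65536(unsigned short x, unsigned short y)
{ return (x*y) & 0xFFFFu; }          /* definition repeated so this compiles alone */

unsigned char table[32769];          /* valid indices: 0..32768 */

unsigned lookup(unsigned short x)
{
    unsigned hash = mul_mod_65536(x, 65535);
    /* With y == 65535, the inference says x <= 0x7FFFFFFF/65535, i.e.
     * x <= 32768, so the guard below may be treated as unreachable and
     * removed -- turning an out-of-range x into an out-of-bounds read. */
    if (x > 32768)
        return 0;
    return table[x] + hash;
}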
Having a program report precisely when and where every overflow occurs is expensive. Having it report which calculations cannot be regarded as reliable would be much cheaper. The cost difference between precise overflow trapping and lazy (but still reliable) overflow detection is much larger than the cost difference between such lazy detection and the "overflow is allowed to do anything" semantics used by some C compilers.
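As a rough picture of what "lazy but still reliable" detection could look like (a sketch of my own, using the GCC/Clang checked-arithmetic builtins, not anything from a particular proposal): instead of trapping at the precise instruction that overflows, accumulate a sticky flag and report once, at the end, that the result cannot be trusted.

#include <stdbool.h>
#include <stddef.h>

/* Returns false if any step overflowed; the caller learns only that the
 * result is unreliable, not which operation went wrong or exactly when. */
bool checked_dot(const long long *a, const long long *b, size_t n,
                 long long *out)
{
    long long acc = 0;
    bool bad = false;                       /* sticky "don't trust this" flag */
    for (size_t i = 0; i < n; i++)
    {
        long long prod;
        bad |= __builtin_mul_overflow(a[i], b[i], &prod);
        bad |= __builtin_add_overflow(acc, prod, &acc);
    }
    *out = acc;
    return !bad;
}

Since it doesn't matter which operation set the flag or when, an optimizer retains far more freedom to reorder and combine operations than precise trapping would allow.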