I'm happy that we're taking these steps towards better-is-better in the final stretch.
With regards to int/uint, see my comment on @CloudiDust's RFC. I think the other two issues were the more important ones, but I hope this decision will also be reconsidered in light of the newly suggested alternatives intx/uintx or intp/uintp, which address some of the concerns (namely unfamiliarity) with the previously suggested names.
With regards to the definedness of overflow, please read the RFC and the accompanying discussion thread. I tried to make it painfully clear that this is not intended to allow the compiler to optimize based on it. That would be crazy, since the whole point is to increase correctness and reliability, not decrease it! The fact that LLVM's undef cannot be used to represent an overflowing value is explicitly noted in the RFC. What @lifthrasiir wrote is basically correct, except that overflow checks are not an essential part of memory safety (otherwise current Rust would be memory-unsafe!); we have a lot of liberty in what knobs we offer end-users to control where and when run-time checks are enabled (where the only relevant consideration should be performance). The decision in this thread's OP is to use a debug/release build distinction, which I'm fine with, at least initially; later we can add more fine-grained options like scoped attributes.
More concretely, I think the semantics of overflow should be either to signal an error somehow, or to lower directly to the platform's native hardware instructions (which usually means two's complement wraparound these days). There should also be room for things like delayed exceptions and "as infinitely ranged" semantics, i.e. what Ada does (see the RFC).
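To make that concrete, here's a rough sketch of the two lowerings under the debug/release distinction, written in terms of the existing checked_add/wrapping_add methods; the function name and the panic message are just for illustration, not what the compiler would actually emit:

```rust
// Illustrative only: the "signal an error" lowering in debug builds,
// and the "native two's complement" lowering in release builds.
fn add_with_overflow_semantics(a: i32, b: i32) -> i32 {
    if cfg!(debug_assertions) {
        // Signal an error somehow: trap on overflow.
        a.checked_add(b).expect("attempt to add with overflow")
    } else {
        // Lower to the platform's native instructions, i.e.
        // two's complement wraparound on current hardware.
        a.wrapping_add(b)
    }
}

fn main() {
    // Panics in a debug build, prints i32::MIN in a release build.
    println!("{}", add_with_overflow_semantics(i32::MAX, 1));
}
```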
Aren't you concerned about de facto lock-in for overflow?
There is certainly a concern that we will not be able to make overflow stricter without breaking code in the wild. We believe that having overflow checking be mandatory in debug builds should at least mitigate that risk …
I think another fact which will work in our favor is simply that wraparound is so rarely the desired semantics (again, basically only for hashes and checksums), so it's unlikely that people will end up relying on it for this reason alone. The only other situation where it's useful is exploiting the associativity of addition and subtraction (i.e. one operation overflows, but the next one "brings it back in range"); this is a more legitimate worry, but hopefully something that could be accommodated longer term with some AIR-like advancements.
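As an illustration of that pattern (toy numbers of my own, not from the RFC), here's a case where the intermediate sum overflows a u8 but the final result is back in range, so two's-complement wraparound happens to give the right answer while a checked build would trap on the first addition:

```rust
fn main() {
    let a: u8 = 200;
    let b: u8 = 100;
    let c: u8 = 150;

    // Mathematically, 200 + 100 - 150 = 150, which fits in a u8,
    // but the intermediate sum 300 does not.
    // With checks enabled, `a + b` traps; under wraparound the
    // final result still comes out right:
    let wrapped = a.wrapping_add(b).wrapping_sub(c);
    assert_eq!(wrapped, 150);
    println!("{}", wrapped);
}
```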
Finally, I just want to note again that while all of the emphasis tends to be put on overflow, I suspect that the more important benefit will actually be around underflow of unsigned types. As I noted in a comment on the RFC, C++ style guides tend to recommend int even for may-only-be-positive values, simply because you can at least add asserts, while underflow of unsigned types is silent and undetectable. But with this, if we use an unsigned type, all of the shouldn't-become-negative asserts in all of the relevant places get added for us automatically! So using unsigned types suddenly becomes meaningful again.
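A small sketch of how that plays out (the remaining() helper is hypothetical, purely for illustration): with underflow checks on unsigned subtraction, the bug is caught at the subtraction itself rather than surfacing later as a huge bogus length:

```rust
// Hypothetical helper: how many items are left after consuming some.
fn remaining(len: usize, consumed: usize) -> usize {
    // If consumed > len, this is a bug. With underflow checks enabled
    // (e.g. in a debug build), the subtraction itself traps -- exactly
    // the "shouldn't become negative" assert we'd otherwise write by hand.
    len - consumed
}

fn main() {
    println!("{}", remaining(10, 3)); // 7
    // remaining(3, 10) would trap in a debug build instead of silently
    // yielding a huge value (18446744073709551609 on a 64-bit target).
}
```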
One more thing: why was a WrappedInt type considered a better alternative than a WrappingOps trait implemented by the built-in integer types? (For reasoning in favor of the latter, see RFC.)
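For reference, here's a minimal, purely illustrative sketch of what I mean by the latter: the trait is implemented directly for the built-in integer types, so code that genuinely wants wraparound (hashes, checksums) just calls the methods, and generic code can abstract over them, without converting into and out of a separate WrappedInt newtype:

```rust
// Illustrative sketch of the trait-based alternative.
trait WrappingOps {
    fn wrapping_add(self, rhs: Self) -> Self;
    fn wrapping_sub(self, rhs: Self) -> Self;
    fn wrapping_mul(self, rhs: Self) -> Self;
}

// Implement it for the built-in integer types by deferring to the
// existing overflowing_* methods and discarding the overflow flag.
macro_rules! impl_wrapping_ops {
    ($($t:ty),*) => {$(
        impl WrappingOps for $t {
            fn wrapping_add(self, rhs: Self) -> Self { self.overflowing_add(rhs).0 }
            fn wrapping_sub(self, rhs: Self) -> Self { self.overflowing_sub(rhs).0 }
            fn wrapping_mul(self, rhs: Self) -> Self { self.overflowing_mul(rhs).0 }
        }
    )*};
}

impl_wrapping_ops!(u8, u16, u32, u64, usize, i8, i16, i32, i64, isize);

// Hash-like code that genuinely wants wraparound can be generic over it:
fn sum_wrapping<T: WrappingOps + Copy>(data: &[T], init: T) -> T {
    data.iter().fold(init, |acc, &x| acc.wrapping_add(x))
}

fn main() {
    let bytes: [u8; 3] = [250, 10, 5];
    // (250 + 10 + 5) mod 256 = 9: wraps past 255 instead of trapping.
    println!("{}", sum_wrapping(&bytes, 0u8));
}
```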