Unsigned integer type inference


This will generate the first error in the first post.


Oh, right :slight_smile:



You’ve got it mixed up (unless C and C++ differ on this point; I checked the C++ spec, not the C one, but I wouldn’t expect them to differ)!

C++ requires that signed-to-unsigned conversion is done modulo 2^N (where N is the bit width of the destination type), like the rest of unsigned arithmetic, i.e. `(unsigned)-1` is equivalent to `0u - 1u`.

On the other hand, bitwise negation of a signed value is implementation-defined: if signed values are represented in any way other than two’s complement, `~0 != -1`. But the signed → unsigned conversion is defined in terms of numeric values, not bit patterns. Therefore `(unsigned)~0` will not yield the maximal unsigned value on platforms whose representation isn’t two’s complement.


Thanks for the clarification.

I guess it makes sense, given that the C/C++ specs were written (long ago) to support numeric representations other than two’s complement. I hadn’t taken that into account.

Of course, nowadays this is all moot: exotic machines with other representations are effectively extinct, all current hardware is two’s complement, and Rust (like other newer languages) is explicitly two’s complement. Indeed, C++20 and C23 now mandate two’s complement for signed integers.

Rust already rejects `let a: u32 = -1;` outright (it’s a compile error, since unary `-` isn’t defined for `u32`), so everything is consistent. :smiley: