
# Unsigned integer type inference

**jan_hudec**#23

Aside:

You've got it mixed up (unless C and C++ differ on this point; I checked the C++ spec, not the C one, but I wouldn't expect them to differ)!

C++ requires that signed-to-unsigned conversion is done modulo 2^n (n being the bit width of the destination type), like the rest of unsigned arithmetic, i.e. `(unsigned)-1` is equivalent to `0u - 1u`.

On the other hand, bitwise negation of a signed value is implementation-defined. If signed values are represented in any way other than two's complement, `~0 != -1`. But the signed → unsigned conversion is defined in terms of numeric values, *not* bit patterns. Therefore `(unsigned)~0` will *not* result in the maximal value for `unsigned` on platforms with a representation other than two's complement.

**yigal100**#24

Thanks for the clarification.

I guess it makes sense, given that the C/C++ specs were written (long ago) to support numeric representations other than two's complement. I hadn't taken that into account.

Of course, nowadays this is all moot: exotic machines with other representations are extinct, so all current hardware is two's complement, and Rust (and other upcoming languages) are all explicitly two's complement.

Rust already warns about `let a: u32 = -1;`, so everything is consistent.
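For comparison, a minimal Rust sketch of that point: `as` casts between integer types are defined in two's-complement terms, so the result is guaranteed on every platform.

```rust
fn main() {
    // `as` casts between integers wrap modulo 2^n in Rust,
    // so -1i32 always becomes the maximal u32 value.
    assert_eq!((-1i32) as u32, u32::MAX);
    // Bitwise NOT on an unsigned value also yields all ones.
    assert_eq!(!0u32, u32::MAX);
    println!("ok");
}
```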