When developing a library that runs on different CPU architectures, it would be very helpful if programmers could easily specify and track the minimum required integer size while still letting each target use its more efficient native integer size (register width).
Specifically, many programmers reach for a u64 even when a u8 or u16 would do. Later, when that code runs on a smaller CPU, say an 8-bit or 32-bit one to make the point, the software becomes unnecessarily slow because it is processing 64-bit variables even though the value is known to need less space.
Add more integer types:
i8min, u8min, i16min, u16min, etc. The compiler could widen them to the optimal width for the target's alignment and registers. This might even be helpful with f32 and f64: f64 runs very much slower on hardware without 64-bit floating-point support.
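Something like this can be approximated today with cfg-gated type aliases, which may help illustrate the intent. This is only a sketch; the alias name U16Min is my own invention, and a real language feature would of course not require this per-target boilerplate:

```rust
// Hypothetical "at least 16 bits" alias, widened per target.
// A built-in u16min type would do this mapping automatically.
#[cfg(target_pointer_width = "16")]
pub type U16Min = u16; // 16-bit target: native width already fits

#[cfg(target_pointer_width = "32")]
pub type U16Min = u32; // 32-bit target: promote to register width

#[cfg(target_pointer_width = "64")]
pub type U16Min = u64; // 64-bit target: promote to register width

fn sum(values: &[U16Min]) -> U16Min {
    values.iter().copied().sum()
}

fn main() {
    // The caller only promises the values fit in 16 bits;
    // the actual storage width is whatever the target prefers.
    println!("{}", sum(&[1, 2, 3]));
}
```

The library author writes against the 16-bit contract once, and each target gets its fast width without any source changes.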
In debug mode, the same bounds checking could still be applied at the declared minimum size, while release code would run efficiently on both big and small processors.
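The debug-only check could look roughly like the following wrapper, where the stored width is the fast native one but debug builds assert the declared 8-bit range. The name U8Min and the whole design are illustrative, not a real API:

```rust
// Sketch: store at register width for speed, but enforce the
// declared 8-bit minimum range in debug builds only.
#[derive(Clone, Copy, Debug)]
struct U8Min(usize);

impl U8Min {
    fn new(v: usize) -> Self {
        // Compiled out in release builds, so no runtime cost there.
        debug_assert!(v <= u8::MAX as usize, "value exceeds declared 8-bit range");
        U8Min(v)
    }

    fn add(self, other: U8Min) -> U8Min {
        // Re-check the declared range after arithmetic (debug only).
        U8Min::new(self.0 + other.0)
    }
}

fn main() {
    let a = U8Min::new(200);
    let b = U8Min::new(55);
    println!("{}", a.add(b).0); // 255, still within the u8 range
}
```

In a debug build, `U8Min::new(300)` would panic immediately, catching the overflow on the developer's 64-bit machine rather than silently misbehaving later on an 8-bit target.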
It nearly goes without saying that this is most helpful when the library author uses the smallest type possible. This feature would be very important for using Rust on 8-bit CPUs.
This message was heavily edited to reduce confusion: the types were originally named u8arch, for example, which caused some understandable confusion.