Forking from the "Restarting the int/uint Discussion" thread, because I think these ideas deserve their own. All of them are in some way about treating code that mixes different numeric types more liberally, and (for most of them) about thinking of the various fixed-size integer types as restricted subsets (intervals) of the full mathematical integers, which are thereby related in various ways, instead of as independent domains which must be explicitly cast between.
Implicit widening is the idea that an integer type `A` should be automatically coerced to another integer type `B` if the conversion is "lossless": if `B` can represent all of the values which `A` can. /u/sivadeilra gave a persuasive argument for this on reddit. Concretely, this would mean that the following automatic coercions would be added:
- `i8` -> `i16`
- `i8` -> `i32`
- `i8` -> `i64`
- `i16` -> `i32`
- `i16` -> `i64`
- `i32` -> `i64`
- `u8` -> `u16`
- `u8` -> `i16`
- `u8` -> `u32`
- `u8` -> `i32`
- `u8` -> `u64`
- `u8` -> `i64`
- `u16` -> `u32`
- `u16` -> `i32`
- `u16` -> `u64`
- `u16` -> `i64`
- `u32` -> `u64`
- `u32` -> `i64`
A delicate question is how to handle integers of platform-dependent size, namely the current `int` and `uint` (which will hopefully, as a separate matter, be renamed). Having the legality of those conversions depend on the target platform seems undesirable (not portable); doing automatic, potentially lossy conversions does likewise; and so requiring explicit casts seems like it might still be the least-bad option in this case. (If they have some guaranteed minimum size, then some conversions could be provided on that basis.)
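For illustration, today's Rust already expresses exactly these lossless relationships, just explicitly, through the standard library's `From`/`Into` impls between integer types; implicit widening would make the calls below unnecessary. A minimal sketch:

```rust
fn main() {
    let a: i8 = -5;
    // Lossless widenings: every i8 value fits in i16 and i64.
    let b: i16 = a.into();
    let c = i64::from(a);
    assert_eq!(b, -5);
    assert_eq!(c, -5);

    // Cross-signedness widening is also lossless when the target's
    // range covers the source's: i16 covers all of u8's 0..=255.
    let u: u8 = 200;
    let s: i16 = u.into();
    assert_eq!(s, 200);
}
```

Notably, the set of `From` impls the standard library provides between integer primitives matches the "lossless only" rule in the list above.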
What I’m going to call polymorphic indexing is the idea that arrays (or collections generally) could be indexed by any integer type, and not just `uint`. This was suggested by @cmr on reddit. The thinking here, essentially, is that runtime index-out-of-bounds errors are going to be possible no matter what type you use, so there’s no reason why negative values of signed types, or values of types larger than the whole address space, couldn’t just lead to the same result.
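As a sketch of that reasoning: user code can model polymorphic indexing with a hypothetical wrapper type (`PolyVec` below is my own name, not anything proposed), where an out-of-range signed index, including a negative one, produces the same out-of-bounds panic as a too-large unsigned one:

```rust
use std::ops::Index;

// Hypothetical wrapper illustrating polymorphic indexing for one
// extra index type (i64); a real proposal would cover all of them.
struct PolyVec<T>(Vec<T>);

impl<T> Index<i64> for PolyVec<T> {
    type Output = T;
    fn index(&self, i: i64) -> &T {
        // A negative index can never be in bounds, so it funnels into
        // the same failure path as an overly large unsigned index.
        let u = usize::try_from(i).expect("index out of bounds");
        &self.0[u]
    }
}

fn main() {
    let v = PolyVec(vec![10, 20, 30]);
    assert_eq!(v[2i64], 30);
}
```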
Integer promotion is a C idea, (essentially) that the result of arithmetic operations between different types has the type of the largest operand type. So, for example, adding an `i16` to an `i32` would be allowed, and result in an `i32`. (C’s actual rules are more gnarly, e.g. everything smaller than an `int` is promoted to an `int`.) Building on RFC 439, we could do this straightforwardly, if we wanted to, by simply providing `impl Add<i16> for i32`, and so on, between each relevant pair of types. This is not primarily a way to reduce overflow bugs, but rather, a bit like lifetime elision, just a useful shorthand to avoid having to write so many casts. A delicate question here is whether/how to handle promotion among signed, unsigned, and/or floating point types. (An argument could also be made that multiplication should promote to the next largest type; but this can’t work past 64 bits, and it’s not possible to prevent overflows in this manner in general, so it’s not clear that this would be a good idea on balance.)
Finally, along similar lines, what I’m going to call heterogeneous comparisons could also be enabled by RFC 439. This would be accomplished by having `impl`s of `{Partial,}{Eq,Ord}` between each pair of integer types, and the result of the comparisons would be as if both operands had been cast to an unbounded integer type beforehand. In other words, you could write `n < m` where `n: i32` and `m: u8`, and it would “do the right thing”: if `n` is negative, the result is `true`. (I got this idea from someone on the internet, as well, but I can’t remember from whom, or where.)