I guess the argument would be that implicit widening could make transmute even more dangerous and error-prone than it already is? This also applies to e.g. serde's `visit_u64` versus `visit_u16`. When serializing a `u16`, if number widening is implicit, you can accidentally serialize four times the bytes by calling `visit_u64` instead of `visit_u16`, whereas currently, because it isn't, the compiler will error when you call the wrong method.
Reading this I had an interesting observation: that problem doesn't happen in languages with overloading, as it'd just be `Visit` and the overloading would suppress any implicit widening.
(I'm not sure that's an actionable statement, since I'm not about to propose we add [more] overloading to Rust, but it's a connection between features I hadn't considered related before.)
IMO the fact that it can occur in other situations too actually exacerbates the problem. Sure, there is some inherent risk in transmutation. However, increasing that risk by opening up new opportunities for "sloppiness" (for the lack of a better word) looks bad to me.
I take the issue as being, rather than specific to transmuting, that `u32` has the semantics of a "32-bit array" in addition to the semantics of the mathematical integers ℤ (assuming integer overflow is UB).
Other operations that make no sense as an integer are:

- `leading_zeros`: integers don't have a "most significant bit". `trailing_zeros` can be defined for non-zero integers, however.
- `overflowing_add` etc.: safe overflow reveals the fact that the supposed integer is, in fact, a finite thing.
- `to_be_bytes`: `x.to_be_bytes()[0]` depends on the bit width of `x`. But `to_le_bytes` doesn't depend on bit width if the result is treated as an infinite list.
`u32 -> u64` is sound if `u32 -> ℤ` is sound. My opinion is thus that we shouldn't insert implicit widening at every argument position, because there are incompatible methods.
Regarding "indexing by signed integer", I'm pondering this situation:
```rust
fn get_i128<T>(slice: &[T], index: i128) -> Option<&T> {
    slice.get(index)
}
```
At the moment, this doesn't compile. The naive approach – which is an easy mistake to make – is this:
```rust
fn get_i128<T>(slice: &[T], index: i128) -> Option<&T> {
    slice.get(index as usize)
}
```
This behaves incorrectly, because the integer -2^64 is not a valid index into the slice, but `get_i128(slice, -(1 << 64))` would return `Some` (assuming the slice has nonzero length), because `(-(1 << 64)) as usize` is `0usize`. The correct implementation is
```rust
fn get_i128<T>(slice: &[T], index: i128) -> Option<&T> {
    index.try_into().ok().and_then(|usize_index: usize| slice.get(usize_index))
}
```
which is more verbose and less readable. In this case, it can be shortened to
```rust
fn get_i128<T>(slice: &[T], index: i128) -> Option<&T> {
    slice.get(usize::try_from(index).ok()?)
}
```
but normally the situation of indexing by a signed integer would arise in the middle of a longer function, where this improvement isn't available.
(EDIT: This next thing wasn't quite correct; see CAD97's reply below.) I don't have a specific proposal for how to improve this (I assume it would be a breaking change to make `get()` take a polymorphic argument, and adding a separate method would have issues with discoverability and bloat). I just wanted to make a note of the pitfall.
I guess this is similar to what newpavlov said more concisely earlier. But it's a bummer that adding more `Index` impls wouldn't be able to extend the same protections to `get()`.
`slice::get` is already polymorphic: it takes an argument of type `impl SliceIndex<[T]>`. It's the same trait as is used for the `Index` impl, so extending one is equivalent to extending the other.
Oh, oops! I must have confused it with something else.