One thing that was deliberately postponed in the overflow RFC was the question of how to make wrapping operations more ergonomic. The current RFC simply includes methods like `wrapping_add` and so forth that can be applied to integers. For example, one can write:
```rust
fn something(y: i32, z: i32) {
    let x = y.wrapping_add(z);
}
```
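For reference, a quick sketch of what the `wrapping_add` family actually does on overflow (these are the real inherent methods on the integer types):

```rust
fn main() {
    // On overflow, wrapping_add wraps around modulo 2^N
    // instead of panicking (as a checked debug build would).
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    assert_eq!(200u8.wrapping_add(100), 44); // 300 mod 256
}
```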
This is fine for a small number of operations, but it can be quite painful when done at scale. The RFC also added a `Wrapping` type:
```rust
use std::num::Wrapping;

fn something(y: i32, z: i32) {
    let x = Wrapping(y) + Wrapping(z);
}
```
However, this option too suffers from some ergonomic downsides:
- `+=` doesn't work yet on overloaded operations. (But see RFC 953, which I hope we can approve relatively soon and have available in Rust 1.1.)
- `Wrapping` doesn't (currently, at least) interoperate well with integer literals, so you can't write `x + 1` where `x` has type `Wrapping<i32>`; you have to write `x + Wrapping(1)`.
- Some people feel that "new types for modulo" isn't the right approach, because the fact that wrapping arithmetic is used is more a property of the calculation being done than of the types flowing in or out. (For example, having a hash function take `&mut [Wrapping<u8>]` instead of `&mut [u8]` feels like exposing implementation details.) Of course, others feel the opposite. I have some sympathy with both sides.
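To make the literal problem concrete, here is what compiles with today's `std::num::Wrapping`: every operand, including the literal, must be wrapped by hand.

```rust
use std::num::Wrapping;

fn main() {
    let x = Wrapping(i32::MAX);
    // `x + 1` does not compile; the literal must be wrapped too:
    let y = x + Wrapping(1);
    assert_eq!(y, Wrapping(i32::MIN));
}
```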
Design space
It's clear we can do better. I see three options.
A. Nothing major; incremental improvements to `Wrapping`.

We adopt RFC 953 so that `x += y` works. We add impls that allow a `Wrapping<T>` to be added to a plain `T`, so that `x + 1` works.
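Such a mixed impl might look like the following sketch. It uses a local stand-in newtype rather than `std::num::Wrapping`, since coherence rules prevent adding this impl to the std type from outside the standard library:

```rust
use std::ops::Add;

// Local stand-in for std::num::Wrapping, for illustration only.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Wrapping(i32);

impl Add for Wrapping {
    type Output = Wrapping;
    fn add(self, rhs: Wrapping) -> Wrapping {
        Wrapping(self.0.wrapping_add(rhs.0))
    }
}

// The impl Option A proposes: Wrapping + plain i32.
impl Add<i32> for Wrapping {
    type Output = Wrapping;
    fn add(self, rhs: i32) -> Wrapping {
        Wrapping(self.0.wrapping_add(rhs))
    }
}

fn main() {
    let x = Wrapping(i32::MAX);
    assert_eq!(x + 1, Wrapping(i32::MIN)); // `x + 1` now works
}
```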
B. Swift-like operators.

Swift has introduced a parallel set of operators, `&+` and so forth, to indicate modulo operations. We could do this too, but it is not clear to me that this use case is sufficiently large to warrant a complete parallel set of operators -- this is a lot of operator real estate. For example, if we were going to add variations on `+`, maybe we would want variations that target SIMD or other collective operations, etc.
C. Scoped wrapping semantics of some kind.
The original RFC proposed a kind of lint-like scoping system that allowed overflow checks to be controlled at a fine grain. It intentionally did not, however, allow overflow checks to be completely disabled in this way. The reasons against doing so were well summarized by @rkjnsn (but do read the whole thing):
Using scoped annotations to change the semantic meaning of operations seems very, very wrong. As you mention, one would have to search all enclosing contexts to determine whether operations were meant to be wrapping, and it still wouldn't be clear if it were for performance reasons (overflow is still incorrect) or algorithmic reasons (overflow is expected). Combined with the other points you make, I would find using scoped annotations to determine whether wrapping is desired to be completely unreasonable.
Considerations
Some other considerations to keep in mind no matter what solution:
- What happens with overloaded `+`?
A specific proposal
One option I've been turning over in my head is a variant on Option C. The idea would be to permit a `#[wrapping]` annotation to be placed on blocks or fns (but not modules or crates). When placed on a fn, it would affect the code in that fn, but not the code in nested fns. It would change the semantics of potentially overflowing operations within that block, disabling checks and enabling the "fallback" (wrapping) semantics. It would interact with overloading by changing the `+` operator so that it is connected not to the `Add` trait but rather to a `WrappingAdd` trait (and so on for the other operators). Here is an example of how this might look:
```rust
fn foo(y: i32, z: i32) {
    #[wrapping] {
        let x = y + z;
        ...
    }
}
```
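To illustrate the overloading story, here is a sketch of the hypothetical `WrappingAdd` trait and the desugaring that `+` would get inside a `#[wrapping]` block. The trait is part of this proposal, not an existing std item, and the exact shape shown is my guess, mirroring `std::ops::Add`:

```rust
// Hypothetical trait the proposal would route `+` through inside
// a #[wrapping] block (not an existing std trait).
trait WrappingAdd<Rhs = Self> {
    type Output;
    fn wrapping_add(self, rhs: Rhs) -> Self::Output;
}

impl WrappingAdd for i32 {
    type Output = i32;
    fn wrapping_add(self, rhs: i32) -> i32 {
        // Two's-complement wrap, written out explicitly to avoid a
        // name clash with the inherent i32::wrapping_add in this sketch.
        ((self as i64) + (rhs as i64)) as i32
    }
}

fn foo(y: i32, z: i32) -> i32 {
    // Inside `#[wrapping] { y + z }`, the `+` would desugar to:
    WrappingAdd::wrapping_add(y, z)
}

fn main() {
    assert_eq!(foo(i32::MAX, 1), i32::MIN);
}
```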
The reason I find this approach appealing is that:
- it doesn't require new syntax (which seems like overkill, given the magnitude of this problem);
- particularly for large blocks of code where lots of math is going on, it requires just one annotation, rather than requiring you to convert every operator. It seems less prone to errors where most of the operations use `&+` but a few are accidentally left as plain `+`;
- for individual operations, `y.wrapping_add(z)` remains an option;
- limiting the attribute to fns and blocks means that it should not be hard to decide whether a given operation is potentially overflowing or not.
One downside that is not addressed is the contention that people may use this attribute to identify perf-sensitive bits of code and then turn on overflow checks everywhere else, leading to semantic ambiguity (was that overflow anticipated or not?). This is a risk. It could be addressed by providing a means to control overflow checking in optimized builds -- basically saying "we want checks even when optimized, but not here". This mechanism would not affect checks in debug builds, making it clear that overflow is still not an anticipated result. This mechanism may also be overkill, I'm not sure.
Timeframe
It's not clear to me just how fast we have to act on this. We have a coherent story but also known ergonomic problems. In particular I'd like to make sure we have a sense of the full set of problems we want to address -- that might help inform decisions about how much control to put into the design. For example, I don't know whether we will need the ability to request (and then also disable) overflow checks in optimized code or not. (It would also be useful to have a firm figure on the runtime cost of overflow checking; the current implementation is fairly naive and it's likely we can optimize quite a bit if we choose.) But I'd still like to kick off this conversation so that we get a sense of what ideas are out there.