Ergonomics of wrapping operations


This could be tricky. There are places that we currently rely on numeric literals being known to be some sort of scalar type – even if we don’t know which one. Some sort of suffix (2w) would work fine. We might also be able to lift those restrictions from type inference; I haven’t investigated deeply in a while.


I find this persuasive.


Mm, that’s unfortunate. I’m not too familiar with the internals - what do you mean by “some sort of scalar type”? Would it be possible to make Wrapping<T> a “scalar type”, for “scalar types” T?


Thanks, I found this very interesting.


If we want to add wrapping operators +%, -%, *%, etc. and also want to be complete, then we will also want to add assignment versions of those operators: +%=, -%=, *%=, etc. That’s two different axes of new operators! We’d be getting close to lens territory. (Of course, we could also choose to just forsake the desire for completeness.)


My thinking is that non-wrapping and wrapping math are common enough that it would be really good to support them with nice, ergonomic interfaces. Ones-complement / saturating math is rare enough that, to my mind, it's less important if it isn't quite as ergonomic. At least, that's my thinking, which is likely quite biased by working on Rust-Crypto, with its lots of wrapping math and no ones-complement or saturating math.
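For readers less familiar with the distinction being drawn, a quick illustration of the two overflow behaviors on a `u8`, using only the existing std methods (nothing proposed here):

```rust
fn main() {
    let a: u8 = 250;
    // Wrapping: 250 + 10 = 260, which wraps modulo 256 down to 4.
    println!("{}", a.wrapping_add(10)); // 4
    // Saturating: clamps at the type's maximum instead of wrapping.
    println!("{}", a.saturating_add(10)); // 255
}
```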


I shared this point of view until I tried to convert Rust-Crypto to use Wrapping - which is why I didn't contribute to the discussions regarding creating Wrapping: I agreed, so I had nothing to add. Doing the work to update Rust-Crypto, however, changed my opinion. It took a while - I tried very hard to make Wrapping work and I couldn't. A big part of it not working was all of the ergonomic issues that have been described here. Fixing those ergonomic issues would go a long way toward making Wrapping more usable, although it's not clear to me that all of the issues are fixable. Even with all of the ergonomic issues fixed, though, I'm still uncomfortable with using a newtype to change the fundamental behavior of "+". When two additions appear next to each other, you would expect them to behave the same way; however, if one operand is a Wrapping and the other is a bare type, they won't. I think a dedicated operator for wrapping math makes it much clearer what is going on.

Let's say we fixed all of the ergonomic problems and also had wrapping ops like Swift - i.e., "&+" means "wrapping add". In that case, if Wrapping were defined to only permit the wrapping ops, that would be pretty interesting. Then this code would be fine:

let mut a = Wrapping(0u32);
a &+= 4;

But this would be an error

let mut a = Wrapping(0u32);
a += 4; // Error - trying to use non-wrapping ops on a Wrapping type

What I’m concerned about, without dedicated operations, is code like this:

b += 4;

Is that wrapping or non-wrapping? I can’t tell without going to track down the (possibly type-inferred) declaration. So, Wrapping to make sure that you use the right operations sounds pretty interesting to me. Outside of the existing ergonomic issues, though, what I’m also concerned about is how Wrapping changes the operation being done without making it clear at the place where the operation is used.
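For contrast with the hypothetical `&+` operators above, here is what the existing Wrapping newtype requires today (ordinary std code, not a proposal):

```rust
use std::num::Wrapping;

fn main() {
    let mut a = Wrapping(u32::MAX);
    // `+=` is permitted, but the right-hand side must be wrapped too:
    a += Wrapping(4); // the same addition on a bare u32 would panic in debug builds
    println!("{}", a.0); // 3
}
```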


Is there any particular reason (aside from Self == Output not being possible) that Rust doesn’t just do this:

trait Op<RHS> {
    type Output;
    fn op(self, rhs: RHS) -> Self::Output;
    fn op_assign(&mut self, rhs: RHS) where Self: Copy, Self == Output {
        let r = self.op(rhs);
        *self = r;
    }
}

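As it happens, the `Self == Output` constraint can be approximated in today's Rust with an associated-type equality bound (`Op<RHS, Output = Self>`). A compilable sketch, with `W` as a stand-in newtype:

```rust
trait Op<RHS> {
    type Output;
    fn op(self, rhs: RHS) -> Self::Output;

    // Default assignment form, usable whenever Output is Self.
    fn op_assign(&mut self, rhs: RHS)
    where
        Self: Copy + Op<RHS, Output = Self>,
    {
        *self = (*self).op(rhs);
    }
}

#[derive(Copy, Clone, Debug, PartialEq)]
struct W(u32);

impl Op<u32> for W {
    type Output = W;
    fn op(self, rhs: u32) -> W {
        W(self.0.wrapping_add(rhs))
    }
}

fn main() {
    let mut a = W(u32::MAX);
    a.op_assign(1); // wraps around
    println!("{:?}", a); // W(0)
}
```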

That’s a general tradeoff of monomorphic versus ad-hoc polymorphic (trait-based) code, though, not specific to +/+=.


First, thank you for taking the argument seriously.

I think that whether the change to the behavior of “+” should be considered fundamental is domain-dependent. I think that, for crypto, wrapping vs. overflowing + seems to be a fundamental distinction, and you’d want a way to have both operations available at the same time with different names (design idea for that is below): crypto algorithms will usually have to consider the full field of available values, so that behavior on overflow (including whether overflow is allowed) is a core part of crypto design consideration. But for a 16-bit current-time-in-milliseconds, we’d interpret a + b as “what will the time be b milliseconds after time a?” There is, I think, little-to-no fundamental difference between that and, on Unix, time() + 10, even though time()-values on Unix should use checked semantics instead of wrapping. (There is, however, a semantic distinction between time_t (for the LHS) and time_diff_t (for the RHS).)
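The millisecond-timer case in concrete terms (a plain std sketch):

```rust
fn main() {
    // "What will the 16-bit millisecond clock read 10 ms after time `now`?"
    let now: u16 = 65_530;
    let later = now.wrapping_add(10);
    println!("{}", later); // 4: the clock rolled over, as intended
}
```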

[An example of making the semantics of numerical operations explicit. Using this would, I'm sure, require a better solution to the casting problem I mentioned earlier. But this idea makes it even more explicit that overflowing vs. wrapping has been considered than a + b vs. a.wrapping_add(b) does.]

struct CryptoNum<T: Int>(T);

trait CryptoAdd {
  fn overflowing_add(self, rhs: Self) -> Self;
  fn wrapping_add(self, rhs: Self) -> Self;
}

impl<T: Int> CryptoAdd for CryptoNum<T> { ... }
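Since the old `Int` trait no longer exists in std, a concrete, runnable version of this sketch might look like the following (my hypothetical filling-in, with `overflowing_add` read as "overflow is an error"):

```rust
#[derive(Copy, Clone, Debug, PartialEq)]
struct CryptoNum(u32); // concrete u32 stand-in for the generic CryptoNum<T: Int>

trait CryptoAdd {
    fn overflowing_add(self, rhs: Self) -> Self;
    fn wrapping_add(self, rhs: Self) -> Self;
}

impl CryptoAdd for CryptoNum {
    // Here "overflowing" means overflow is disallowed: panic if it happens.
    fn overflowing_add(self, rhs: Self) -> Self {
        CryptoNum(self.0.checked_add(rhs.0).expect("overflow"))
    }
    // Overflow wraps modulo 2^32.
    fn wrapping_add(self, rhs: Self) -> Self {
        CryptoNum(self.0.wrapping_add(rhs.0))
    }
}

fn main() {
    let a = CryptoNum(u32::MAX);
    let b = CryptoNum(1);
    println!("{:?}", CryptoAdd::wrapping_add(a, b)); // CryptoNum(0)
}
```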


One solution, which I don’t think anyone’s mentioned yet, is using a syntax extension. We can expose a macro (let’s say, wrapping!()) which desugars arithmetic operations to their wrapping counterparts.

For example, this code:

wrapping! {
    let x = y + z * w;
}

would expand to

    use std::num::wrapping::WrappingOps;
    let x = y.wrapping_add(z.wrapping_mul(w));

This is a simple AST transform, and can be implemented as a library.

The only downside I see is that syntax extensions aren’t stable yet, but we can work around this issue by implementing it in libsyntax itself.
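For comparison, here is how far plain macro_rules! gets without reproducing the expression grammar. This toy version (my sketch, not the proposed syntax extension) only handles a single binary operator:

```rust
// Rewrites exactly one binary op into its wrapping std method.
macro_rules! wrapping {
    ($a:tt + $b:tt) => { $a.wrapping_add($b) };
    ($a:tt - $b:tt) => { $a.wrapping_sub($b) };
    ($a:tt * $b:tt) => { $a.wrapping_mul($b) };
}

fn main() {
    let y: u8 = 200;
    let z: u8 = 100;
    println!("{}", wrapping!(y + z)); // 44: (200 + 100) mod 256
    // Nested expressions like `y + z * w` would require matching the full
    // expression grammar, which is the objection raised later in the thread.
}
```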


Perhaps trying to do this through the type system is overcomplicating the problem. This is really a numerical problem, not a linguistic problem. What you want is to reduce a numerical result by some modulus. The “%” operator is ready and willing to do that job. That’s well-understood and already part of the language. It’s just a performance issue to do it that way.

So just add an optimization for the "%" operator on unsigned operands where the modulus is a constant power of 2, and especially for 2^8, 2^16, 2^32, and 2^64, which, of course, correspond to the word sizes for 8-, 16-, 32-, and 64-bit unsigned arithmetic. No new syntax. It just works. Thank you.


@John_Nagle there are three problems with that: it requires carrying extra precision (including potentially negative numbers when doing unsigned arithmetic), it requires deferring the overflow check, and % isn’t really a “modulus” operator.


I tend to disagree here: this is not about the possibility of wrapping operations, but about their ergonomics. Requiring users to write (a + b) % 4294967296 (which would either require a hopefully-optimizable cast to u64/i64 or be outright illegal) is highly unergonomic.
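Written out concretely, the pattern under discussion, with the widening cast, and its equivalence to the existing wrapping_add:

```rust
fn main() {
    let a: u32 = 4_000_000_000;
    let b: u32 = 1_000_000_000;
    // The "% modulus" formulation needs u64 so that a + b cannot overflow:
    let via_mod = ((a as u64 + b as u64) % (1u64 << 32)) as u32;
    assert_eq!(via_mod, a.wrapping_add(b));
    println!("{}", via_mod); // 705032704
}
```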

That said, I suspect apart from security-related work, most people will either use wrapping operations or checked operations throughout their code, with scarce occurrences of the other way. I for one would not use wrapping operations unless I explicitly need them.


Modular arithmetic at word sizes is more of a machine artifact than a desirable behavior. There are a few common use cases: TCP sequence numbers, checksums, cryptographic algorithms, and big-number arithmetic are the classic ones.

The semantics of modular subtraction can be expressed as (a + k - b) % k, where k is the maximum unsigned value + 1. This is optimizable, numerically unambiguous, and independent of the machine architecture. (Admittedly you're not likely to encounter a 36-bit CPU any more, although Unisys still makes some 36-bit and 48-bit machines.)
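A concrete check of the (a + k - b) % k formulation for u32, sketched with a widening u64 intermediate so k itself is representable:

```rust
fn main() {
    let a: u32 = 3;
    let b: u32 = 10;
    let k: u64 = 1u64 << 32; // maximum u32 value + 1
    // (a + k - b) never goes negative, so this stays in unsigned arithmetic.
    let via_mod = ((a as u64 + k - b as u64) % k) as u32;
    assert_eq!(via_mod, a.wrapping_sub(b));
    println!("{}", via_mod); // 4294967289
}
```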


I’ve implemented this solution here:

It handles binary + - * correctly, but doesn’t deal with unary minus (#1). That problem could be fixed by re-introducing the WrappingOps trait, or at least adding a wrapping_neg method.

EDIT: I just realized we can translate -x to x.wrapping_mul(-1) instead. This side-steps the ambiguity problem because the literal is now on the right-hand side. I’ll try this solution tomorrow.

EDIT #2: Since -1 is no longer a valid unsigned integer, I switched to x.wrapping_mul(!0) instead which has an identical bit pattern.
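The !0 trick can be checked directly: for an unsigned type, !0 has all bits set (i.e., it represents -1 modulo 2^n), so multiplying by it is the same as wrapping negation:

```rust
fn main() {
    let x: u32 = 5;
    // !0u32 == u32::MAX, which is -1 modulo 2^32.
    assert_eq!(x.wrapping_mul(!0), x.wrapping_neg());
    println!("{}", x.wrapping_mul(!0)); // 4294967291
}
```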


I certainly considered macros – they offer a similar trade-off to #[wrapping]. One concern I had is that you wind up reproducing the expr grammar in your macro if you want to support all kinds of expressions (e.g., (x + y) + z and so forth).


Ah, I see you used a syntax extension rather than macro-rules. That is also an option but of course not currently stable. =)


Hey there, I notice this is listed as “ideas (deprecated)”, is this because the discussion is old or is there updated information about the topic? I for one am quite interested in having better ergonomics for wrapping arithmetic. The proposal I like most in this thread is the addition of sugared operators a la Swift, as it seems like it would be the least-intrusive way to improve the situation.

With the Wrapping type as it is now, I feel like I should just be able to replace all of the i32s on my struct with Wrapping<i32> and bang, now all my arithmetic wraps and it's no problem. But of course, since that requires all of the code that interacts with the struct to also use Wrapping types, this is pretty painful.
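A small sketch of that friction, using a hypothetical emulator-style struct: the wrapping behavior works fine inside, but every boundary has to wrap and unwrap:

```rust
use std::num::Wrapping;

struct Cpu {
    pc: Wrapping<u16>, // program counter that should wrap
}

fn main() {
    let mut cpu = Cpu { pc: Wrapping(0xFFFF) };
    cpu.pc += Wrapping(1); // wraps to 0, as an emulator wants
    // ...but every caller must now unwrap at the boundary:
    let addr: u16 = cpu.pc.0;
    println!("{:#06x}", addr); // 0x0000
}
```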

Like the crypto and emulator projects mentioned above, I also have a project (an emulator) that relies on wrapping arithmetic to function properly across a fairly large piece of code. Currently I simply ignore the problem and only run release builds, and clearly this is not ideal. I've started to fix it, but it's pretty painful to work with the wrapping stuff as it is now. The best option I have currently is to follow @DaGenix's example and replace all my arithmetic with wrapping_add etc. calls, which will work, but makes the code much less readable IMHO.

Is this discussion simply over, shelved, or what? I’d really like to hear what the current thoughts are on the matter.


The ideas category itself is deprecated, not necessarily this topic.