Ergonomics of wrapping operations

It fixes all these problems for wrapping operations. While wrapping operations are certainly common, I worry that we "close the door" on other operations that should be implemented, e.g. my example from above: implementing ones'-complement arithmetic for dealing with IP headers.
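As a concrete illustration of the ones'-complement case (my sketch, not from the thread): the Internet checksum (RFC 1071) uses ones'-complement addition, which can be expressed as an ordinary add followed by an end-around carry fold:

```rust
// Ones'-complement 16-bit addition as used by the Internet checksum
// (RFC 1071): add in a wider type, then fold the carry back in.
// For two u16 operands a single fold is enough.
fn ones_complement_add(a: u16, b: u16) -> u16 {
    let sum = a as u32 + b as u32;
    ((sum & 0xFFFF) + (sum >> 16)) as u16
}

fn main() {
    assert_eq!(ones_complement_add(0x0001, 0x0002), 0x0003);
    assert_eq!(ones_complement_add(0xFFFF, 0x0001), 0x0001); // end-around carry
}
```

None of the proposed wrapping operators express this, which is why leaving room for user-defined semantics matters.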

How about allowing arbitrary operators? Define a set of symbols +-/*%^.= that can be combined in any manner to create a new operator. To implement an operator, one needs to define a new trait that works with that operator.

trait BinaryOperator {
    type RHS;
    type Result;
    fn wrapping_add(lhs: Self, rhs: Self::RHS) -> Self::Result;
}
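To make the sketch above concrete, here is what one such per-operator trait could look like in compilable Rust (names like `WrappingAdd` and `Sequence` are invented for illustration; this is not an existing language feature):

```rust
// Hypothetical: if a `+%` operator desugared to a trait, it might look
// like this. `Sequence` is a demo type, e.g. a TCP-style sequence number.
trait WrappingAdd<RHS = Self> {
    type Output;
    fn wrapping_add(self, rhs: RHS) -> Self::Output;
}

struct Sequence(u32);

impl WrappingAdd for Sequence {
    type Output = Sequence;
    fn wrapping_add(self, rhs: Sequence) -> Sequence {
        // delegates to the inherent u32::wrapping_add
        Sequence(self.0.wrapping_add(rhs.0))
    }
}

fn main() {
    let s = Sequence(u32::MAX).wrapping_add(Sequence(1));
    assert_eq!(s.0, 0);
}
```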

I think this is the crux of your argument, but I have to somewhat disagree: a type representing an odometer should never use non-wrapping operations. (To make this more targeted to the domains Rust is interested in, instead of odometer, think TimeInMilliseconds: u16, in which you're interested in the difference between two times, but allow that the time can wrap around pretty quickly.) A type-system is for statically declaring what operations make sense against which types of data, so an odometer-reading type should be prevented at the type-level from using non-wrapping operations. Most numbers should never use the wrapping operations, and the type-system should enforce this, too. Crypto is the special case, in that it should support both types of operations at once, and you've used the correct way of representing that (by naming the operations differently, rather than the types).

I understand that this means there are ergonomic issues with converting types between different arithmetic semantics. Those should be solved. It is, IMO, a general problem that Vec<NewType<T>> cannot be safely-and-explicitly cast to Vec<T>, and there should be a general solution to that problem.
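To illustrate the gap (my sketch): `Wrapping<u32>` and `u32` have the same layout, yet safe code today has to rebuild the whole vector element by element to convert between them.

```rust
use std::num::Wrapping;

// Safe conversion requires an O(n) rebuild, even though the two
// element types are layout-compatible newtypes of each other.
fn unwrap_all(v: Vec<Wrapping<u32>>) -> Vec<u32> {
    v.into_iter().map(|w| w.0).collect()
}

fn main() {
    let v = vec![Wrapping(1u32), Wrapping(2), Wrapping(3)];
    assert_eq!(unwrap_all(v), vec![1, 2, 3]);
}
```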

This could be tricky. There are places that we currently rely on numeric literals being known to be some sort of scalar type -- even if we don't know which one. Some sort of suffix (2w) would work fine. We might also be able to lift those restrictions from type inference; I haven't investigated deeply in a while.

I find this persuasive.

Mm, that’s unfortunate. I’m not too familiar with the internals - what do you mean by “some sort of scalar type”? Would it be possible to make Wrapping<T> a “scalar type”, for “scalar types” T?

Thanks, I found this very interesting.

If we want to add wrapping operators +%, -%, *%, etc. and also want to be complete, then we will also want to add assignment versions of those operators: +%=, -%=, *%=, etc. That’s two different axes of new operators! We’d be getting close to lens territory. (Of course, we could also choose to just forsake the desire for completeness.)

My thinking is that non-wrapping and wrapping math are common enough that it would be really good to support them with nice, ergonomic interfaces. Ones'-complement and saturating math are rare enough that, to my mind, it's less important if they aren't quite as ergonomic. At least, that's my thinking, which is likely very biased by working on Rust-Crypto, with its lots of wrapping math and no ones'-complement or saturating math.

I shared this point of view until I tried to convert Rust-Crypto to use Wrapping, which is why I didn't contribute to the discussions regarding creating Wrapping: I agreed, so I had nothing to add. Doing the work to update Rust-Crypto, however, changed my opinion. It took a while - I tried very hard to make Wrapping work and I couldn't. A big part of it not working was all of the ergonomic issues that have been described here. Fixing those ergonomic issues would go a long way toward making Wrapping more usable, although it's not clear to me that all of the issues are fixable.

Even with all of the ergonomic issues fixed, though, I'm still uncomfortable with using a newtype to change the fundamental behavior of "+". If you see two variables next to each other being added, you would expect the same behavior in both cases. However, if one of them is a Wrapping and the other is a bare type, you won't get it. I think a dedicated operator for wrapping math makes it much clearer what is going on.

Let's say we fixed all of the ergonomic problems and also had wrapping ops like Swift, i.e., "&+" means "wrapping add". If Wrapping were then defined to only permit the wrapping ops, that would be pretty interesting. In that case, this code would be fine:

let mut a = Wrapping(0u32);
a &+= 4;

But this would be an error

let mut a = Wrapping(0u32);
a += 4; // Error - trying to use non-wrapping ops on a Wrapping type

What I'm concerned about, without dedicated operations, is code like this:

b += 4;

Is that wrapping or non-wrapping? I can't tell without tracking down the (possibly type-inferred) declaration. So, using Wrapping to make sure that you use the right operations sounds pretty interesting to me. Setting aside the existing ergonomic issues, though, what also concerns me is how Wrapping changes the operation being done without making that clear at the place where the operation is used.
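The concern can be shown in today's Rust (my sketch): two visually similar `+=` lines with different semantics, distinguishable only by the declarations.

```rust
use std::num::Wrapping;

fn main() {
    let mut a = Wrapping(u32::MAX);
    a += Wrapping(4); // silently wraps around to Wrapping(3)
    assert_eq!(a.0, 3);

    let mut b = u32::MAX;
    // b += 4;            // would panic in a debug build
    b = b.wrapping_add(4); // wrapping intent is visible at the call site
    assert_eq!(b, 3);
}
```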

Is there any particular reason (aside from Self == Output not being possible) that Rust doesn’t just do this:

trait Op<RHS> {
    type Output;
    fn op(self, rhs: RHS) -> Self::Output;
    fn op_assign(&mut self, rhs: RHS) where Self: Copy, Self == Output {
        let r = self.op(rhs);
        *self = r;
    }
}
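For what it's worth, something close to that default method does compile today if the `Self == Output` condition is spelled as an associated-type bound on the method (a sketch; `Meters` is an invented demo type):

```rust
trait Op<RHS> {
    type Output;
    fn op(self, rhs: RHS) -> Self::Output;

    // Default impl, available only when Output is Self and Self is Copy.
    fn op_assign(&mut self, rhs: RHS)
    where
        Self: Copy + Op<RHS, Output = Self>,
    {
        let r = (*self).op(rhs);
        *self = r;
    }
}

#[derive(Copy, Clone, PartialEq, Debug)]
struct Meters(f64);

impl Op<Meters> for Meters {
    type Output = Meters;
    fn op(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    let mut m = Meters(1.0);
    m.op_assign(Meters(2.0)); // uses the default method
    assert_eq!(m, Meters(3.0));
}
```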

That's a general tradeoff of monomorphic versus ad-hoc polymorphic (trait-based) code, though, not specific to +/+=.

First, thank you for taking the argument seriously.

I think that whether the change to the behavior of "+" should be considered fundamental is domain-dependent. I think that, for crypto, wrapping vs. overflowing + seems to be a fundamental distinction, and you'd want a way to have both operations available at the same time with different names (design idea for that is below): crypto algorithms will usually have to consider the full field of available values, so that behavior on overflow (including whether overflow is allowed) is a core part of crypto design consideration. But for a 16-bit current-time-in-milliseconds, we'd interpret a + b as "what will the time be b milliseconds after time a?" There is, I think, little-to-no fundamental difference between that and, on Unix, time() + 10, even though time()-values on Unix should use checked semantics instead of wrapping. (There is, however, a semantic distinction between time_t (for the LHS) and time_diff_t (for the RHS).)

[Example of making the semantics of numerical operations explicit. Using this would, I'm sure, require a better solution to the casting problem I mentioned earlier. But this idea makes it even more explicit that the overflowing-vs.-wrapping choice has been considered than a + b vs. a.wrapping_add(b) does.]

struct CryptoNum<T: Int>(T);
trait CryptoAdd {
  fn overflowing_add(self, rhs: Self) -> Self;
  fn wrapping_add(self, rhs: Self) -> Self;
}
impl<T: Int> CryptoAdd for CryptoNum<T> { ... }
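A compilable sketch of this idea for a concrete inner type (the `Int` trait from back then is gone, so this fixes the wrapped type to `u32`; note the thread's `overflowing_add` is given checked/panicking semantics here to contrast with the wrapping one, which differs from today's std method of the same name):

```rust
#[derive(Copy, Clone, PartialEq, Debug)]
struct CryptoNum(u32);

trait CryptoAdd {
    /// Panics on overflow, like debug-mode `+`.
    fn overflowing_add(self, rhs: Self) -> Self;
    /// Wraps modulo 2^32.
    fn wrapping_add(self, rhs: Self) -> Self;
}

impl CryptoAdd for CryptoNum {
    fn overflowing_add(self, rhs: Self) -> Self {
        CryptoNum(self.0.checked_add(rhs.0).expect("overflow"))
    }
    fn wrapping_add(self, rhs: Self) -> Self {
        CryptoNum(self.0.wrapping_add(rhs.0))
    }
}

fn main() {
    assert_eq!(CryptoNum(u32::MAX).wrapping_add(CryptoNum(1)), CryptoNum(0));
    assert_eq!(CryptoNum(1).overflowing_add(CryptoNum(2)), CryptoNum(3));
}
```

Both semantics stay available at once, named explicitly at each use site, which is the property the crypto case wants.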

One solution, which I don’t think anyone’s mentioned yet, is using a syntax extension. We can expose a macro (let’s say, wrapping!()) which desugars arithmetic operations to their wrapping counterparts.

For example, this code:

wrapping! {
    let x = y + z * w;
}

would expand to

    use std::num::wrapping::WrappingOps;
    let x = y.wrapping_add(z.wrapping_mul(w));
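For reference, the unstable `WrappingOps` trait was removed before Rust 1.0; the same expansion works today because the `wrapping_*` methods are inherent on the primitive integer types:

```rust
// The target of the expansion, using today's inherent methods:
fn main() {
    let (y, z, w) = (250u8, 2u8, 3u8);
    let x = y.wrapping_add(z.wrapping_mul(w)); // (250 + 2*3) mod 256
    assert_eq!(x, 0);
}
```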

This is a simple AST transform, and can be implemented as a library.

The only downside I see is that syntax extensions aren’t stable yet, but we can work around this issue by implementing it in libsyntax itself.


Perhaps trying to do this through the type system is overcomplicating the problem. This is really a numerical problem, not a linguistic problem. What you want is to reduce a numerical result by some modulus. The “%” operator is ready and willing to do that job. That’s well-understood and already part of the language. It’s just a performance issue to do it that way.

So just add an optimization for the "%" operator for unsigned operands where the modulus is a constant power of 2, and especially for 2^8, 2^16, 2^32, and 2^64, which, of course, are the moduli for 8-, 16-, 32-, and 64-bit unsigned arithmetic. No new syntax. It just works. Thank you.

@John_Nagle there are three problems with that: it requires carrying extra precision (including potentially negative numbers when doing unsigned arithmetic), it requires deferring the overflow check, and % isn’t really a “modulus” operator.

I tend to disagree here: This is not about the possibility of wrapping operations, but about the ergonomics. Requiring users to write (a + b) % 4294967296 (which either would require a –hopefully optimizable– cast to u64/i64 or be outright illegal) is highly unergonomic.
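To make the ergonomic point concrete, here is what the "(a + b) % 2^32" formulation actually requires for u32 (my sketch): the operands must first be widened, since neither the intermediate sum nor the modulus itself fits in u32.

```rust
// (a + b) % 2^32, computed via u64 because both 2^32 and the
// intermediate sum can overflow u32.
fn add_mod_2_32(a: u32, b: u32) -> u32 {
    ((a as u64 + b as u64) % (1u64 << 32)) as u32
}

fn main() {
    let (a, b) = (u32::MAX, 10u32);
    assert_eq!(add_mod_2_32(a, b), a.wrapping_add(b)); // both yield 9
}
```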

That said, I suspect apart from security-related work, most people will either use wrapping operations or checked operations throughout their code, with scarce occurrences of the other way. I for one would not use wrapping operations unless I explicitly need them.

Modular arithmetic at word sizes is more of a machine artifact than a desirable behavior. There are a few common use cases: TCP sequence numbers, checksums, cryptographic algorithms, and big-number arithmetic are the classic ones.

The semantics of modular subtraction can be expressed as (a + k - b) % k, where k is the maximum unsigned value + 1. This is optimizable, numerically unambiguous, and independent of the machine architecture. (Admittedly you’re not likely to encounter a 36-bit CPU any more, although Unisys still makes some 36-bit and 48-bit machines.)
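A sketch of that subtraction formula for 32-bit values, with k = 2^32 and the intermediate sum carried in u64 so it can neither overflow nor go negative:

```rust
// (a + k - b) % k for k = 2^32: modular subtraction without ever
// producing a negative intermediate value.
fn sub_mod_2_32(a: u32, b: u32) -> u32 {
    const K: u64 = 1 << 32;
    ((a as u64 + K - b as u64) % K) as u32
}

fn main() {
    // 5 - 10 wraps around to 2^32 - 5
    assert_eq!(sub_mod_2_32(5, 10), 5u32.wrapping_sub(10));
    assert_eq!(sub_mod_2_32(5, 10), 4_294_967_291);
}
```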

I've implemented this solution here: GitHub - lambda-fairy/wrapping_macros: A macro for wrapping arithmetic

It handles binary + - * correctly, but doesn't deal with unary minus (#1). That problem could be fixed by re-introducing the WrappingOps trait, or at least adding a wrapping_neg method.

EDIT: I just realized we can translate -x to x.wrapping_mul(-1) instead. This side-steps the ambiguity problem because the literal is now on the right-hand side. I'll try this solution tomorrow.

EDIT #2: Since -1 is no longer a valid unsigned integer, I switched to x.wrapping_mul(!0) instead which has an identical bit pattern.
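The -x == x * !0 identity is easy to check; note also that a `wrapping_neg` method has since been added to the primitive integer types, so today both spellings are available:

```rust
// For unsigned x, multiplying by !0 (all ones, i.e. -1 mod 2^n)
// is the same as wrapping negation.
fn main() {
    let x = 5u32;
    assert_eq!(x.wrapping_mul(!0), x.wrapping_neg());
    assert_eq!(x.wrapping_neg(), 4_294_967_291); // 2^32 - 5
}
```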

I certainly considered macros – they offer a similar trade-off to #[wrapping]. One concern I had is that you wind up reproducing the expr grammar in your macro if you want to support all kinds of expressions (e.g., (x + y) + z and so forth).