Support for comparison operators between different integer types

Is there a reason for not allowing comparing different numerical types without as casts?

fn main() {
    let a: i32 = -1;
    let b: u64 = 2;
    // none of these compile
    a < b;
    a > b;
    a == b;
    a != b;
}

I understand why arithmetic operations (add, sub, etc.) are not allowed and require explicit casting to a common type: it's not always clear what the output type should be. But I don't see any reason why comparisons couldn't be built in: the output type is always the same (it's bool) and the operations have clear semantics:

  1. iX and iY: cast both to whichever size is bigger and compare (same for uX and uY)
  2. iX and uY: if the iX value is negative, the result is false (or true, depending on the operation); otherwise cast it to uX and go to rule 1
  3. isize can be treated just like iX, for whichever size X fits (same for usize)
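The rules above can be sketched as a free function. `cmp_i64_u64` is a hypothetical helper written for illustration, not an existing API:

```rust
use std::cmp::Ordering;

// Hypothetical helper sketching rule 2: compare a signed i64 with an
// unsigned u64 without first casting both to some wider common type.
fn cmp_i64_u64(a: i64, b: u64) -> Ordering {
    if a < 0 {
        // A negative signed value is always less than any unsigned value.
        Ordering::Less
    } else {
        // Non-negative, so casting to u64 is lossless; fall back to rule 1.
        (a as u64).cmp(&b)
    }
}

fn main() {
    assert_eq!(cmp_i64_u64(-1, 2), Ordering::Less);
    assert_eq!(cmp_i64_u64(3, 2), Ordering::Greater);
    assert_eq!(cmp_i64_u64(2, 2), Ordering::Equal);
    // Even i64::MAX fits in u64, so this stays lossless.
    assert_eq!(cmp_i64_u64(i64::MAX, u64::MAX), Ordering::Less);
}
```

Note there is no value that both types can represent incorrectly here, which is why the result is always well defined even though neither operand type can hold the other's full range.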

To explain my motivation, I have a project that has hundreds of these casts specifically to compare the data points, and:

  • it actively hurts readability (especially when the operands are more complex than a variable name, which sometimes forces me to enclose them in parentheses)
  • it interrupts my flow when I write code - I don't care which common type it'll be, so why can't the compiler figure it out for me?

So, would the compiler team be willing to accept a PR that implements this functionality? Does a change like this require an RFC?


I'm not sure if this'd be a problem here, but inference is a potential pitfall. When there's only a single possible integer type to compare with, an unsuffixed integer literal will infer the same type as the other side of the comparison. With multiple possibilities, it will infer i32 (even if i32 isn't an option).
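To illustrate today's behavior (standard Rust, no new impls assumed), the literal picks up its type from the other operand when there's exactly one candidate, and otherwise falls back to i32:

```rust
fn main() {
    let big: u64 = 1 << 40;
    // Exactly one candidate: the unsuffixed literal is inferred as u64
    // (the other operand's type), so this value doesn't need a suffix.
    assert!(big == 1_099_511_627_776);

    // No constraint at all: the literal falls back to i32 (4 bytes).
    let fallback = 1_000;
    assert_eq!(std::mem::size_of_val(&fallback), 4);
}
```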

This more often comes up with respect to adding more indexing impls, but it's likely relevant here as well.

One potential reason not to is that filling out the matrix requires a large(ish) number of impls (including both directions). But I think these impls make sense to at least put through a crater run.


Note this wouldn't require a compiler change. You just need to implement PartialOrd for the right combinations of types. So this is a libs team decision.


To make this more concrete:

let left: u64 = 1 << 50;
// 1099511627776 is 1 << 40
if left > 1099511627776 {
    // ...
}

This compiles today but will fail to compile with the additional implementations, modulo some other change in inference or integer resolution. (I.e. a compiler change.)

Edit: And here's a version that compiles today, but would have a different result with the implementations.


let left: u64 = 1 << 39;
// 1099511630000_u64 is greater than `left`
// 1099511630000 as i32 is 2224
if left < 1099511630000 {
    // taken today: the literal is inferred as u64
} else {
    // would be taken if the literal were inferred as i32
}

@CAD97 and @quinedot already gave examples of where this would change the meaning, so I'll throw in one more on top: I use the fact that you can't do this today to verify my own code. The different integer types can mean different things, so if I'm feeling lazy and haven't created a newtype for something, I rely on these comparisons failing to compile to check my code at compile time. (FYI, I know this is a Very Bad Idea™, but I did preface that this is for when I'm feeling lazy.) So for me personally, doing this would be a breaking change.

As an alternative, have you considered the derive_more or newtype_derive crates? Although they aren't a one-stop shop that magically makes a newtype 100% like an integer, they do handle a lot of boilerplate for you, letting you implement your own comparisons on top of your newtype.
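For reference, here's a minimal hand-rolled sketch of the newtype approach using only std (no derive crates); `ByteCount` is an invented example name, and derive_more can generate similar impls for you:

```rust
// Invented example newtype; comparisons against other newtypes
// still fail to compile, preserving the type-safety benefit.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct ByteCount(u64);

// Allow comparing directly against the raw integer type.
// (PartialOrd<u64> requires PartialEq<u64>, so impl both.)
impl PartialEq<u64> for ByteCount {
    fn eq(&self, other: &u64) -> bool {
        self.0 == *other
    }
}

impl PartialOrd<u64> for ByteCount {
    fn partial_cmp(&self, other: &u64) -> Option<std::cmp::Ordering> {
        self.0.partial_cmp(other)
    }
}

fn main() {
    let size = ByteCount(1024);
    assert!(size > 512_u64);
    assert!(size == 1024_u64);
    assert!(ByteCount(2048) > size.0);
}
```

The manual cross-type impls against u64 are exactly the kind of boilerplate those crates are meant to cut down.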

This is a rather broadly-impacting change, so I'd say yes.

(It's very different from, say, just adding a method to Vec.)

This is the biggest reason. See

I wish we could do it, since not having it means that people are incentivised to do their comparisons incorrectly. But like with indexing, we can't right now.

(Maybe once we get chalk?)


I think some kind of inference hinting system would cover both of those, with or without chalk. Maybe it should get its own RFC?

I think there's no appetite to change inference rules until the compiler has moved over to the new system where it would be easier to do so, though. So while it would certainly be possible to write an RFC about it, I suspect it would get postponed until t-compiler is in a position where they'd be willing to actually implement it.

They might, but injecting these newtypes would generate such a huge diff (I suspect > 1000 LOC) that I'm pretty sure I wouldn't want to spend my time on it. I don't think the newtypes would help readability/writability either.

That settles it.

Thank you everyone for your replies.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.