Pre-RFC core::ops traits for relational operators

@kornel I'm not sure which argument you are trying to make here.

Being able to design your own DSLs allows you to design both good and bad DSLs. In Rust, << is already a purely syntactic operator, so implementing C++ <iostream> in Rust with the same API is already possible.
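For example (a minimal sketch of my own, not a real crate), an iostream-style API needs nothing beyond today's ops::Shl, whose Output is arbitrary:

```rust
use std::ops::Shl;

// Hypothetical iostream-style logger: `<<` appends to an internal buffer.
struct Out(String);

impl Shl<&str> for Out {
    // Output is unconstrained; nothing ties `<<` to bit shifting.
    type Output = Out;
    fn shl(mut self, rhs: &str) -> Out {
        self.0.push_str(rhs);
        self
    }
}

fn main() {
    let out = Out(String::new()) << "hello, " << "world";
    assert_eq!(out.0, "hello, world");
}
```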

You argue that such a DSL is bad and that, therefore, < should not be allowed to be syntactically overloaded. But since this is a problem we already have, I argue that allowing < to be overloaded not only makes the language more consistent, but also makes this problem better, since it allows designing better DSLs.

If Rust wants to enable more syntax sugar, I suggest allowing custom infix functions/operators, so that users can define their own new operators with their own semantics, without creating ambiguity around existing operators. a vecmul! b or a v* b or a \* b or a ⊗ b, etc.

Since we are talking about translating the mathematical vec < vec operator to Rust, how would you add this particular operator under this proposal, and how is that better than using < for it along the multiple axes that make a DSL good (actually representing the language it is supposed to represent, in this case, math; being intuitive, discoverable, etc.) ?


This would work terribly with #[derive(PartialOrd)] and generic impls.

Can you explain why? AFAICT it would work perfectly with derive(PartialOrd) and generic impls.


For me the big issue with all such syntax-extension proposals is their justification: How much harder does the proposal make it to learn and use Rust for the average user, vs. how much benefit does it provide in actual code use?

I maintain a widely used crate and being able to use < for comparing SIMD vectors is one of the most requested features by users.


If this weren't the case and these operators used (partial_)cmp directly, this proposal could hold more weight. As it is, since these operators already have an overload path, arguing for adding another one is going to take a lot of doing.

Since the proposal actually proposes an implementable solution, whose implementation is extremely simple and fully backward compatible with PartialOrd/PartialEq, I suppose the proposal misses something that you consider obvious. Could you elaborate on what that is? EDIT: maybe by "a lot of doing" you are not referring to implementation issues, as I understood at first, but to process issues like achieving consensus on an RFC?

Rust so far has generally taken a position of "don't overload operators to do anything other than the obvious meaning that has been drilled into your head since elementary".

Can you mention for which operators this is the case? AFAICT, all Rust operators that come with an associated Target type (that's all of them except for the relational operators) are purely syntactical and allow you to do something non-obvious, and this is exploited by many crates, including the standard library. So if anything, the position that Rust has taken is the complete opposite of the one you claim.
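Two concrete std examples of this (mine, not from the post): + on String is concatenation rather than arithmetic, and << on integers accepts a right-hand side of a different type:

```rust
fn main() {
    // std overloads `Add` for String + &str as concatenation.
    let s = String::from("foo") + "bar";
    assert_eq!(s, "foobar");

    // std's `Shl` impls even take heterogeneous operand types.
    let x: u32 = 1u32 << 3usize;
    assert_eq!(x, 8);
}
```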

Note that the OP example also requires bool - bool : {integer} , which I don't think would ever happen.

This is incorrect. The OP example requires a < b to have a customizable output type, which is very different from claiming that the output type must be bool. In particular, for SIMD vectors, (a < b) does not return a vector of bool, but a "vector mask": a separate vector type with the same number of lanes as the vectors being compared.

Adding an implementation of Sub for {vector mask} - {vector mask}: {vector} is already possible with the current Sub trait design, so this wouldn't be an issue. That is, if you think that bool - bool: {integer} would be bad, then this is a problem we already have, because it is already possible to write a Rust library with a struct Bool(bool); type such that Bool - Bool: {integer}.

The only thing this RFC allows is syntactically overloading the relational operators, so that if that library also has a struct Int(i32); it can implement < for it such that Int < Int: Bool. The only problematic part here is that, if you do this, there is no way to make Int be PartialOrd, but fixing that is an orthogonal problem.
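A sketch of the already-expressible half of such a library (Bool is a hypothetical type of mine; the Int < Int: Bool half is exactly what today's traits cannot express):

```rust
use std::ops::Sub;

// Hypothetical wrapper: already expressible with today's Sub design.
struct Bool(bool);

// Bool - Bool : {integer}
impl Sub for Bool {
    type Output = i32;
    fn sub(self, rhs: Bool) -> i32 {
        self.0 as i32 - rhs.0 as i32
    }
}

fn main() {
    assert_eq!(Bool(true) - Bool(false), 1);
    assert_eq!(Bool(false) - Bool(true), -1);
    // The missing half, `Int < Int : Bool`, is what the RFC would enable;
    // PartialOrd cannot express it because `lt` is hardwired to return bool.
}
```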


Agreed. Adding more stuff to the core that enables arbitrarily sprinkling operators on types will cause a bigger mess.

Especially with Ord/PartialOrd/Eq/PartialEq involved: I'd rather have those traits quarantined until people have figured out how to fix them, instead of adding more things that interact with them.


Can you explain what you think needs fixing?

According to your post, LessThan is for <=, not <.

However, I would prefer a name like LessOrEq, because Less and LessThan mean the same thing (I'm not a native English speaker, please correct me if I'm wrong).

EDIT: The provided methods in PartialOrd are lt, le, gt and ge, so the names LessThan, LessEq, GreaterThan and GreaterEq would be most consistent.


Sorry, that was a typo, it should be LessEq for <=, and Less for <. Fixed it in the OP, thanks for pointing this out!


My argument is that reuse of operators for unrelated purpose, just because they happen to be a syntax sugar for a DSL, makes confusing APIs and shouldn't be encouraged. Therefore, PartialOrd is already better than LessThan, because it can't be abused for things that aren't comparisons.

I do acknowledge that there's a need for syntax sugar and DSLs, but I'd prefer to have new operators/new syntax for these cases.


So what exactly do we win by not doing this? We can't teach that operators have a purpose, since the vast majority of operators do not. Even for PartialOrd, we can't formally explain what the purpose is, we don't check that implementations satisfy it, and users can't expect these operators to actually satisfy it while writing Rust code; they need to defend themselves against the cases where they do not.

I'd find it simpler to teach "Operators are syntactic; if you want to know what they do for a particular type, read its docs", than to teach all of the above.

I agree with the sentiment that it would have been interesting to explore what it would have meant for Rust operators to actually have an algebraic meaning, but that's something that we would have to have done before 1.0. That ship has sailed.


There's a whole world between "prove and guarantee it's always a well-implemented comparison" and "give up and let it do whatever". Currently it's intended to be a comparison, and is generally used as one. It's semantic in practice, even if that can't be proven in theory.

If I see < I can read it as a comparison without looking at the docs, and IMHO that is a huge benefit that is worth preserving.


I still disagree that this is worth preserving, because I don't yet find convincing the claim that it adds that much value. But I understand this opinion, and I think it is a perfectly valid one to have. I'll try to properly account for it as a drawback, since I think many users will share it. Right now, when you see a < b, you can conclude that a and b implement PartialOrd (or Ord); this proposal would no longer let you conclude that, requiring you to look at the types instead to see if that's the case.


Sure! Those traits are structured in a way that answers an uninteresting question, while making it impossible to answer the interesting one.

Rust's trait hierarchy (PartialEq <: Eq, PartialOrd <: Ord) renders it unable to deal with the fact that e. g. floating points have two distinct and incompatible orderings: a partial order (useful for comparisons) and a total order (useful for sorting).

(The total order (§5.10) is of course also a partial order – but that partial order is not the same as the partial order usually associated with floating point operations (§5.11).)

Rust is basically forced to pick one (the partial order), and is unable to express the other (the total order).

Ideally, Rust would have clearly separated traits whose names clearly reflected their different purposes:

  • Eq: partial order, §5.11; i. e. NaN is not equal to itself
  • Id: total order, §5.10; i. e. NaN is identical to itself
  • Cmp: partial order, §5.11; useful for comparisons
  • Srt: total order, §5.10; useful for sorting

I hope this clears things up! (As this is rather off-topic, feel free to send me a message if you have further questions.)


a < b does not require that b implement PartialOrd.

use std::cmp::{PartialEq,PartialOrd,Ordering};

struct Foo;
struct Bar;

impl PartialEq<Bar> for Foo{
    fn eq(&self,_other:&Bar)->bool{
        true // placeholder body; elided in the original post
    }
}

impl PartialOrd<Bar> for Foo{
    fn partial_cmp(&self,_other:&Bar)->Option<Ordering>{
        Some(Ordering::Less) // placeholder body; elided in the original post
    }
}

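Completed with (arbitrary) placeholder bodies, this compiles and demonstrates the point: a < b desugars to PartialOrd::lt, which needs Foo: PartialOrd<Bar> but no trait impls on Bar at all:

```rust
use std::cmp::Ordering;

struct Foo;
struct Bar;

impl PartialEq<Bar> for Foo {
    fn eq(&self, _other: &Bar) -> bool { true }
}

impl PartialOrd<Bar> for Foo {
    fn partial_cmp(&self, _other: &Bar) -> Option<Ordering> {
        Some(Ordering::Less)
    }
}

fn main() {
    // Desugars to PartialOrd::lt(&Foo, &Bar): Bar itself implements nothing.
    assert!(Foo < Bar);
}
```
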
I'll only say that the extreme and painful work the C++ committee put into specifying operator<=>, so that we could stop fussing about with the other six operators, makes me very wary of an attempt to reproduce the C++20 behavior in Rust.

That, and I'm very happy that DSLs are painful to write in Rust. I think this is a feature of the language and we should keep it around; unfortunately, I think this view is only held by people who have had to read many other peoples' DSLs. And I say that with a background in stable homotopy theory, where we spam symbols like there's no tomorrow.

Rust is sigiltastic as it is and I really hope to avoid making that worse.


From the std::ops docs:

Implementations of operator traits should be unsurprising in their respective contexts, keeping in mind their usual meanings and operator precedence. For example, when implementing Mul, the operation should have some resemblance to multiplication (and share expected properties like associativity).

There is no enforcement that the mathematical operators follow the elementary meaning of the operator, but it is heavily suggested that implementations should.

As I said, the question under discussion here is

I think I've slightly shifted to the "no, add Target-carrying traits" position (but only very weakly) for the reason of consistency with the mathematical operators. We could've required returning Self from them, which works for all std implementations except &{integer}: Add<Target={integer}>, but we didn't, so I weakly feel that we should allow for comparison operators to be implemented in the same manner.

Even ops::Not (unary !) carries an associated output type.

In the OP example, the types in question are (with whatever specialized array type):

[int; N] > int : [bool; N]
[int; N] < int : [bool; N]
[bool; N] - [bool; N] : [int; N]

I think these comparison operators follow the std docs for what operator implementations should do; it "[has] some resemblance to [comparison]".

We already don't (can't) enforce the property that _ > _ is equivalent to !(_ <= _). I don't think we actually lose anything meaningful by relaxing comparison operators in this manner. If you're worried about bad code, they can already abuse the shift operators for the exact same effect.
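As a sketch of those intended semantics (free functions over plain arrays, with N = 4 for concreteness; this is an illustration of the meaning only, not a proposed API, precisely because today's traits cannot give < this signature):

```rust
// Elementwise `<` against a scalar: [int; N] < int : [bool; N]
fn lt_splat(a: [i32; 4], b: i32) -> [bool; 4] {
    a.map(|x| x < b)
}

// Mask subtraction: [bool; N] - [bool; N] : [int; N]
fn mask_sub(a: [bool; 4], b: [bool; 4]) -> [i32; 4] {
    std::array::from_fn(|i| a[i] as i32 - b[i] as i32)
}

fn main() {
    let m = lt_splat([1, 5, 2, 9], 3);
    assert_eq!(m, [true, false, true, false]);
    assert_eq!(mask_sub(m, [false, false, true, true]), [1, 0, 0, -1]);
}
```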

I think the RFC should include the full text of the added traits and impls under a fold. If we had defaults for associated types, then the comparison operators' output types definitely should default to bool. I also think we should explicitly have PartialOrd<Rhs> : LessEq<Rhs, Target=bool> + LessThan<Rhs, Target=bool> + PartialEq<Rhs> + GreaterEq<Rhs, Target=bool> + GreaterThan<Rhs, Target=bool> and PartialEq<Rhs> : Equal<Rhs, Target=bool> + NotEqual<Rhs, Target=bool>. The covering impls for all the new operator traits when the "group" trait is implemented, of course, forwards to the specific method that previously was used by the operator.
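Under that structure, one of the Target-carrying traits and its covering impl might look like the following sketch (names and signatures are my guess at the pre-RFC shape, not settled design; the other five operators follow the same pattern):

```rust
// Hypothetical Target-carrying relational trait.
trait LessThan<Rhs = Self> {
    type Target;
    fn lt(&self, rhs: &Rhs) -> Self::Target;
}

// Covering impl: every PartialOrd type keeps today's bool-returning `<`.
// (By coherence, a type picks either PartialOrd or a direct LessThan impl.)
impl<Rhs, T: PartialOrd<Rhs>> LessThan<Rhs> for T {
    type Target = bool;
    fn lt(&self, rhs: &Rhs) -> bool {
        PartialOrd::lt(self, rhs)
    }
}

fn main() {
    // Existing types are untouched: the call forwards to PartialOrd.
    assert!(LessThan::lt(&1, &2));
    assert!(!LessThan::lt(&2, &1));
}
```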

I suppose I should note that this necessarily adds a lot of lang items to the language. That's not necessarily a bad thing, though, as each operator (except for the short circuiting ones) now has an individual lang item trait.

Off-topic side note: should an [int; N]-like type have shifts/rotates rotate the array or the individual array members? I suppose it's up to the individual array-like type to decide which behavior makes more sense for it. For SIMD, where you're treating it as one number but splatting the operation over many numbers, the splat implementation obviously makes the most sense. And abusing the shift operator for comparison is both possible and very bad code.


Allowing types to implement multiple partial orders is AFAICT orthogonal to allowing < to return a value different than bool.

It is already possible to allow a type to implement multiple partial orders. The way this is done is by just wrapping the type, e.g., for f32, you'd do Id(f32) or Total(f32). The packed_simd crate uses this, e.g., for offering partial orders on SIMD vectors, for which many partial orders exist. For example, you can execute i32x4::lexicographical_partial_order() to obtain a wrapper over the type that implements PartialOrd with that particular order.
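For floats specifically, such a wrapper can be sketched today on top of f32::total_cmp (the Total name is illustrative, not an existing API):

```rust
use std::cmp::Ordering;

// Illustrative wrapper selecting the IEEE total order (§5.10).
struct Total(f32);

impl PartialEq for Total {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}
impl Eq for Total {}
impl PartialOrd for Total {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Total {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}

fn main() {
    let mut v = [Total(f32::NAN), Total(1.0), Total(-1.0)];
    v.sort(); // bare f32 can't be sorted like this; the wrapper can
    assert_eq!(v[0].0, -1.0);
    assert_eq!(v[1].0, 1.0);
    assert!(v[2].0.is_nan()); // positive NaN sorts after all numbers
}
```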


I'll only say that the extreme and painful work the C++ committee put into specifying operator<=>, so that we could stop fussing about with the other six operators, makes me very wary of an attempt to reproduce the C++20 behavior in Rust.

The exact same C++ committee members that worked towards approving <=> simultaneously recognized the usefulness of overloading the six relational operators in ways that neither return bool nor implement any kind of meaningful ordering: within the same standardization cycle they approved std::experimental::simd, which uses them in precisely this way.

The point of <=> isn't to "prevent DSLs", but rather to help people handle the most common case of wanting the relational operators to implement a particular ordering, for types for which it makes sense to do so, since achieving that by having to write 6 overloads is cumbersome and error prone.

For many types, that goal does not make sense, e.g., because there is a large number of orderings that could be implemented and none of them is a clear default (like SIMD vectors, or floating point numbers), or because there are no meaningful orderings at all. Yet that does not mean that implementing the relational operators with other semantics isn't useful.

The assumption that a < b for a particular type must semantically mean either a partial or a total ordering relation between a and b is incorrect.

I don't want to derail this topic any further; just a short reply that this approach completely crumbles abstraction-wise for anything but toy examples (bare floating point values).

Usually one operates on some struct, or even just a generic type with some constraint.

In the former case, it would require manually inspecting each member of the struct, and manually defining structs for each level, e. g.: Person { name: "Jane Doe", shoe_size: ShoeSize { value: 23.5f32 } } –> TotalPerson { name: "Jane Doe", shoe_size: TotalShoeSize { value: Total(23.5f32) } }.
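Spelled out in code (a sketch; TotalF32 stands for a hand-written total-order wrapper, and none of these types exist in any crate), the mirroring cost looks like:

```rust
use std::cmp::Ordering;

// Hand-written total-order wrapper (hypothetical, via f32::total_cmp).
struct TotalF32(f32);
impl PartialEq for TotalF32 {
    fn eq(&self, o: &Self) -> bool { self.cmp(o) == Ordering::Equal }
}
impl Eq for TotalF32 {}
impl PartialOrd for TotalF32 {
    fn partial_cmp(&self, o: &Self) -> Option<Ordering> { Some(self.cmp(o)) }
}
impl Ord for TotalF32 {
    fn cmp(&self, o: &Self) -> Ordering { self.0.total_cmp(&o.0) }
}

struct Person { name: String, shoe_size: f32 }

// The cost: every struct containing a float must be mirrored by hand...
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct TotalPerson { name: String, shoe_size: TotalF32 }

// ...along with a conversion that is easy to forget to call.
fn total(p: Person) -> TotalPerson {
    TotalPerson { name: p.name, shoe_size: TotalF32(p.shoe_size) }
}

fn main() {
    let a = total(Person { name: "Jane Doe".into(), shoe_size: f32::NAN });
    let b = total(Person { name: "John Doe".into(), shoe_size: 23.5 });
    assert!(a < b); // derived lexicographic order; NaN no longer breaks Ord
}
```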

In the latter case, it's impossible. There is no Rust function/operator/macro that implements "take an arbitrary T of unknown members and levels of nesting and return me a TotalT that has the same members and levels of nesting, but where every float field is replaced by a total-float field; ideally using existing implementations of {Partial}Eq and {Partial}Ord from the input types, except for those fields we replaced".

Even if both cases were indeed possible, it would still open up new avenues for bugs, as it is quite easy to forget this conversion and pass the non-total versions into data structures without complaint.

TL;DR: It doesn't work. Haskell had the same problems for decades. If this approach was viable, it would probably have been put into practice for years already.


I think you're conflating the two points I'm making:

  • The coexistence of <=> and the other relational operators in C++ is a complete mess.
  • Rust already makes it hard to override the semantic meaning of a number of operators with respect to each other. If your type's comparison semantics really are non-trivial, add methods, rather than overloading expectations. This is similar to how implementing addition for string types in Rust was probably a mistake.

Also, I'll further point out that trying to draw conclusions on the C++ committee's views based on ratification of different features, which tend to be drafted in isolation, isn't a great idea. ISO C++, in my view, is not a great example of a principled design process. See std::view as an egregious example.

Why? In my view, it is a mistake to decouple the six comparison operators from each other, and Rust is one of a few languages that seems to have gotten this right.


It is entirely possible to implement PartialOrd in a way that doesn't respect "reasonable" behavior. It is a valid implementation to return rand() here as well.

It is the same class of error as implementing ops::Add to return some unrelated value of an unrelated type. Nonsensical, bad code, and expectation shattering, but perfectly allowed by the language.

That a < b compiles does today mean that !(a >= b) will compile, but says nothing about the result.

The problem is that there is an existing expectation that those constructions are identical, so no matter what you're going to introduce new forms of confusion.

As far as I can tell, the motivation for this can be solved in other ways that don't involve decoupling the six comparisons.

  • For SIMD types that want to output a mask vector or similar for their comparisons, rather than a boolean, you can introduce something like
use std::ops::Not;

trait WeakEq {
  type Output: Not;
  fn eq(&self, other: &Self) -> Self::Output;
}
trait WeakCmp: WeakEq {
  fn lt(&self, other: &Self) -> Self::Output;
}
// Blanket implementations for bridging from Eq/Ord are obvious.

The expectation, of course, is that for SIMD, the compiler is able to aggressively inline these and macro-fuse the resulting calls to std::arch intrinsics in the obvious manner, so that !(a < b) reduces to a >= b at the instruction level.
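Those bridging blanket impls might look like the following sketch (the hypothetical WeakEq/WeakCmp traits from above, with the methods renamed weak_* here so they don't collide with PartialEq::eq in method-call syntax):

```rust
use std::ops::Not;

trait WeakEq {
    type Output: Not;
    fn weak_eq(&self, other: &Self) -> Self::Output;
}

trait WeakCmp: WeakEq {
    fn weak_lt(&self, other: &Self) -> Self::Output;
}

// Bridge: every PartialEq/PartialOrd type is weakly comparable with
// Output = bool (and bool: Not, satisfying the bound).
impl<T: PartialEq> WeakEq for T {
    type Output = bool;
    fn weak_eq(&self, other: &Self) -> bool { self == other }
}

impl<T: PartialOrd> WeakCmp for T {
    fn weak_lt(&self, other: &Self) -> bool { self < other }
}

fn main() {
    assert!(1i32.weak_lt(&2));
    assert!("a".weak_eq(&"a"));
}
```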

  • For floats, well, I don't think they should have been PartialOrd/Eq in the first place (see also: fnptrs are Eq, which is also probably a mistake). I don't see what this proposal solves for floats that isn't some kind of breaking change.

I'd like to share here that in numpy the behavior of comparison operators is indeed terribly unexpected to students (I teach computational physics), and is in my opinion one of the worst features of the DSL. Students who make the mistake of comparing arrays inevitably need help extricating themselves from the situation: the result ends up in an if or while statement, and the resulting error message gives them terrible advice.

You might argue that it wouldn't be so bad with rust because we have the type system to help us (e.g. we won't ever have arrays unexpectedly passed as arguments to a function), but the error messages will still be a challenge. Adding another half dozen traits will make it harder for the compiler to guess what is missing when we use a comparison incorrectly.


This is kind of disappointing to see on its own, because while certain aspects of numpy as a DSL are initially confusing (comparison being one among many), array-aware operators like the ones proposed are extremely useful, convenient, and expressive once you get used to them. That said:


I see the pre-RFC being discussed as "if we can't get custom operators can we at least improve the status quo without them", but custom operators would be pretty fantastic.
