Propose new operators to reserve / include for edition 2018

Pretty standard hand-written Lexer.

I’m not convinced we would want to do this. I think a starting point would be to accept only the specific Mathematical Symbols code-block entries (no duplicates/aliases) and only add aliasing/duplicates later if it were truly deemed necessary/ergonomic/useful (I’m not convinced it would be, but we should definitely think about it, IMHO).

I’m not sure we’d want to either, but I feel it could be within the realm of possibility. Not ALL of them, though. There might be domains where some arrow characters as operators could be really useful, though I don’t have specific use cases in mind yet. I’d like to keep an open mind about it; I prefer not to arbitrarily rule things out in the early design phase of something. I’d like to hear what domains that aren’t so math-oriented think about the matter. In the end, though, you are probably correct, and that would be my expectation as well.

EDIT: BTW, when I see long stretches of boilerplate code like this, I immediately imagine putting the data in a spreadsheet or CSV file and writing some simple code to generate the boilerplate (fairly simple), assuming you didn’t want to use Serde or other macros, to stay compatible with the hand-written nature of the current code (assuming it is hand-written and not already generated).

Maybe this is a bad idea, but here it goes… What about a ~ binary operator that compares floats? You would write res = a ~ b;, and the comparison would not be exact (i.e. res = a == b;) but would allow an “off by x” margin, where x might default to the machine epsilon… And while we’re at it, let’s allow the user to set the value of x via something like let x_lifetime = std::ops::loose_comparison::set_x(x: f32), so the value of x would be valid for the lifetime of x_lifetime… Please forgive my grammar; English is not my native language…

Comparing floating point with a tolerance of machine epsilon is wrong. It is never what you want, and for values of magnitude greater than about two it is the same as checking for exact equality.

Machine epsilon is not the smallest representable float; it is the gap between 1.0 and the next representable float. The spacing between adjacent floats scales with their magnitude, so for any two distinct floats with magnitude greater than two, the difference between them already exceeds machine epsilon, and the comparison degenerates into exact equality.

The correct way to compare floats for equality (if you absolutely have to) is with a percentage tolerance. If it’s versus a known target value, use the acceptable variance on that value. If it’s between two calculated floats, reconsider whether that’s what you actually want, then maybe just use a percentage of the larger.
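A minimal sketch of the two cases above (function names and tolerance values are illustrative, not a proposed API):

```rust
// Case 1: versus a known target value, use the acceptable variance on that value.
fn within_variance(measured: f64, target: f64, variance: f64) -> bool {
    (measured - target).abs() <= variance
}

// Case 2: between two calculated floats, use a percentage of the larger magnitude.
fn within_percent(a: f64, b: f64, percent: f64) -> bool {
    (a - b).abs() <= a.abs().max(b.abs()) * (percent / 100.0)
}

fn main() {
    // 9.81 is within an absolute variance of 0.05 of the target 9.8.
    assert!(within_variance(9.81, 9.8, 0.05));
    // 0.1 + 0.2 != 0.3 exactly, but they agree to within 0.0001%.
    assert!(within_percent(0.1 + 0.2, 0.3, 0.0001));
    assert!(!within_percent(1.0, 2.0, 1.0));
}
```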

That said, I don’t think ~ has a meaning today, and either it or a compound containing it would be great for fuzzy equality on types where that makes sense, or even just as a generic overload for DSL use. And I doubt ~ meaning Box or ToOwned::to_owned is coming back.


Yeah, my bad, that was a really bad choice for the default tolerance, but I think the use case is common enough to justify adding a new operator. What I meant was to make comparison for floats work like you described by default, via the proposed operator or by modifying the behavior of == for floating-point types, so we can all forget about defining compare_float(a, b) :).

If the == operator were modified, a lot of code could break; but also, how could we set a default acceptable variance and make sure that no existing code breaks?

I’m not really sure about the choice of the ~ operator; maybe some other notation would be better. The idea is to communicate to the reader that this code is not comparing bit for bit.

How about the lexeme ~=, which most people would deduce means “approximately equal”. Presumably that’s what you intend for the semantics of this operator.

What is your intent with regard to NaNs? Are they comparable under this scheme, or do they return a comparison result of None?

Another “advantage” of using ~= is that at least some programming ligatures (Fira Code / DejaVu Sans Code) render it as ≃ (Asymptotically Equal To, but most people would interpret it the same as ≈ Almost Equal To).

This would be an equivalence relation, so NaN ~= NaN would have to be either true or false, and I think sticking to partial equivalence per IEEE 754 and returning false would be the correct answer (and the simplest, with ~= defined as |lhs - rhs| < max(|lhs|, |rhs|) × tolerance).
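A minimal sketch of those semantics as a plain function (the name is illustrative; an actual proposal would presumably go through an operator trait). Note that the NaN case falls out for free, because any comparison involving NaN is false:

```rust
// Sketch: |lhs - rhs| < max(|lhs|, |rhs|) * tolerance, per the definition above.
// NaN never compares approximately equal, matching IEEE partial equivalence.
fn approx_eq(lhs: f64, rhs: f64, tolerance: f64) -> bool {
    (lhs - rhs).abs() < lhs.abs().max(rhs.abs()) * tolerance
}

fn main() {
    // Classic example: 0.1 + 0.2 is not bit-identical to 0.3, but close.
    assert!(approx_eq(0.1 + 0.2, 0.3, 1e-9));
    // NaN ~= NaN is false: (NaN - NaN).abs() is NaN, and NaN < x is false.
    assert!(!approx_eq(f64::NAN, f64::NAN, 1e-9));
    // Genuinely different values are not approximately equal.
    assert!(!approx_eq(1.0, 2.0, 1e-9));
}
```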

(That said, hashing out the implementation details of ~= is off-topic here; this thread is just to suggest reserving the operator.)

Please please please no magic global configuration values, especially for operators.

And that implies that it’s a ternary operation (to take the tolerance), at which point I think a method is fine.


@scottmcm I meant sort of an “overridable” (is that even a word?) global.

Edit: What about something like std::ops::loose_comparison::<f32>::set_tolerance(tolerance: f32), so the tolerance can vary with the type? I’m with you on the no-globals idea, but then how would the set_tolerance feature be implemented?

I understand, but that’s what I’m objecting to. It leads to different libraries wanting different values, which leads to drop types to set and unset the global, which leads to it being thread-local, which leads to problems in async-await, which …


Hmm… a == b +/- 0.05 or a == b * (1 +/- 0.05). Though, if you’re actually constructing a range, then comparison isn’t really the right operator; you’d need an element-of operator, like x in 1..5 or something.
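For the record, the element-of reading can already be expressed in today’s Rust via `RangeInclusive::contains` (the margin values here are just illustrative):

```rust
fn main() {
    let (a, b) = (1.02_f64, 1.0_f64);

    // Absolute margin: "a == b +/- 0.05" read as range membership.
    assert!((b - 0.05..=b + 0.05).contains(&a));

    // Relative margin: "a == b * (1 +/- 0.05)".
    assert!((b * 0.95..=b * 1.05).contains(&a));

    // Outside the margin, membership fails.
    assert!(!(b - 0.05..=b + 0.05).contains(&1.10));
}
```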

This discussion is going in a weird direction.

Any kind of reasonable general floating point comparison operator is impossible. You need to know the properties of what you’re comparing, and answers to questions like “could there be cancellation involved?”. Sometimes you need relative tolerances, sometimes you need absolute tolerances, and sometimes there’s no substitute for ULPs.

Floating point math is really hard, and nothing you can do will ever make it easy.


This might be too much, but how about a logical xor operator?

Though rarely a problem, it’s often semantically awkward to express logical exclusive-or in many languages.

I propose that the operator look like this: ><, i.e. a >< b

See here for the issues with the current xor.

Rust has logical XOR:

a != b

read the link man, come on…

Yes, XOR is just inequality.

No, it really isn’t… just look at the SO answer I linked for why that isn’t the case.

  1. I’m not sure what SO did, but apparently my browser doesn’t scroll to the linked answer (it merely flashes red when I scroll near it, which is confusing).
  2. I see something about sequence points, but this seems irrelevant to Rust? ISTM order of evaluation is well-defined in Rust.

Why, though? First, that question and answer are about C++, not Rust. Second, it states that != is xor without a sequence point (?). Third, Rust doesn’t really have sequence points; its evaluation rules are formulated differently. Fourth, the reason it considers sequence points at all is the use of xor with side-effecting expressions, which are not idiomatic Rust to say the least, and most of which can’t even be used as booleans, given that e.g. assignments have type (). (Of course you could come up with a bool-returning effectful function, but at that point, why would you even write such code…?)
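For reference, `bool` also implements `BitXor` in Rust, so `^` already behaves as logical xor; like `!=` (and unlike `&&`/`||`), it evaluates both operands, so there is no short-circuiting subtlety. A quick exhaustive check:

```rust
fn main() {
    for &a in &[false, true] {
        for &b in &[false, true] {
            // For booleans, `^` (BitXor) and `!=` agree on every input,
            // and both are logical exclusive-or.
            assert_eq!(a ^ b, a != b);
        }
    }
    // Spot checks of the truth table.
    assert_eq!(true ^ false, true);
    assert_eq!(true ^ true, false);
}
```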


This thread reminds me that something is quite unclear about the whole operator story in Rust, and I think it should be clarified before adding new operators, particularly overloadable ones. There seem to be two incompatible views of what an operator trait is:

  1. As with every other trait in Rust, it represents a well-defined (set of) semantic operation(s) that just happens to be aliasable with predefined sigils.
  2. It is just a workaround to allow the use of sigils when dealing with specific types. It enforces no strong semantics on what the operations mean; library authors are basically free to equip their types with whatever operators they like (subject, of course, to the same readability and least-surprise constraints as any other method name).

I suspect many of us consider the answer to obviously be 1., because most of the time this interpretation works well. But there are already obvious defects:

  • The Div trait does not correspond to the same operation when applied to integers and floating-point numbers. I don’t think one would express anything useful by specifying a bound T: Div.
  • Add for string concatenation has little in common with the addition of numbers. Note that the monoid argument does not hold given the current state of the standard library: first because, as already noted multiple times here, a multiplicative operator should be preferred for the operator of a non-commutative monoid; second because LinkedList and the other sequential collections (and more generally any free monoid) should also have a concatenation operator if we want consistency.
  • BitAnd and BitOr are implemented for sets, which could be considered consistent with bitwise operations on integers if we regard them as the operations of a Boolean algebra (or more generally a lattice). But in that case the naming is rather inappropriate and should be something like conjunction/disjunction or join/meet.
  • The ? operator is still sometimes referred to as the “question mark operator”. I know there is now a dedicated thread about renaming it, but this still shows a temptation in Rust to think of operators in terms of sigils rather than in terms of operations.
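The first two bullets can be observed directly in today’s standard library:

```rust
fn main() {
    // `Div` means truncating division for integers…
    assert_eq!(7 / 2, 3);
    // …but approximate real division for floats: two different operations
    // behind the same trait and sigil.
    assert_eq!(7.0 / 2.0, 3.5);

    // `Add` on `String` is concatenation, which, unlike numeric addition,
    // is not commutative.
    assert_eq!(String::from("foo") + "bar", "foobar");
    assert_eq!(String::from("bar") + "foo", "barfoo");
}
```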

Note that if we stick to interpretation 1., it should not be impossible to have several operator traits using the same sigils if their semantics are considered too divergent. That may not be desirable most of the time, but it is no different from homonymous methods in distinct traits.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.