Am I the only one confused by `a.min(b)` and `a.max(b)`?

let a = f64::min(foo(), bar());
let b = f64::max(foo(), bar());

Here, the meaning is extremely clear. Take the minimum (or maximum) of the value returned by foo() and bar().

However, when using the postfix notation, I find it much harder to read:

let value = foo()
    .min(bar())
    .max(baz());

Instinctively, I try to read it as:

Compute foo(). The value will then be at minimum the value of bar(), and at maximum the value of baz().

If this seems right, re-read it, it's completely backward. The maximum is the value of bar() and the minimum is the value of baz().
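To see the backward reading concretely: `.min(upper)` imposes an upper bound ("at most upper") and `.max(lower)` imposes a lower bound ("at least lower"). A small sketch with made-up literal values standing in for foo(), bar(), and baz():

```rust
fn main() {
    // a.min(b) returns the smaller of a and b, so chaining
    // .min(10.0) caps the value at 10.0 ("at most 10.0"), and
    // .max(2.0) floors it at 2.0 ("at least 2.0").
    let value: f64 = 15.0;
    let capped = value.min(10.0);
    assert_eq!(capped, 10.0); // 15.0 was above the cap
    let bounded = capped.max(2.0);
    assert_eq!(bounded, 10.0); // already above the floor
}
```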

Am I the only one in this case?

This is very minor, so I don't feel it would be justified to introduce two new functions, at_most() and at_least(). I'm relatively sure that such a change would be breaking, so it could only be done at an edition boundary (which raises the bar even more). But I still think it would make the code much less prone to being misread.

let value = foo()
    .at_most(bar())
    .at_least(baz());



I didn't know about it and had to open a dictionary to understand what clamp means (I'm not a native speaker). But still, that's good to know.


I have definitely stumbled over this syntax before as well, but I just remember that

a.min(b) means "the minimum of a and b"


a.max(b) means "the maximum of a and b"

But I can also see how the reverse would make sense if you think of b as a limit. I guess changing this now would be even more confusing, though, and of course a breaking change.

I get confused by this too. I'm looking forward to clamp().


I'm also looking forward to clamp(), but I've come to realize that even in the single-use cases I find the postfix version of min() and max() much harder to intuitively understand.
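For reference, `clamp` was later stabilized (in Rust 1.50, if I recall correctly). A quick sketch of why it avoids the min/max reading problem: the argument order states the bounds directly.

```rust
fn main() {
    // f64::clamp(self, min, max) restricts self to the range [min, max].
    assert_eq!(15.0_f64.clamp(0.0, 10.0), 10.0); // above range: capped
    assert_eq!((-3.0_f64).clamp(0.0, 10.0), 0.0); // below range: floored
    assert_eq!(5.0_f64.clamp(0.0, 10.0), 5.0); // in range: unchanged
    // Note: clamp panics if min > max, or if either bound is NaN.
}
```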


I always call min and max as T::min(a, b) as well.


Just spitballing here, as I'm not even sure I'd want this, but would it be possible to special case this in the compiler to prevent calling the method on the object? This would of course be done in an edition. The behavior is already present for the Drop trait, though admittedly for a very different reason.

The trait methods could be deprecated in favor of std::cmp::{min, max}, but that affects all editions. Maybe we could add those to the next edition prelude though, since that's one advantage of Ord.

I don't know if this confusion was considered at the time, but here's the original issue:

(checks issue author) But it's your fault! :wink:


To clarify my previous message, I wasn't suggesting deprecating the trait methods, but rather making them uncallable directly in a new edition. Currently, if I have T: Drop, you can't call foo.drop(), but rather have to call T::drop(foo). Presumably something similar could be done for Ord::{min, max} if it were desired, requiring T::min(foo, bar), disallowing foo.min(bar).


No, you can't call Drop::drop at all. The prelude method is std::mem::drop, which is literally just:

pub fn drop<T>(_x: T) {}

Hehe. Yes :slight_smile:

The original problem remains: std::cmp::{min, max} doesn't work for floats. If you want to deprecate .min()/.max(), you'd have to make std::cmp a real alternative. .partial_cmp().unwrap() is not good enough.
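To spell out why `std::cmp::min` doesn't work here: it requires `Ord`, which `f64` doesn't implement because NaN breaks total ordering. The inherent float methods compile and define their NaN behavior explicitly. A minimal sketch:

```rust
fn main() {
    // std::cmp::min(1.0_f64, 2.0) does not compile:
    // f64 is only PartialOrd, not Ord, because of NaN.

    // The inherent method works, and documents its NaN handling:
    // if one operand is NaN, the other operand is returned.
    assert_eq!(f64::min(1.0, 2.0), 1.0);
    assert_eq!(f64::min(f64::NAN, 2.0), 2.0);

    // The .partial_cmp().unwrap() workaround panics on NaN,
    // because partial_cmp returns None for incomparable values.
    assert!(1.0_f64.partial_cmp(&f64::NAN).is_none());
}
```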


Have you seen this? Not sure if there's a plan to stabilize, but total ordering is implemented for floats, albeit not via the Ord trait.
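The total ordering mentioned here has since landed as `f64::total_cmp` (stable since Rust 1.62, as far as I know), exposed as a method rather than through the `Ord` trait. A sketch of how it's used:

```rust
use std::cmp::Ordering;

fn main() {
    // total_cmp implements the IEEE 754 totalOrder predicate,
    // so a slice of floats can be sorted without unwrapping Options.
    let mut v = [3.0, f64::INFINITY, 1.0, f64::NEG_INFINITY];
    v.sort_by(f64::total_cmp);
    assert_eq!(v, [f64::NEG_INFINITY, 1.0, 3.0, f64::INFINITY]);

    // Unlike ==, totalOrder distinguishes -0.0 from +0.0:
    assert_eq!((-0.0_f64).total_cmp(&0.0), Ordering::Less);

    // And a positive NaN sorts above +infinity under totalOrder:
    let pos_nan = f64::from_bits(0x7ff8_0000_0000_0000);
    assert_eq!(pos_nan.total_cmp(&f64::INFINITY), Ordering::Greater);
}
```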


And correctly so, since the IEEE totalOrder is inconsistent with the existing PartialOrd.

Whenever I run into confusing method chains, I tend to break them up. I'm not sure if that's encouraged in Rust (e.g. if rustfmt/clippy would want to recombine them), but it does work:

let least = foo().min(bar());
let most = least.max(baz());

In practice, if I'm using limits, I'll call them that:

use std::cmp::{max, min};

let mut foo = 16;
let upper_bound = 10;
let lower_bound = 0;
foo = min(foo, upper_bound);
foo = max(foo, lower_bound);

I do agree on principle that the postfix notation is error-prone, and that clamp is the correct solution to this.


As an aside, wouldn't it have been good if the languages we use

  • preserved bit-patterns on save/load (at least those which are legal for floats on a given platform)
  • gave regular <, etc operations IEEE totalOrder semantics
  • provided the partial order only via library functions


No, because that would make (positive) zero not equal to (negative) zero, which is also a weird thing. And the f64::min/f64::max operations are also inconsistent with the totalOrder predicate -- I'd rather get INFINITY from max, not NAN. Not to mention that totalOrder cares about the -- very unpredictable -- sign of NaNs, which would make a whole bunch of things behave quite oddly.
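To make that max inconsistency concrete, here is a sketch using the now-stable `f64::total_cmp` as the totalOrder predicate:

```rust
fn main() {
    // Under IEEE totalOrder, a positive NaN is greater than +infinity,
    // so a totalOrder-based max over data containing NaN yields NaN...
    let pos_nan = f64::from_bits(0x7ff8_0000_0000_0000);
    let data = [1.0, f64::INFINITY, pos_nan];
    let total_max = data.iter().copied().max_by(f64::total_cmp).unwrap();
    assert!(total_max.is_nan());

    // ...whereas f64::max ignores NaN operands and returns infinity.
    let ieee_max = data.iter().copied().fold(f64::NEG_INFINITY, f64::max);
    assert_eq!(ieee_max, f64::INFINITY);
}
```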

The current order might have its own oddness, but it's a much better oddness than using the total order everywhere.


And not just “weird” but also in disagreement with the IEEE 754 definitions of comparisons.

The root problem is that IEEE 754 comparisons and totalOrder are inconsistent with each other, so you can't have a single mechanism that conforms to both. (Also, using totalOrder for comparison operators would be inconsistent with all other common languages, leading to surprises and bugs when porting algorithms from other languages.)


I also find it confusing, but preventing it seems too harsh. I think a clippy lint would be best.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.