Documentation of certain traits specifies that implementations "must" do certain things. For instance, PartialEq says equality "must" be symmetric and transitive. Similarly for Eq, PartialOrd, Ord, Hash, etc.
I think this would be better expressed as "should".
Various technical documents (such as the C++ language standard) use the "must", "should", "shall" terminology as defined in IETF RFC 2119. There, "must" is defined to mean:
is an absolute requirement of the specification
So people reading the documentation of these traits may assume that these properties are absolutely guaranteed to hold, because it says "must". This would make sense if these were unsafe traits, but they are not unsafe, so it's not an absolute requirement, it's more of a recommendation.
It is a requirement: code that exposes incorrect PartialEq or Ord implementations can be considered buggy. It's just also a requirement that unsafe code not trigger undefined behavior in the presence of such buggy and incorrect implementations (but it can still panic, crash, loop forever, etc.).
"Must" seems to indicate "this will always be true"
In reality "this will always be true" is never the case; standards are implemented by humans and humans make mistakes. For example, if I run a web server that sends randomly generated nonsense to clients instead of proper HTML, then I am in no way compliant with the HTML spec. Yet it's also expected that the client's web browser would react with a proper error message, as opposed to, for example, deleting all the user's files. Similarly, there are some minimal expectations for how unsafe code is supposed to behave in the presence of buggy safe code, but that doesn't make the safe code any less buggy.
It's an absolute requirement for logical correctness. To me "should" is for "safe code has to handle this misbehaving too".
For example, Iterator::size_hint must express a range containing the actual count() of the iterator. And because that's a must, I'm allowed to do things like "oh, the upper bound is Some(0), so the iterator must be empty" and thus not call next, even though in a misbehaving implementation next might return something.
That's different from if it were just that it should contain the actual count -- at that point I have to call next anyway to see whether it's empty, regardless of what size_hint said, and thus I can only use size_hint for things (like reserve) where incorrect values aren't logic errors.
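A minimal sketch of that reasoning (`sum_or_default` is a hypothetical helper, not a std API): if the upper bound reported by size_hint is Some(0), the contract lets us skip calling next() entirely.

```rust
// Sketch: relying on size_hint's contract. An upper bound of Some(0)
// means the iterator MUST be empty, so we may return without ever
// calling next().
fn sum_or_default<I: Iterator<Item = u32>>(iter: I) -> u32 {
    if iter.size_hint().1 == Some(0) {
        // The contract guarantees nothing will be yielded.
        return 0;
    }
    iter.sum()
}

fn main() {
    assert_eq!(sum_or_default(std::iter::empty::<u32>()), 0);
    assert_eq!(sum_or_default([1u32, 2, 3].into_iter()), 6);
}
```

If size_hint were merely a "should", this shortcut would be a logic error and we would have to call next anyway.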
That said, the implementation should provide a correct estimation
Incidentally, I would improve this documentation -- just saying the implementation should return "correct" values isn't really saying anything. All functions should always return correct values! I would write this as:
The range should contain the number of items subsequently yielded by next().
Or not write anything, because it's already implied by the first statement in the documentation:
Returns the bounds on the remaining length of the iterator.
I don't think you have to call next. The only reason you would assume that you have to is that the method is confusingly named (misnamed): the word "hint" makes it sound non-authoritative. It shouldn't be called size_hint, it should be called size_range. If it were called size_range, it would be clearer that it's OK to stop the iteration as soon as it says Some(0).
If the iterator has already told you there are no more elements, regardless of whether it has done so via a next call or via a size_hint call, I see no reason to think you have to ask next again to make sure. The iterator can change its mind if you redundantly ask it again, but it doesn't mean you have to ask it again. Just like next can return None and then Some if you ask it again, similarly size_hint can return Some(0) and then change its mind if you ask it again, but that doesn't mean you can't stop the iteration after the first query.
True, but it is not expected of the web browser that it attempt its best at repairing the garbage rather than giving an error message or silently ignoring it (the latter can be important for security). Note that a web browser is not the best example, though, as the specs are very permissive, to the point that HTML5 specifies its parser recovery algorithm following Postel's law. Postel's law has been found to be a mistake, as argued in, for example, https://datatracker.ietf.org/doc/html/draft-iab-protocol-maintenance. Instead it has been found that actively rejecting data that violates the specification (in other words, data in conflict with a MUST rather than a SHOULD) improves robustness, as it forces the producer to fix their mistakes rather than forcing their mistakes on every other (future) implementor of the specification.
In general, I don't think it makes sense to say user code "must" do something, unless there are actually no guarantees about what will happen when the code doesn't do it.
For instance, you may implement == to behave as < on your type, i.e. not in a symmetric way. Does Rust allow it? Yes, it allows it -- we know what will happen, it will simply work as a <. So it is not useful to say "you must not do this". It makes sense to advise "we recommend you don't do it", but "you must not do this" sounds as if it's a precondition and we provide no guarantees if you violate it, but it's simply false, it's not a precondition, the language doesn't require it.
It also makes sense to say "HashMap may panic if == is not symmetric" etc. But that's for HashMap documentation to say, not for == documentation. == will work fine if it's not symmetric.
For comparison, C++ specification doesn't say that "your < must be transitive". It's allowed to be non-transitive. std::sort will not work if it's not transitive, but that's sort's precondition, not <'s precondition.
It's not required on the syntactic level, but it is still a hard requirement of the language. Other code can rely on that requirement for correctness.
If you implement PartialEq incorrectly, then all other code is allowed to behave in an almost arbitrary way. My HashMap may return garbage, loop indefinitely, or panic on an element lookup. The only hard requirement is that memory safety must not be violated, which is neither a requirement of HashMap nor of PartialEq, but a general constraint on unsafe code: it must not cause memory unsafety regardless of the behaviour of safe code.
If you say that the requirements on PartialEq are "should" rather than "must", then I cannot rely on them being true. I must separately specify for all my functions which failure modes are allowed for incorrect trait impls, and I must do extra work to uphold those guarantees. Either all of my functions must be explicitly specified to result in arbitrary (but not undefined) behaviour, or I must provide some sanity guarantees like "hashmap lookup always terminates" and do extra work to make sure they're true, like caching the results of comparisons or making sure that the algorithm has a hard upper runtime bound for any impl.
Since there is an infinite number of ways an impl can be broken (e.g. who says it has no side effects?), it is unreasonable to constrain the behaviour of its consumer code without putting hard constraints on the impl itself.
Really? That would be really weird (and unfortunate). So I implement PartialEq on my type to return random values, maybe call it a few times to demonstrate it, then I print "hello world", and I can't know whether my program will actually print "hello world"?
I don't think that's how one would want to specify the language or std to work. But you're right that this is what making it a "hard requirement" on PartialEq would imply.
Maybe you can. That depends on the exact code you wrote. E.g. effects of calling println! will not depend on correctness of PartialEq, since it knows nothing about it. Simply calling PartialEq::eq on your type will also have only the effects that you have specified in its implementation, because the code is still well-defined. And you can always just test and see, and it should be reproducible (given the same seed of the RNG), precisely because your code has no Undefined Behaviour.
But you cannot be sure that a HashMap will behave correctly without testing (or reading its source code), nor can you be sure that the behaviour won't change in the future version of stdlib.
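A sketch of that point, using a made-up type `Flaky` whose eq has side effects and alternates its answer: each call still does exactly what the impl says (the code is well-defined, just strange), and unrelated code like println! is unaffected.

```rust
use std::cell::Cell;

// An eq with side effects that alternates its answer.
// Bizarre, but fully defined behaviour.
struct Flaky {
    calls: Cell<u32>,
}

impl PartialEq for Flaky {
    fn eq(&self, _other: &Self) -> bool {
        let n = self.calls.get();
        self.calls.set(n + 1);
        n % 2 == 0 // true on even-numbered calls, false on odd
    }
}

fn main() {
    let a = Flaky { calls: Cell::new(0) };
    let b = Flaky { calls: Cell::new(0) };
    assert!(a == b);    // first call: true
    assert!(!(a == b)); // second call: false
    println!("hello world"); // still prints, of course
}
```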
If a data structure relies on PartialEq's API contract and an item violates the contract, it might end up looping infinitely (even if a method's API says it'll return), abort, or do other funny things, and thus hello world will never be printed.
If contract A relies on contract B and contract B is violated then A might not be upheld either. Therefore, if we want bug-free programs, it is a requirement that API contracts are upheld.
The point about unsafe is not that unsafe functions cannot rely on certain contracts. Rather, if they do rely on a certain contract and that contract is violated, then the unsafe function can cause UB. This is more a question of which trait implementations unsafe code can trust, not whether the failures have transitive impact.
While Rust cannot enforce the stipulation that the Eq implementation obeys the laws specified,
And while I doubt it is on anyone's radar to resolve that in std itself, verification tools may some day be able to enforce that trait implementations obey the laws specified. Prusti even uses this exact case as an example: Trait contract refinement - Prusti user guide
I wouldn't focus on what will happen if a law is violated; currently, callers are allowed to assume axiomatically that the laws do hold. If we change the docs now, future verification tools will have no contract to actually verify, and all existing callers of these traits would have their assumptions invalidated. I cannot stress enough how massively the latter diverges from current practice, such that changing it seems rather untenable.
That's because (the definition of these) traits aren't part of the language. The requirements of a library trait are purely a library concept.
It is correct to say that an implementation of Eq MUST describe an equivalence relation. If the implementation does not, it is not a conforming Eq implementation.
In fact, I can argue that trait documentation uses MUST in exactly the RFC 2119 way. Even though ISO 8601 says that I MUST NOT write the date January 31st, 2022 as 2022-31-01, I can. I just did. I haven't broken any fundamental constraints of the universe. My HTTP server can return 418 I'M A TEAPOT despite very clearly not producing tea nor even holding water.
It's perfectly valid for safe traits to say that implementations MUST do something. The clarification is what it means to be noncompliant. If a library API wants to conform to the sound qualifier on Rust programs, it MUST tolerate noncompliant implementations of safe traits.
where for the purpose of the sound qualification, tolerate merely means not performing an operation forbidden by the Abstract Machine, and any further guarantees are left to QOI
To me, the word must tends to constrain the condition more than the result. In my experience, when people write "A must do sth", it means they require you to do this, rather than promising that A does sth; for the latter case I think "A will do sth" is more intuitive and popular.
It's fundamentally hard for a language to constrain the behavior of a function; even if you have contracts baked into the language (like D), the best you can do is rely on thorough tests and static checkers. However, I don't think this should prevent us from using must, because should is much weaker than must, and it feels very uncertain if I find that the documentation of something says should everywhere.
Not all technical documents use or should use RFC 2119. The C++ Standard very much does not use 2119, and defines its own terms (it uses shall exclusively, which it defines as a constraint on either the implementation or the program).
Also, both unofficially and as amended in RFC 8174, the 2119 terms apply specifically when written in uppercase, as "MUST" or "SHOULD".
For comparison, the C++ specification doesn't say that "your < must be transitive". It's allowed to be non-transitive. std::sort will not work if it's not transitive, but that's sort's precondition, not <'s precondition.
I think that is a terrible mindset to have. It means that no one can call sort with something that implements the < operator without manually checking that the < operator actually compares the values correctly for that type.
It would make infinitely more sense to place semantic constraints on traits like PartialEq or Ord. After all, if I am writing a method like a hashmap lookup and it takes an equality function, I expect that the function I get will actually be an equality function when I specify that it must implement the PartialEq trait.
If I wanted to work with just any function that takes two values and returns a boolean, I would accept a &dyn Fn(&T, &T) -> bool instead. So, my question is: why even have traits like PartialEq or Ord if we are not going to place any rules on the implementation logic, and if any bugs resulting from some PartialEq not actually being a partial equality are going to be the fault of the consumer who expects a partial equality, rather than of the provider who didn't implement PartialEq as an actual partial equality?
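The contrast can be sketched like this (`find_eq` and `find_by` are hypothetical helpers, not std APIs): the PartialEq bound carries a semantic contract the caller may trust, while the plain predicate carries none.

```rust
// A lookup that trusts the PartialEq contract: `==` is assumed to be
// a proper (partial) equivalence relation.
fn find_eq<'a, T: PartialEq>(items: &'a [T], needle: &T) -> Option<&'a T> {
    items.iter().find(|x| *x == needle)
}

// A lookup taking an arbitrary predicate: no contract at all,
// `pred` is just some boolean function.
fn find_by<'a, T>(items: &'a [T], pred: &dyn Fn(&T) -> bool) -> Option<&'a T> {
    items.iter().find(|&x| pred(x))
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(find_eq(&v, &2), Some(&2));
    assert_eq!(find_by(&v, &|x: &i32| *x > 2), Some(&3));
}
```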