I know that Option::or and Option::or_else exist, but imagine you're trying to find and transform the valid values in a sequence of them:
let mut ret = Vec::new();
for value in sequence {
    let foo = match value.foo() {
        Some(v) => v,
        None => continue,
    };
    let bar = match value.bar() {
        Some(v) => v,
        None => continue,
    };
    ret.push((foo, bar));
}
ret
Normally you have to use:
let bar = match foo {
    Some(v) => v,
    None => continue /* diverge */
};
Option::or and Option::or_else can't help with this.
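To spell out why: both of them only substitute another Option, and or_else takes a closure, so continue can't escape from the closure back into the surrounding loop (the compiler rejects loop keywords inside a closure). Roughly:

// rejected: `continue` inside of a closure; and even if it compiled, the result would still be an Option
// let foo = value.foo().or_else(|| continue);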
If you could somehow do OR operations on Options and possibly Results, you could do this:
let bar = foo || continue /* diverge */;
and the compiler would detect that the other expression diverges.
This can of course be done with macros (as can almost everything else), but it feels like unnecessary boilerplate.
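For what it's worth, the macro version is short; something along these lines (or_continue! is just a made-up name):

macro_rules! or_continue {
    ($opt:expr) => {
        match $opt {
            Some(v) => v,
            None => continue,
        }
    };
}

// inside the loop from the first example:
// let foo = or_continue!(value.foo());
// let bar = or_continue!(value.bar());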
// Vec::from_iter(sequence.into_iter().filter_map(Value::foo_bar))
// or:
let mut ret = Vec::new();
for foo_bar in sequence.into_iter().filter_map(Value::foo_bar) {
    ret.push(foo_bar);
}
ret
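If Value doesn't already have a combined foo_bar accessor, the same shape falls out of a closure plus ? on Option (a sketch, assuming foo() and bar() each return an Option):

let ret: Vec<_> = sequence
    .into_iter()
    .filter_map(|value| Some((value.foo()?, value.bar()?)))
    .collect();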
| is definitely not lazily evaluated, and that can't be changed in any edition because it would silently and possibly drastically break code by starting to skip expressions that have side effects. You'd need the hypothetical overloadable || (which is not overloadable exactly because you can't make arguments to function calls lazily evaluated, among other reasons). Option could have a magical compiler-provided &&/||, but realistically nobody wants more magical library types.
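For anyone skimming, the eager/lazy difference in one tiny (made-up) example:

fn side_effect() -> bool {
    println!("evaluated");
    false
}

fn main() {
    let _ = true | side_effect();  // prints "evaluated": `|` always evaluates both sides
    let _ = true || side_effect(); // prints nothing: `||` short-circuits on true
}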
This, but also, nobody wants types whose operator overloads perform control flow that's more than an optimization.
With lazy-and and lazy-or, technically there's control flow going on, but it's just to short-circuit the evaluation of later parts that are known not to be needed, i.e. it's purely a performance optimization.
With an operator that can do things like continue, the semantics become more complex: can it still be used outside of a loop, for example? Either answer yields new questions, and the story doesn't get simpler.
It's very much not - you can rely on it for side effects that happen or don't, or even to guard against UB. For example:
// Deliberately contrived unsafe code example
pub fn foo(x: Option<&bool>) -> bool {
    // Safe because of Option's layout guarantees: we get either null or the reference cast to a pointer
    let p: *const bool = unsafe { core::mem::transmute(x) };
    return !p.is_null() && unsafe { p.read() };
}
This code unconditionally has defined behaviour: there are no possible inputs to the function that could cause UB (without UB having already occurred in the first place, by violating the requirements of the &bool inside the Option). If we were to instead use &, the code would have undefined behaviour when called with None, as p.read() would be evaluated with a null p, and that's UB.
The same goes if we do something like print or panic in the second operand: x && dbg!(true) will print true = true iff x is true, and x || panic!("Assertion Failed: {x}") would be a perfectly correct way to handle an assertion.
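Both of those are easy to check in a throwaway snippet (contrived, obviously):

fn demo(x: bool) {
    // dbg!(true) is only evaluated (and only prints) when x is true
    let _ = x && dbg!(true);
    // panic! is only evaluated when x is false
    let _ = x || panic!("Assertion Failed: {x}");
}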
You can also write arbitrary files in procedural macros, and the same thing goes there: you really shouldn't do that.
It's likely to be surprising to someone reading that code. Because of that, I'd call using side effects within the lazy operators a slight abuse of those operators.
Honestly, I'd immediately refactor your example to make the side effects more obvious if it were in a code base I was involved in.
From my understanding, preventing UB is the primary reason that C introduced mandatory short-circuiting in the first place: so that you could write something like if (i < buf_len && buf[i] > 0) in a language that doesn't have automatic bounds checks.
Given this history, it doesn't seem like an abuse of these operators to combine a safety precondition with some kind of check that relies on it. Of course, like any other complicated one-liner, it may be a good stylistic choice to break it up into multiple statements. But that should be a case-by-case judgment call, not a blanket prohibition.
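The Rust analogue of that C idiom still leans on the short-circuit, just to avoid a panic instead of UB (illustrative only):

fn first_is_positive(buf: &[i32], i: usize) -> bool {
    // without the `i < buf.len()` guard evaluating first, `buf[i]` could panic
    i < buf.len() && buf[i] > 0
}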