```rust
let boxed: Box<str> = "a".into();
let boxed_ref = &boxed;
// none of these work! not even basic Box<str> == &str!
"a" == boxed;
"a" == boxed_ref;
boxed == boxed_ref;
String::from("a") == boxed;
String::from("a") == boxed_ref;
```
Is the lack of PartialEq implementations just an oversight, or is there some reason why adding them would be a breaking change or violate some orphan rules?
I'd love to see these (as well as for Rc<str> and Arc<str>), but they'd definitely need a carefully reviewed crater run to make sure they don't introduce inference breakage.
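In the meantime, the usual workaround is to go through str or &str explicitly before comparing. A minimal sketch of comparisons that do compile today:

```rust
fn main() {
    let boxed: Box<str> = "a".into();

    // Dereferencing or borrowing first gives types with PartialEq impls:
    assert!(*boxed == *"a");               // str == str
    assert!(&*boxed == "a");               // &str == &str
    assert!(boxed.as_ref() == "a");        // via AsRef<str> on Box<str>
    assert!(String::from("a") == &*boxed); // String: PartialEq<&str>
}
```

None of this is hard, but it is exactly the kind of ceremony the missing impls would remove.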
Here it is. It doesn't seem to break anything in rustc, but this may need a lot of testing.
PartialEq can't be behind a feature flag. Perhaps this change could be reverted when branching to beta/stable, so that it remains nightly-only for a while?
I think we really should deprecate type inference based on one-impl-only, maybe on an edition boundary. It's just too fragile. Ideally traits can opt-in for this behavior with a marker attribute.
Unfortunately, I just realized that one-impl-only inference is very important. Consider v[0]: numeric literals default to i32 if underconstrained, and indexing with a literal index infers usize only because the single applicable impl uses usize. This is also a major reason why we don't allow the other uN types to implement SliceIndex, since that would widely break inference: previously unambiguous literals would change to i32 (which shouldn't be implicitly try-converted into a slice index).
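To see the one-applicable-impl reasoning at work, compare an unconstrained literal index with an explicitly typed one (a minimal sketch):

```rust
fn main() {
    let v = vec![10, 20, 30];

    // The literal 0 is otherwise unconstrained; it is inferred as usize
    // only because usize is the sole integer type usable as a slice index.
    let first = v[0];
    assert_eq!(first, 10);

    // An index of any other integer type is rejected outright:
    // let i: u32 = 0;
    // let _ = v[i]; // error: Vec cannot be indexed by u32
}
```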
I suppose Index, SliceIndex, and Into could all be tagged to allow one-applicable-impl reasoning, but more directly relevant is that all the operator traits (like PartialEq) would also need to be tagged for the same use case, so that in some_usize == 0 the literal can be constrained to the correct {integer} type. (It's just more familiar to see the problem with indexing.)
Because literal type inference depends on it, I think the number of traits that could confidently go untagged for one-applicable-impl reasoning is much smaller than most would otherwise expect.
Perhaps improving inference specifically around the {integer} pseudotype is another option, and one much more constrained than inference over open type sets, but that's still a hard problem (and we have a vague hope of one day opening up the set of types that numeric literals can produce to user-defined types).
This rapidly came up in the discussions about the one-impl rule, yes. We'd either need some improvements to integer-literal handling, or some rules for Index and similar, or both.
That said, there are other techniques we could use as well:
- An optional, allow-by-default lint for crates that want to opt into turning off the rule in order to be more resilient to type-inference breakage, even before we have ways to make that easier.
- An autofix for that lint that allows using the one-impl rule but then fills in the inferred type. Imagine applying all those autofixes and publishing the result, for robustness, while keeping the source in the simpler form.
- Experiments with inferring usize for underconstrained integers.
This would obviously be a bigger change, but one thing I think would be cool would be to have an IntegerLiteral type somehow.
Then we could say that, over an edition, x < 0 calls PartialOrd<IntegerLiteral> instead. And similarly for Index<IntegerLiteral> and such.
That would finally let my_u8 < 300 just be true (with, obviously, a compiler warning or error, like today). And you can always go back to the old behaviour by type-annotating the literal.
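For contrast, here is what that comparison requires today: the literal gets typed as u8, 300 is out of range for u8, and the program is rejected, so you have to widen by hand (which is exactly what the sketched IntegerLiteral rule would make unnecessary):

```rust
fn main() {
    let my_u8: u8 = 5;

    // assert!(my_u8 < 300); // rejected today: 300 doesn't fit in u8
    // Widening the variable first expresses what was meant:
    assert!(u16::from(my_u8) < 300);
}
```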
Inference failure is one of the reasons; the other is the notion that requiring the types to match leads you either to use matching domains for your types (e.g. make sure you use a u64 input if you're comparing against a u64 field) or to carefully consider the conversions. (That would be a lot more pleasant if we had more specific conversion methods rather than just as.)
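A sketch of what "carefully considering the conversions" looks like with the more specific methods that already exist, From for lossless widening and TryFrom for fallible narrowing, instead of a bare as cast:

```rust
fn main() {
    let field: u64 = 1_000;
    let input: u32 = 1_000;

    // Lossless widening: u32 -> u64 is infallible, so From applies.
    assert!(u64::from(input) == field);

    // Narrowing is fallible, so TryFrom forces the out-of-range case
    // to be handled instead of silently truncating like `as` would.
    assert_eq!(u32::try_from(field).unwrap(), input);
}
```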
This is a tradeoff: on the one hand (assuming we could sort out the inference failures), allowing mixed integer operations would be convenient, and on the other hand, they could lead to potential bugs or code that makes bugs more likely. Thus far, we've largely skipped a full evaluation of that tradeoff on the basis that we don't have a way to avoid having a huge number of inference failures. If we had a way to handle the inference failures, we'd need to give serious consideration to that tradeoff.
I think special support for literals is a separate issue. Having it doesn't solve Box<str>'s comparability problems with non-literal &str and String, and it doesn't solve indexing by counters, like arr[u32_variable].
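What indexing by a counter requires today, as a point of comparison (a minimal sketch):

```rust
fn main() {
    let arr = [1, 2, 3];
    let u32_variable: u32 = 2;

    // arr[u32_variable] does not compile: only usize indexes slices/arrays.
    // An explicit conversion is needed at every use site:
    assert_eq!(arr[u32_variable as usize], 3);

    // Or fallibly, which avoids silent truncation on targets where
    // usize is narrower than the counter type:
    let i = usize::try_from(u32_variable).unwrap();
    assert_eq!(arr[i], 3);
}
```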
What if we had a one_impl_inference_default / inference_default / use_for_literals attribute that is set on the currently existing and useful/needed impls, thus allowing new impls to be added without the change being breaking due to inference?
Unless I'm mistaken, this wouldn't break existing code, because Rust doesn't have implicit casts:
```rust
fn demo(data: &[u8]) {
    let a: usize = 5;
    let b: u16 = 5;
    data[a];    // Already allowed because the type is known (and usize)
    data[5];    // Allowed because the attribute indicates which impl to use by default
    data[b];    // Allowed because there is a u16 impl
    data[5u16]; // Allowed because there is a u16 impl
}
```
That probably doesn't solve all problems though (and you'd have to know which ones break inference in the first place to know where the attribute is needed).