Today, if a type has only one implementation of a generic trait, the compiler is able to infer the generic type parameter. Unfortunately, relying on this can result in fragile code, since adding a new impl (which is considered semver-compatible) anywhere in the dependency tree would break such code. As the recent time debacle has shown, such breakage can be really widespread.
And this problem does not apply only to trait impls in std. For example, see this issue. A simple update of a hash crate (e.g. sha2) somewhere deep in your dependency tree can break your code if it relies on impl AsRef<[T]> for [T; N] being the only AsRef impl for arrays. In other words, the following seemingly innocent function could get broken by a simple cargo update!
pub fn f(a: &[u8; 16]) {
    let b = a.as_ref();
    println!("{b:?}");
}
Note that this code successfully passes all existing Clippy lints (i.e. clippy::all). An experienced Rust developer might notice the fragility and fix it (e.g. by adding an explicit type annotation b: &[u8]), but it's far from obvious, and a single crate in your dependency tree adding such an impl is sufficient to trigger this fragility.
There are different proposals for how to properly fix this issue (e.g. by introducing a special attribute for selecting the "default" impl for type inference), but I think the suggested warning would be useful in addition to such a fix.
The closest warning under development is this one, but it's much smaller in scope.
Type inference is useful when editing source code, but not when just consuming code of dependencies.
To me such clippy lint (with an auto fix) would be very interesting as a pre-publish preprocessing. It'd be good to have types completely nailed down in code uploaded to crates.io.
Yes, it would. In the near term it can be worked around with an exception for integer literals, while, hopefully, in the long term a proper solution for impl-based inference will be introduced which would finally unblock Index impls for other integer types.
I think a lint against all inference takes it a bit too far. The main problem is not type inference per se, but "spooky action at a distance", i.e. the possibility of breaking code far away from the introduced change while technically remaining semver-compatible.
I would love to see this as a rustc lint, and ideally grouped under a family of lints for "I want to optimize for the robustness of my library for its users". The kind of lint family that a top-1000 library might want to turn on, so that even what's typically classified as "acceptable" (such as inference failure) gets a warning or error so you can design your code to be resilient to it.
There are also similar issues with other kinds of "minor changes" per RFC 1105:
Adding or removing private fields in a struct when at least one already exists.
Except if that turns the struct from a 1-ZST into a non-1-ZST, then it can be fragile. That property is observable: a 1-ZST is allowed to be listed among the fields of a repr(transparent) type. Playground. Big libraries may want to lint on any non-explicit, non-trivial 1-ZST. It's not clear what we should regard as trivial, since inner: () is also a somewhat common pattern to make a type's construction private without explicitly intending a 1-ZST. One fix might be to bump the alignment to 2, but I think it'd be even better if we had an explicit attribute to mark a 1-ZST that's recognized by the compiler; it could be used for such checks and could include recognizing a repr(transparent) wrapper around another 1-ZST. I can't think of a way to guard against this breakage on the consumer side, except maybe Clippy recognizing situations where you could use a PhantomData<T> instead of T for such fields, but I've never come across that in actual code.
Adding any item to an inherent impl.
If such a function item has a name that's commonly used from a trait, then this will change semantics, and will cause type inference differences (or just failures) unless the signatures are the same. Big libraries may want to have cargo/clippy check for overlapping trait and method names?
Loosening bounds on an existing type parameter.
Can also cause the same kind of inference failure as adding new impl candidates: the type variables of a generic type can be deduced when the stronger bound is implemented for only a single type argument, while the loosened bound holds for many. I'm not entirely sure why inference needs the generic type named even if there is no other candidate at all; off the top of my head: Rust Playground
Generalizing a parameter or the return type of an existing function by replacing the type by a new type parameter that can be instantiated to the previous type.
Of course this fails if (most) consumers rely on auto-(de)ref to make the call, in which case the change will cause type inference failures, since a generic (non-self) parameter disables auto-deref. I think this is just the RFC being wrong to declare this minor, to be honest, unless we get the ability to default function generics. Fixing this on the top-1000 consumer side would entail explicit type ascriptions on about _every_ external function call, with an exact match to the current signature.