Pre-Pre-RFC: Give trait methods a priority for resolving name clashes

There's a discussion about how to stabilize trait methods that clash with methods from third-party crates. A recent example is Iterator::intersperse, which can't be stabilized because it clashes with itertools::Itertools::intersperse; stabilizing it would break a lot of crates.

My idea is to add a #[lower_priority] attribute for decreasing the "priority" with which trait items are resolved:

pub trait Iterator {
    #[lower_priority]
    fn intersperse(...) {...}
}
When a method name is ambiguous, Rust chooses the method with the highest priority. Currently, trait methods have a lower priority than inherent methods. With the proposed #[lower_priority] attribute, the priority can be decreased even further. This attribute can be added either to a trait method or to a trait as a whole.

With this attribute, standard library traits can be extended without having to worry about breaking third-party crates -- unless third-party crates also use the #[lower_priority] attribute. This can be prevented by making the attribute perma-unstable, effectively allowing it only in the standard library and on Nightly. Another solution might be to allow assigning a priority as an integer:

#[priority(0)]  // default priority of trait methods
#[priority(-1)] // lower priority
#[priority(-2)] // even lower priority

I would, however, recommend disallowing positive values: if people were able to increase their traits' priority, they might be tempted to do so proactively on all traits to prevent future breakage, and I don't want to encourage that. This reminds me of !important in CSS, which is strongly discouraged (though it's not a perfect analogy, since CSS properties have inheritance).

Another partial solution is the supertrait item shadowing RFC, but that RFC seems to address only the use cases with generic bounds:

use itertools::Itertools;

fn foo<I: Itertools<Item = i32>>(iter: I) {
    for _ in iter.intersperse(0) {}
}

but not the use cases without generics:

use itertools::Itertools;

fn foo() {
    for _ in (1..5).intersperse(0) {}
}

To be clear what I hope to achieve with this proposal: I want all additions to standard library traits to be 100% backwards compatible.


A sneaky option is to apply glob/prelude priority here as well.

If you import a new type named Iterator, it shadows the prelude Iterator. It makes some amount of sense to extend this to trait methods: if you use Itertools, its methods could be preferred over Iterator's, because Itertools is explicitly imported while Iterator is only in scope through the prelude.

This doesn't solve the issue for types not in the prelude, but Itertools -> Iterator is also the majority cause of such resolution breakage.


Another potential direction to approach this:

Right now the only way to say "look, I want the itertools one" is to drop all the way to UFCS, which is enough of an ergonomic hit that I don't blame people for not wanting to do it.

Imagine, though, that you could (assuming use itertools::Itertools; is in-scope) just change

-    foo.intersperse(bar)
+    foo.Itertools::intersperse(bar)

That's the kind of change I'd be far more willing to just do, especially if the compiler could cargo fix it for me while the other candidate is unstable.

And that solves it for other things too where neither is "lower" priority -- like foo.Debug::fmt(...) vs foo.Display::fmt(...).

(I'm highly skeptical of priorities that need to be coordinated cross-crates. Especially when the intersperse could be coordinated with cfg(accessible), should that ever start to exist.)


I like this syntax, but it doesn't really solve the problem because crates still need to modify their code to make it compile once intersperse is stabilized.

cargo fix doesn't help if the name clash is in a dependency (which might be unmaintained, or the maintainer might be on vacation).

Furthermore, for cargo fix to work in this situation, there must be some heuristic for deciding which trait to use. It could use a hard-coded list of methods/traits, or it could take into account whether a trait is in the default prelude, whether the method is unstable or was stabilized only recently, etc. I don't particularly like any of these options.


This makes a lot of sense. I considered de-prioritizing prelude items, but I think that there are situations where lower priority is desired for traits that aren't in the default prelude, for example Error. By adding the #[lower_priority] attribute to all traits in the default prelude, we get almost* the same effect as with prelude priority. (The only difference being: if a trait from the default prelude is explicitly imported (e.g. use std::iter::Iterator;), it still has lower priority)

Regarding glob priority, this is a very interesting idea because it enables prelude priority for custom preludes such as bevy::prelude, which are usually used with a glob import. I still prefer my proposal, but I'm willing to change my mind if you convince me that glob/prelude priority is better 🙂

I'm just going to disagree here. It's yet another piece of syntax, re-purposed. I don't recognize why/how it's better than UFCS.

On an unrelated note, I think this kind of priority juggling would lead to the same kind of readability nightmare that custom operators with custom precedences cause. Only that in this case, the type system is even less likely to catch the mistake, especially in the case of widespread and essentially single-signature methods like as_ref().

We should definitely not apply this without a big warning; ideally, it should be opt-in. Otherwise, if random methods of random traits keep popping up in my code, how am I supposed to write reliably, when I can't even know which methods I am invoking compared to which ones I intended to?


as_ref is only used by the AsRef trait as far as I'm aware. The priority is only relevant in case of name clashes of trait methods, inherent methods are not affected by this.

Well, it's such a general name that I don't believe it's unique ecosystem-wide. I'm actually pretty sure we could find many traits featuring an as_ref() method. But it doesn't have to be this specific example: names are in general not unique, even if you consider only traits and not inherent methods. So this definitely is a real issue.

Then these traits probably aren't implemented for types that also implement AsRef, because that would currently cause name clashes, since AsRef is in the default prelude.

The point is less about the specific syntax proposal and more about having a way that still has autoref and doesn't interrupt chaining.

As a random example, this diff is tolerable as a way to disambiguate:

-        .map(|&u| {
+        .Whatever::map(|&u| {
             ty::Const::from_scalar(tcx, Scalar::from_uint(u, size), switch_ty)

Whereas the current way of disambiguating changes every single line, following rustfmt's layout:

-    targets
-        .values
-        .iter()
-        .map(|&u| {
-            //+        .Whatever::map(|&u| {
-            ty::Const::from_scalar(tcx, Scalar::from_uint(u, size), switch_ty)
-                .to_string()
-                .into()
-        })
-        .chain(iter::once("otherwise".into()))
-        .collect()
+    Whatever::map(targets.values.iter(), |&u| {
+        ty::Const::from_scalar(tcx, Scalar::from_uint(u, size), switch_ty)
+            .to_string()
+            .into()
+    })
+    .chain(iter::once("otherwise".into()))
+    .collect()

(Now, some are the same if you ignore whitespace, but it's still a pretty drastic change. And it'd be worse if there was more going on before the map.)


I get that. However, it is syntactic sugar for something that can be done today.

I'd argue that this is indicative of the size of the snippet. More code equals more dependencies between its parts. Couldn't this be solved more cleanly by pulling the code out into a function with the appropriate imports, so that there is no ambiguity in the first place?

My ultimate point is: we can keep adding a myriad of minor features for basically every single case where code looks uglier than we would ideally like, but that's the wrong approach. I've been using Rust for 5 or 6 years for a variety of applications, including backend development, numerics-heavy machine learning, and compilers, and I never had the feeling that 1. this kind of ambiguity gets in the way of writing beautiful code, or 2. there is a need for bending the core language in order to fit every little detail of it to my æsthetic preferences.


Two questions:

  • How do we handle it when multiple traits evaluate to the same priority level? Throw a compiler error?
  • What happens when a method name is disambiguated as proposed, but its behavior differs between the two traits?

To clarify the second point, imagine that (for some reason) the standard library's version of intersperse chose to add elements before and after the given range, so that for something like (0..3).intersperse(8) the standard library returned 8, 0, 8, 1, 8, 2, 8, while the itertools version continued to produce 0, 8, 1, 8, 2. With the compiler error, you're currently forced to choose the particular version of the trait you want to use, but with priority, you might not get a compiler error. All you know is that when you import itertools::Itertools you get the 'right' behavior, but with std::Iterator, you get the wrong behavior. I can see this leading to a lot of erroneous bug reports against both rust and itertools. With the current behavior, I'm guaranteed to have a compiler error, which will force me to make a proper choice.

As an alternative, you might want to do something like use std::iter::Iterator::intersperse as std_intersperse (syntax made up on the spot, bikeshed it as you wish). This solves the problem of the name being super long when using UFCS, while giving you a clue about where the name came from. It doesn't 100% solve the problem, as the name is different, but at least it's shorter and more legible than full UFCS.


When two trait methods with the same name have the same priority, rustc will produce an error. This is no different than today.

Rustc does not look at the behaviour of functions while resolving names. If Iterator::intersperse has a lower priority than Itertools::intersperse, then the latter will be used when both are in scope. I think this is ok because bringing Itertools into scope likely means that you mean to use it.

What happens if one of your dependencies brings it into scope, but you don't do so directly yourself? This could lead to some confusing debugging issues.

A dependency can't bring something into the scope of your crate. You must import all traits you want to use, for example:

use itertools::Itertools;

A dependency could export Itertools in its prelude, which would bring it into scope when you do a glob import such as

use foo::prelude::*;

I don't use glob imports from crates I haven't written myself, but this might still convince me that we need glob priority (which means that glob-imported traits get a lower priority).

But I might want it for Itertools::new_fangled_op, but still want Iterator::intersperse. How am I to know that there's even a lower priority method that I need to use UFCS to access without the compiler telling me I have a problem?


That is also an issue. I think the best solution would be a lint for calling an ambiguous trait method. That lint would show a warning even when the traits have a different priority. This would still solve the compatibility problem (since a warning is not a breaking change), but it would also warn you of a possible bug.

While I agree that not using glob imports is a best practice, using them is a very common practice, which means that this will still be an issue.

This is starting to have a bad code smell to me. It has a lot of potential issues, especially for new users that might not know that there is a clippy lint for it (unless you're planning on putting it into rustc itself), who will be complaining about how something is broken when it's working as intended. I appreciate the thought that went into this proposal, but my personal feeling is that it should be rejected due to the confusion it will cause.


Yes, the idea was to add the lint to rustc.


OK, so the idea is to downgrade a compiler error to a warning in most cases, but sometimes upgrading to a hard error (when the priorities don't work out right), correct?

This really feels like a bad code smell to me. I understand what you're trying to accomplish, and I respect it, but I would much rather force people to fix their code from the outset than cover for them. The reason is that while the priorities may cover the code today, someone may add a dependency in the future that exposes some weirdness in the priorities that the compiler can't handle, turning all of the warnings into hard errors. At that point, all the technical debt the code base was building up (because not all programmers actually fix warnings when they come up) is suddenly in the forefront and needs to be fixed before any further progress can be made. Given that the person/people/team that created the mess may not be around when it all hits the fan, the person suddenly tasked with fixing the problem may have a very difficult time deciding which implementation is the correct one to use.

The advantage of the current behavior is that if there is a conflict, you have to deal with it right now. Your code won't compile, and you can't ignore it. That forces the person making the change (who has the best idea of what needs to be done) to handle the conflict. Priorities seem to be a way of kicking the can down the road, rather than actually solving the underlying problems.