Blog post: intersection impls

So I wrote a blog post about “intersection impls”, which are often called “lattice impls”. It’s the first in a small series I hope to do over this week just jotting down some thoughts I’ve had on the state of specialization and how we might extend it. I’ve opened this thread as a place for comments and discussion.

9 Likes

Intersection impls is a much easier term to grok than “lattice specialization.” This is a feature I want fairly often. In one of my projects, I wrote up an example of how it could significantly improve the API.

From the example, my suspicion is that one of the additional mechanisms is some annotation to explicitly declare an ordering between two overlapping traits (so that e.g. T: Display + Debug will fall to the T: Display impl if no intersection impl exists).
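
To make that concrete, here's a minimal sketch of the situation such an ordering annotation would have to resolve (the trait name Show is made up, and this pair of impls is rejected as overlapping under today's coherence rules):

trait Show { fn show(&self) -> String; }

// Any T that is both Display and Debug matches both blanket impls, so some
// explicit ordering (e.g. "prefer the Display impl") would have to be declared.
impl<T: std::fmt::Display> Show for T { fn show(&self) -> String { format!("{}", self) } }
impl<T: std::fmt::Debug> Show for T { fn show(&self) -> String { format!("{:?}", self) } }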

But here’s another example of unresolvable intersection that you didn’t cover:

impl<F> Whatever for F where F: FnOnce() -> i32 { }

impl<F> Whatever for F where F: FnOnce() -> io::Result<i32> { }

These currently conflict, but it's obviously impossible to define their intersection, because there is no type which represents the intersection of two concrete types. The reality is that they don't intersect; the compiler just isn't able to reason about mutual exclusion from conflicting associated types.
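
The disjointness in that example comes from FnOnce's associated Output type. A sketch of the same shape with an ordinary trait (names made up; this pair is likewise rejected by today's overlap check) may make that clearer:

trait Produce { type Output; }
trait Gadget {}

// A single F can implement Produce with only one Output, so these two bounds
// can never hold for the same F -- yet the overlap check doesn't exploit that
// fact and still reports the impls as conflicting.
impl<F> Gadget for F where F: Produce<Output = i32> {}
impl<F> Gadget for F where F: Produce<Output = String> {}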

And going beyond that, declaring two traits as mutually exclusive and the intersection of their constraints as incoherent is my pet feature. :wink:
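
Roughly what I have in mind, with entirely made-up syntax (nothing like this exists today):

trait MyTrait {}
trait Scalar {}

// Hypothetical: declare that no type may ever implement both traits...
trait Collection: !Scalar {}

// ...which would let the compiler accept these impls as non-overlapping.
impl<T: Scalar> MyTrait for T {}
impl<T: Collection> MyTrait for T {}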

1 Like

Great post as always! Just some nits:

  • The right sides of the boxes in the fixed-width font diagrams aren’t lining up for me (Firefox 49, Ubuntu)
  • In a few places the return type for fn clone when implemented for Option<T> is written as T
  • s/precict/predict

(This is Issues · rust-lang/rust · GitHub)

Yeah & I wrote a somewhat slapdash RFC about it: https://github.com/rust-lang/rfcs/pull/1672

A comment on reddit showed what seems to me to be a perfectly plausible way to resolve the intersection that the blog post says can’t be resolved:

impl<D: Display> RichDisplay for D { ... }
impl<T: RichDisplay> RichDisplay for Widget<T> { ... }
impl<T: RichDisplay> RichDisplay for Widget<T> where Widget<T>: Display { ... }

In this context, it seems like a shorthand for the intersecting impl could be to write the Widget impl like:

impl<T: RichDisplay> RichDisplay for Widget<T> where Widget<T>: ?Display { ... }

By writing it like this, you are formally acknowledging the potential overlap with the impl for T: Display and opting to use the Widget impl in the overlap. The complementary syntax could be:

impl<T: RichDisplay> RichDisplay for Widget<T> where Widget<T>: !Display { ... }

In this case, you’re formally acknowledging the potential overlap and saying you want to default to the T: Display impl in the overlap. Note that the normal problems with negative reasoning wouldn’t apply here, since the !Display bound would only be valid as a result of the T: Display impl, so a trait is never implemented based on negative reasoning.

2 Likes

Heh, it's kind of funny. I was writing that last section with the goal of getting to this example (that is, a counterfactual where clause) -- but I stopped early because it seemed sort of too silly. I am not keen on the idea that one should write an impl for a case that doesn't exist today.

I admit though that I can't marshal the perfect counterargument. It just feels... quite surprising to me, and likely to be annoying in practice. For one thing, it seems like a temptation to write a half-baked impl, since you know it'll never execute. Moreover, if you don't know how Display is implemented, then it's going to be difficult to know whether it's the behavior you would want or not -- I think almost certainly the thing you will want to do is actually to use your own custom RichDisplay impl. You can of course make both impls do the same thing but it's kind of annoying to do so.

Another reason that I don't like it is that, in the event that someone does implement Display for Widget<T>, suddenly your "counterfactual" impl might become identical to the other one. In this case, imagine if someone added impl<T: RichDisplay> Display for Widget<T> (I know they would be unlikely to do so, but what if they did?). Now suddenly the last two impls cannot be differentiated.
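
Spelling that out with the impls from above (the Display impl here is the hypothetical downstream addition being imagined):

// If someone later added this...
impl<T: RichDisplay> Display for Widget<T> { ... }

// ...then these two impls would apply to exactly the same set of types, and
// there would be no way to tell which one is more specific.
impl<T: RichDisplay> RichDisplay for Widget<T> { ... }
impl<T: RichDisplay> RichDisplay for Widget<T> where Widget<T>: Display { ... }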

In writing that up, though, I realized that you can get into similar situations without intersection impls. Consider the example in this gist. Same problem arises there without intersection impls, no?

In general we have said that adding blanket impls of this kind (all things that are Foo are Baz) is not necessarily backwards compatible, but certainly this area possesses many minefields.

Interesting! I believe that it might be annoying, but I don't find it surprising at all. To me it seems intuitive and consistent with the 'openness' of the trait system that my specializations need to account for arbitrary eventualities, even if they aren't possible given the current code available.

I think between this and the conversation we had at RustConf about auto traits, we are working from very different mental models of how constraints behave, and I wonder which predominates among Rust users.

Here's another, sort of arbitrary example: I find it very surprising that this function definition isn't well typed, but I suspect you expect there to be a type error:

fn foo() where String: Into<u32> { }

My intuition is that where clauses are evaluated at the call site. You can only call this function if String: Into<u32>, but I can define it even though that isn't true. This isn't how it works, and it fumbles my mental model a bit, even though of course I would never want to use such a concretely counterfactual where clause.
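
For comparison, the generic version of the same thing does compile today, which is where my intuition comes from (bar is just a made-up name):

fn bar<T>() where T: Into<u32> { }

// Defining bar is fine; the bound is only checked at each call site:
// bar::<u8>();      // ok, u8: Into<u32>
// bar::<String>();  // error: String does not implement Into<u32>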

> Another reason that I don't like it is that, in the event that someone does implement Display for Widget<T>, suddenly your "counterfactual" impl might become identical to the other one. In this case, imagine if someone added impl<T: RichDisplay> Display for Widget<T> (I know they would be unlikely to do so, but what if they did?). Now suddenly the last two impls cannot be differentiated.

Since we were talking about non-local types, surely the orphan rules prevent this? This would imply a circular dependency between these two libraries.

> In general we have said that adding blanket impls of this kind (all things that are Foo are Baz) is not necessarily backwards compatible, but certainly this area possesses many minefields.

That at least solves the problem we were looking at on the negative trait bounds RFC! :slight_smile:

2 Likes

I totally see where you are coming from. I agree it's consistent (modulo the challenges below). It still seems to me like not the ideal setup somehow. Each step that we took to get here was consistent, but I'm not totally happy with the place it is leading us. But now I'm leading into the next blog post or two that I wanted to write -- and I have to go get ready for the day, so I'll leave it there. :wink:

Well, I think your mental model is accurate; and if there are generic type parameters involved you can certainly create functions that could never be called with any actual type. But there's an additional twist, which is that types and predicates and so forth have to be "well-formed" and I think this error falls out of that checking, though I'm not 100% convinced it ought to.

We've actually gone back and forth on errors like these. One complication is that, if such a function is not generic, then we are supposed to generate code for it. But if there are where-clauses that cannot be satisfied, we cannot generate code for it (i.e., it may call functions that don't exist). We could certainly just generate a panic or some such thing (and we do so in some similar cases involving objects).
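
A sketch of the codegen problem: if a definition like this were accepted (it isn't today), the body would be entitled to rely on the bound, yet there is no impl to actually generate code against:

fn foo() where String: Into<u32> {
    // Under the where clause this call would type-check, but no impl of
    // Into<u32> for String exists, so there is nothing to dispatch to.
    let n: u32 = String::from("42").into();
    println!("{}", n);
}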

(One issue is that I think we need to do a large overhaul of our trait matching machinery in the compiler. I've had a rough plan in mind for some time but no time to try to elaborate it into something more real. The current way that we handle caching and tracking what is in scope makes dealing with where clauses like String: Into<u32>, which contain no type variables, rather troublesome.)

Well, in nrc's original scenario, the impls looked somewhat different, and there the orphan rules would not come into play:

trait Scannable {}
impl<T: FromStr> Scannable for T {}
impl<T: FromStr> Scannable for Result<T, ()> {}

So he would have to add:

impl<T: FromStr> Scannable for Result<T, ()> where Result<T, ()>: FromStr {}

But now it's totally plausible that the original crate implements FromStr for Result<T, ()>.
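
For concreteness, a sketch of the upstream impl being imagined -- the orphan rules mean only the standard library could add it, but if it ever did, the counterfactual bound above would suddenly be satisfied:

impl<T: FromStr> FromStr for Result<T, ()> {
    type Err = T::Err;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Parse a T and wrap it; the error type is just T's error type.
        T::from_str(s).map(Ok)
    }
}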

So last night when I wrote that comment I was worried, because that was exactly the scenario I'm trying to enable -- but actually it's not quite the same, though interconverting between arbitrary traits is desirable (in particular, it'd be nice to have impl<T: Display> Debug for T). But other than that, the cases I am most interested in all have a subtrait relationship, which makes this particular case of negative reasoning a non-issue.

1 Like

Another post: Distinguishing reuse from override · baby steps

Today I want to dive a bit deeper into specialization. We'll see that specialization actually couples together two things: refinement of behavior and reuse of code. This is no accident, and it's normally a natural thing to do, but I'll show that, in order to enable the kinds of blanket impls I want, it's important to be able to tease those apart somewhat.
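
As a minimal illustration of the coupling being described, a sketch using the specialization feature as laid out in the RFC (nightly, feature-gated; trait and method names are made up):

#![feature(specialization)]

trait Greet {
    fn greeting(&self) -> String;
    fn greet(&self) -> String;
}

// The blanket impl provides both methods, marked `default` so they can be refined.
impl<T> Greet for T {
    default fn greeting(&self) -> String { String::from("hello") }
    default fn greet(&self) -> String { format!("{}!", self.greeting()) }
}

// The specializing impl *refines* greeting while *reusing* the blanket greet,
// which in turn calls the refined greeting -- the two aspects come bundled together.
impl Greet for String {
    fn greeting(&self) -> String { format!("hello, {}", self) }
}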

This post doesn’t really propose anything. Instead it merely explores some of the implications of having specialization rules that are not based purely on "subsets of types", but instead go into other areas.

1 Like
