About the backwards compatibility of adding trait implementations


A few weeks back, I wanted to add a char to a String. After discovering that this is not currently possible, I tried to find out why and came across an issue about it¹. It was closed with the comment

Seems like something we shouldn’t track, but would probably accept a PR for.

So I forked the rust repo, read the contributing guide and made sure building it without any modifications worked fine, then added two impls: impl Add<char> for String and impl AddAssign<char> for String. After that though, as you can probably guess from the title, rustc did not build anymore: some code in an external crate produced a compiler error when trying to add a Cow<str> to a String. After some debugging, I figured out what had gone wrong: previously, there was only a single impl Add<_> for String (the one for &str), so the compiler allowed a deref coercion from Cow<str> to &str. Once there was a second impl Add<_> for String, the compiler no longer inferred that deref coercion, and the code adding a Cow<str> to a String failed to compile.
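To illustrate, here is a minimal sketch of the kind of code that broke (not the actual code from the external crate, just the same shape):

```rust
use std::borrow::Cow;

fn main() {
    let s = String::from("hello ");
    let c: Cow<str> = Cow::Borrowed("world");

    // With only `impl Add<&str> for String` in scope, the compiler knows the
    // right-hand side must be `&str`, so `&c` (a `&Cow<str>`) is
    // deref-coerced to `&str` and this compiles:
    let r = s + &c;
    assert_eq!(r, "hello world");

    // Once a second `impl Add<_> for String` exists, the expected type of
    // the right-hand side is no longer unique, the coercion is not inferred,
    // and the line above becomes a compiler error.
}
```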

I have created a small repository² that showcases this bug, and while creating it, came up with another case where adding a trait implementation would result in a compiler error in code using the trait, this time without deref.
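The second case is roughly of this shape (simplified here, and the names are illustrative):

```rust
trait Bar {
    fn bar(&self) -> u64;
}

impl Bar for u32 {
    fn bar(&self) -> u64 {
        u64::from(*self)
    }
}

fn call_bar<T: Bar>(t: T) -> u64 {
    t.bar()
}

fn main() {
    // With a single `Bar` impl for an integer type, the literal's type is
    // inferred as `u32` from that lone impl, and this compiles:
    assert_eq!(call_bar(1), 1);

    // Adding e.g. `impl Bar for u64` makes `{integer}` ambiguous; the
    // literal then falls back to `i32`, which has no `Bar` impl, and the
    // call above stops compiling.
}
```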

1: https://github.com/rust-lang/rust/issues/38784
2: https://github.com/jplatte/rust-trait-impl-compatiblity



This sounds like a typical case of “expected inference breakage” to me. What makes you think it’s a bug?



Ixrec is right that this is an inference failure, which we normally have an allowance for, though inference failures in the operators are particularly troublesome because the conversion to the “explicit form” is pretty bad.
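For the Cow<str> case, the explicit form would look something like this (sketching from the description above):

```rust
use std::borrow::Cow;

fn main() {
    let s = String::from("hello ");
    let c: Cow<str> = Cow::Borrowed("world");

    // `s + &c` relies on the compiler inferring the deref coercion; when
    // that fails, you have to spell the conversion out yourself, e.g. by
    // explicitly reborrowing through `Deref` to get a `&str`:
    let r = s + &*c;
    assert_eq!(r, "hello world");
}
```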



@Ixrec: I just thought adding trait implementations was supposed to preserve backwards compatibility. I can’t say that I know this is written down somewhere; maybe it is a baseless assumption. But if it is not the case, it seems very easy to break backwards compatibility accidentally. E.g. if the String + Cow<str> case hadn’t occurred in a dependency of the compiler or of one of the tools built with it, and I had made a PR to add these impls, would somebody have caught this? I doubt it.

I understand that this is an inference breakage. I first thought that the compiler was being too clever by inferring the type when there was only a single impl, though. I suppose in the deref case the compiler could just do a better job at inferring a deref coercion after the second impl is added.

Also, what about the generic literal case? Is this behaviour expected? If yes, again I’d argue that it makes accidental breaking changes very easy to make and hard to catch; if no, what is the desired behaviour? IMHO it doesn’t make sense to call either Bar impl when both exist in my example, which would then mean that the compiler should always complain about calling the generic function with an {integer} whose type can’t be inferred from anything else.



This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.