Blog post: Supporting Blanket Impls in Specialization


New blog post on specialization, third in the series that I’ve been working on. Teaser excerpt:

In this post, I want to extend this idea to talk about new rules for specialization that allow overriding in more cases. These rules are a big enabler for specialization, allowing it to accommodate many use cases that we couldn’t handle before. In particular, they enable us to add blanket impls like impl<T: Copy> Clone for T in a backwards-compatible fashion, though only under certain conditions.

Read the whole thing, then come back here and let me know what you think. =)



Regarding the “sour note” about negative reasoning: If we’re adding the impl

impl<T: Display> Debug for T

why not just add also

trait Display : Debug

in the same step?

That shouldn’t be a problem because of the blanket impl, or am I missing a point?



The problem is that maybe there exist in the wild crates that are relying on the fact that one can implement Display without implementing Debug (in the way that I showed), and those may be broken if we say that Display: Debug (put another way: adding a supertrait is a breaking change, even with these specialization rules; ironically, because of specialization).
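Concretely, adding a supertrait is a breaking change even without specialization in the picture: any downstream type that implemented only the subtrait stops compiling. A minimal sketch, where the `Pretty`/`Plain` names are hypothetical:

```rust
// Library v1: no supertrait.
trait Pretty {
    fn pretty(&self) -> String;
}

// Downstream crate: implements only `Pretty`.
struct Plain;

impl Pretty for Plain {
    fn pretty(&self) -> String {
        "plain".to_string()
    }
}

fn main() {
    assert_eq!(Plain.pretty(), "plain");
    // If v2 changed the declaration to `trait Pretty: std::fmt::Debug`,
    // the impl above would stop compiling until `Plain` also
    // implemented (or derived) `Debug`.
}
```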

Of course, at the moment, any such crates are using the unstable feature specialization to do so, so they are fair game to break. :stuck_out_tongue_winking_eye: But once specialization is stable that wouldn’t be true of course.



Ah right, I misunderstood the analogy with Copy and Clone.

trait Dump { }
impl<T: Copy> Dump for T { .. }
impl<T: Clone + Copy> Dump for T { .. }

would never have been possible in the first place, because of Copy: Clone. Never mind.



Here’s a blanket impl case that hasn’t come up in discussion: Arithmetic operator overrides with the user type on the right hand side.

For example, for a numeric matrix M, implement f32 + M<f32>, and in general T + M<T> where T: Add<Output=T>.

Problems with current Rust:

  • User type on the left hand side can be blanket implemented, but the right hand side cannot
  • It becomes impossible to provide T + M<T> exactly when M<T> + T is provided; so not only is the implementation separate, the interface is separate too.
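The asymmetry in the first point comes from the orphan rules. A minimal sketch, with a hypothetical stripped-down M type, showing the impl that is allowed and the one that is rejected:

```rust
use std::ops::Add;

// Hypothetical minimal matrix type standing in for the library's M<T>.
struct M<T>(Vec<T>);

// Left-hand side blanket impl: allowed, because Self is the local type M<T>.
impl<T: Add<Output = T> + Copy> Add<T> for M<T> {
    type Output = M<T>;
    fn add(self, rhs: T) -> M<T> {
        M(self.0.into_iter().map(|x| x + rhs).collect())
    }
}

// Right-hand side blanket impl: rejected by the orphan rules (E0210),
// because Self would be the bare type parameter T:
//
// impl<T: Add<Output = T> + Copy> Add<M<T>> for T { ... }

fn main() {
    let m = M(vec![1, 2, 3]) + 1;
    assert_eq!(m.0, vec![2, 3, 4]);
}
```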

Typical code that consumes our “matrix library” generically:

fn scale_matrix<T>(m: M<T>, factor: T) -> M<T> where T: Copy + Mul<Output=T> {
    m * factor
}

The simple bound on T makes the library’s M<T> + T implementation available.

No trait bound on just T exists that makes the T + M<T> operation available. It is not scalable to include T: Add<M<T>, Output=M<T>> and so on in the generic code’s bounds; that requires us to encode T’s relation to M in particular, when we really only want to work with T’s own properties.

Most numerical libraries will probably provide both the left hand and right hand side operations, but generic code will limit itself to only having the matrix type on the left hand side.

(“fun fact:” ndarray uses 202 impls to implement K @ Array operations where K is a primitive type or Complex<float>.)



Would it be possible to “monkey patch” upstream traits by the following construction:

trait Important {}                                                                 
impl Important for u32 { }
impl<T: Debug + Important> Debug for T {
    fn fmt(...) { ... }
}

Then we’ve effectively overwritten the Debug impl for u32s.

Would this only take effect when Important is in scope?

Or else would it let you patch how u32s are printed throughout the whole binary?



If you try to compile something like that you will find that orphan rules forbid you from implementing a trait defined in another crate for a type parameter.



To my intuition, the “syntactic criteria” seems like a pretty reasonable rule.

To me, it feels like trait Foo: Bar says that all Foo types must implement Bar, whereas impl<T: Foo> Bar for T says that all Foo types happen to implement Bar. Even though the implication is the same, Foo + Bar still feels specialized in the second case but not in the first.

Though the ability to define blanket impls of supertraits, as in the motivating Copy/Clone example, is also sort of wonky (not to say it shouldn’t be allowed!). Because it rarely matters, the compiler’s ability to ‘tie the knot’ in trait impls always surprises me; more than once I’ve accidentally defined a set of trait impls that turned out to be a self-referential cycle.



Sort of related:

We’ve put a lot of work into trying to leave the user with a very small set of cases in which it is backward incompatible to add an impl. What about defining the cases in which it is backward incompatible to remove a bound?

Usually, it seems backward compatible to remove bounds my code no longer requires - after all, I’m just accepting more types, what could go wrong? But it is not backward compatible in several cases:

  • If we’re inferring the bound in other places (as we do for supertrait bounds, and we’ve considered doing for struct bounds), because my clients may be making use of these inferred bounds.
  • If the bounded variable is used in a function return type.
  • If the bound is used to establish a specialization order. Can that be visible only in the presence of client impls, or do the orphan rules prevent it?
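The supertrait case in the first bullet can be made concrete. A sketch with hypothetical `Widget`/`Button` names, where client code leans on the inferred bound:

```rust
// Library v1: `Clone` is a supertrait bound of `Widget`.
trait Widget: Clone {
    fn id(&self) -> u32;
}

#[derive(Clone)]
struct Button;

impl Widget for Button {
    fn id(&self) -> u32 { 1 }
}

// Client code never names `Clone`; it relies on the bound
// inferred from the supertrait.
fn duplicate<T: Widget>(w: &T) -> (T, T) {
    (w.clone(), w.clone())
}

fn main() {
    let (a, b) = duplicate(&Button);
    assert_eq!(a.id() + b.id(), 2);
    // If the library later "relaxed" this to `trait Widget { .. }`,
    // `duplicate` would stop compiling even though its own signature
    // never changed.
}
```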

Should users ever have an expectation that they can remove bounds without breaking? Is there an easy-to-understand rule to determine when it is safe?



I was thinking a bit more about this and realized a few interesting things. First of all, if your type is generic, you still want #[derive(Copy, Clone)]. After all, if you just do this:

#[derive(Copy)]
struct Foo<T> { t: T }

you will get impl<T: Copy> Copy for Foo<T> which – combined with impl<T: Copy> Clone for T – means that T: Copy => Foo<T>: Clone. This is all good, but we still probably want Foo<String>: Clone as well! For that, we need the add’l #[derive(Copy, Clone)].
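For reference, today’s derive already splits the bounds this way: #[derive(Copy, Clone)] on a generic struct emits impl<T: Copy> Copy and impl<T: Clone> Clone, so Foo<String> is Clone without being Copy. A quick check:

```rust
#[derive(Copy, Clone)]
struct Foo<T> { t: T }

fn main() {
    // Foo<String>: Clone (via impl<T: Clone> Clone), but not Copy.
    let a = Foo { t: String::from("hi") };
    assert_eq!(a.clone().t, "hi");

    // Foo<u32>: Copy, so `c` stays usable after the move below.
    let c = Foo { t: 1u32 };
    let d = c;
    assert_eq!(c.t + d.t, 2);
}
```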

Moreover, for efficiency’s sake, we probably want derive to generate 3 impls:

impl<T: Copy> Copy for Foo<T> { }

impl<T: Clone> Clone for Foo<T> {
    fn clone(&self) -> Self { ... self.t.clone() ... }
}

impl<T: Copy> Clone for Foo<T> {
    fn clone(&self) -> Self { *self }
}


A good question. I don’t think we know the answer just yet. =) In particular, I think it may change.

At present though, I believe the only time clients can make use of your bounds is if they are supertrait bounds (i.e., have the form Self: Foo in a trait, or else the shorthand trait Bar: Foo).



I’ve been wondering for the last few days whether it would be a good idea for derives to receive the names of the other derives as inputs; then Copy could generate a Clone impl if Clone isn’t present, and so on.

Maybe contrived examples only, but:

trait Foo {
    fn new() -> Self;
}

trait Bar {
    fn bar(&self);
}

fn baz<T: Foo + Bar>() -> T {
    // ...
}


One issue I’ve repeatedly encountered is the inability to write impls that convert between parameter types, e.g.

struct Foo<T> {
    // ...
}

//~ ERROR conflicting implementations of trait `std::convert::From<X<_>>` for type `X<_>`
//   = note: conflicting implementation in crate `core`
impl<T, U> From<Foo<U>> for Foo<T> where U: Into<T> {
    // ...
}
This happens because the trivial substitution U = T causes this impl to conflict with the blanket impl<T> From<T> for T in core. Of course I can write the conversion in a macro and spam transmute_foo!(my_foo) instead of using my_foo.into(), but this seems like an unnecessary hack.
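In the meantime, one workaround is an inherent method, which sidesteps trait coherence entirely. A sketch, where the `convert` name and the `value` field are hypothetical:

```rust
struct Foo<T> { value: T }

impl<U> Foo<U> {
    // Same shape as the blocked From impl, but inherent methods
    // don't participate in trait coherence, so nothing conflicts.
    fn convert<T>(self) -> Foo<T>
    where
        U: Into<T>,
    {
        Foo { value: self.value.into() }
    }
}

fn main() {
    let a = Foo { value: 7u32 };
    let b: Foo<u64> = a.convert();
    assert_eq!(b.value, 7u64);
}
```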

Would your proposed changes fix this, @nikomatsakis? It seems to me that they would, purely by virtue of the first rule “impls with more specific types specialize other impls”. Otherwise I should start a new topic. :slight_smile:



I like how intuitive the rules discussed are.


As an aside, this – along with the similar example raised by withoutboats and reddit user oconnor663 – strongly suggests to me that traits need to “predeclare” strong relationships, like subtraits but also mutual exclusion if we ever support that, at the point when they are created.

How would one be able to “predeclare” strong relationships?

However, another possibility that aturon raised is to use a more syntactic criteria for when something is more specialized – in that case, Debug+Display would be considered more specialized than Display, even if in reality they are equivalent. This may wind up being easier to understand – and more flexible – even if it is less smart.

I actually like this, if anything because it removes the semver rule and means libraries up the chain can improve their defaults with time without breaking backwards compatibility. Anyhow, the compiler should definitely warn when this happens, since it means that one impl will never be picked anymore (it basically becomes dead code).



I am not sure! One thing that @withoutboats floated – and I haven’t had time to think about if this really addresses the problem – is that you might declare “exclusion groups” instead of fully general !Trait constraints. This would basically be a closed set of traits of which you can implement at most one. For example, maybe I divide up collection traits like so:

exclusion_group {
    trait MapLike { ... }
    trait SetLike { ... }
    trait VecLike { ... }

Here I am declaring that every type can opt into at most one collection “mode”. Now I can have impl<T: MapLike> and impl<T: SetLike>, because the system knows that these are mutually exclusive. Moving an existing trait into or out of such a group would be a breaking change.

Yes, I like the syntactic approach too. I haven’t had time since writing that post to think on it more, but it feels like it may well be just plain more predictable, which is appealing!



I’ve found a use case that badly wants this revised algorithm. It seems like a generally applicable pattern, so I thought I’d post it.

Problem statement:

Suppose you’ve got a middleware system, similar to iron. Some middleware just pass through the Request they receive to the next step in the chain, whereas others may construct additional metadata and attach it to the Request. For example, a timeout middleware won’t modify the request, whereas a cookie parsing middleware will.

Furthermore, some middleware will require that one of these mutative middleware has already been passed through. For example, an auth middleware might require that the cookies have been parsed.

Given these three middleware (timeout, cookies, auth), there are multiple orderings that should be well typed:

  • Timeout -> Cookies -> Auth
  • Cookies -> Timeout -> Auth
  • Cookies -> Auth -> Timeout

The important thing is that auth come after cookies. Any other ordering should not be well typed.

Iron solves this problem using a dynamically dispatched TypeMap, but that answer is so unsatisfying! First of all, it should be a compile time error to run the auth middleware without running the cookies middleware first. But moreover, the whole problem of e.g. grabbing the cookies from the metadata should be resolvable statically if the order of middleware is statically known.


The basic idea is to have a matryoshka nesting-doll pattern of metadata types, which are built up as middleware are run. Each wrapper is expected to Deref to its inner metadata.

For example:

struct Request<T> {
    metadata: T,
}

// The initial form of a request is Request<NoMetadata>.
struct NoMetadata;

// The cookies middleware will instead pass the next layer
// a Request<CookiesMetadata<NoMetadata>>.
struct CookiesMetadata<T> {
    cookies: Cookies,
    inner: T,
}

impl<T> Deref for CookiesMetadata<T> {
    type Target = T;
    // ...
}
The problem this presents is how the Auth metadata will access the cookies. Trivially, this could be done by implementing AsRef<Cookies> for CookiesMetadata<T>, but then if another layer wraps CookiesMetadata<T> in some other type, access to the cookies is lost.

The solution to this is to use specialization to “recursively” define this trait:

trait HasCookies {
    fn cookies(&self) -> &Cookies;
}

// "Base case"
impl<T> HasCookies for CookiesMetadata<T> { ... }

// "Recursive case"
impl<T> HasCookies for T
where
    T: Deref,
    T::Target: HasCookies,
{ ... }

The problem is that these impls do not form an order according to the current specialization rules unless the T in the CookiesMetadata<T> impl is also bounded by HasCookies. But of course the whole point of CookiesMetadata is that T doesn’t matter, because this is the layer that contains the cookies.

However, this change to the rules would solve this problem neatly: CookiesMetadata<T> would be more specialized than T: Deref where T::Target: HasCookies.

Having said that, I am now urgently enthusiastic about this change! Do we think it needs a whole RFC, or could someone implement it through the tracking issue?



@cristicbz has pointed out on twitter that this can be achieved with the strategy that hlist uses. This requires Peano-encoded indices, which I don’t really like because they feel like quite a hack. Here’s a working example though:

use std::ops::Deref;

pub struct Request<T> {
    pub metadata: T,
}

pub struct NoMetadata;

enum Here { }
struct There<X>(X);

pub trait HasCookies<X> { }

impl<X, T> HasCookies<There<X>> for T
where
    T: Deref,
    T::Target: HasCookies<X>,
{ }

pub struct CookieMetadata<T> {
    metadata: T,
}

impl<T> HasCookies<Here> for CookieMetadata<T> { }

impl<T> Deref for CookieMetadata<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.metadata }
}

It’s interesting that this change to specialization enables it to replace Peano numbers in this instance (a similar strategy could probably be applied to hlist, though I haven’t investigated).

The specialization strategy requires a PhantomData holding the metadata parameter in any middleware component, so that it can be monomorphized to its position in the metadata chain. The peano strategy will require an additional parameter for every additional bound it places on that metadata, so that it can be monomorphized over its distance from where that metadata was inserted.

For example:

struct AuthMiddleware<M, X>(PhantomData<(M, X)>);

impl<M, X> Service for AuthMiddleware<M, X>
where
    M: HasCookies<X>,
{ ... }
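For what it’s worth, the Peano encoding above does resolve through nested wrappers, with the index parameter inferred. A self-contained sketch, where the stub Cookies type and the TimeoutMetadata pass-through layer are my own assumptions:

```rust
use std::ops::Deref;

// Stub standing in for a real cookie jar.
pub struct Cookies(pub u32);

// Peano-encoded index types.
pub enum Here {}
pub struct There<X>(X);

pub trait HasCookies<X> {
    fn cookies(&self) -> &Cookies;
}

pub struct CookieMetadata<T> {
    cookies: Cookies,
    inner: T,
}

// Base case: the cookies live right here (index `Here`).
impl<T> HasCookies<Here> for CookieMetadata<T> {
    fn cookies(&self) -> &Cookies { &self.cookies }
}

impl<T> Deref for CookieMetadata<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.inner }
}

// Recursive case: defer one layer down (index `There<X>`).
impl<X, T> HasCookies<There<X>> for T
where
    T: Deref,
    T::Target: HasCookies<X>,
{
    fn cookies(&self) -> &Cookies { (**self).cookies() }
}

// A hypothetical pass-through layer wrapped around the cookies layer.
pub struct TimeoutMetadata<T> { inner: T }

impl<T> Deref for TimeoutMetadata<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.inner }
}

fn main() {
    let md = TimeoutMetadata {
        inner: CookieMetadata { cookies: Cookies(7), inner: () },
    };
    // The index parameter (here `There<Here>`) is inferred.
    assert_eq!(md.cookies().0, 7);
}
```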



I was (re)reading Foundations of Path-Dependent Types, with an interest in seeing how closely the Scala patterns outlined in that paper could be mimicked in Rust.

While implementing the Animal/Lion/Cow example, it seems that the first major stumbling block is exactly this lack of specialization for blanket impls. So simply as an additional example, and an attempt to connect our specialization thinking with what Scala has previously done, I give you: Playground link

If there’s a better way to express this concept in Rust, please let me know.



This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.