Shipping specialization: a story of soundness

Rust’s impl specialization is a major language feature that landed on nightly after Rust 1.0 but has yet to be stabilized, despite strong demand.

Historically, there have been three big blockers to stabilization:

  • The interplay between specialization rules and coherence, which I resolved in an earlier blog post.

  • The precise ways in which specialization employs negative reasoning, which will be resolved by incorporating ideas from Chalk into the compiler.

  • The soundness of specialization’s interactions with lifetimes. The RFC talks about this issue and proposes a way to address it, but it has never been implemented, and early attempts to implement it in Chalk have revealed serious problems.

Together with nmatsakis, withoutboats, and others, I’ve been wrestling with these soundness issues.

Spoiler alert: we have not fully solved them yet. But we see a viable way to ship a sound, useful subset of specialization in the meantime. Feel free to jump to “A modest proposal” in the post if you just want to hear about that.

This blog post is an attempt to write up what we’ve learned so far, in the hope that it will clarify our thinking, and maybe open the door to you cracking the nut!

http://aturon.github.io/blog/2017/07/08/lifetime-dispatch/

15 Likes

cc @nikomatsakis, @withoutboats, @eddyb, @arielb1, @ralfjung, @sgrif

Nice writeup! Also, the approach looks totally sound to me; it's the most conservative one I can think of.

The problem is that type check can no longer tell trans to use the specialized impl in the call to Assoc::default, but it is still assuming that the specialized impl is used externally (i.e., in the build function).

Oh wow, that's a subtle one (and we missed it at the all-hands, didn't we?). Good catch.

Doesn't this example show that not having any lifetime-based trans-blowup cannot possibly be sound? Either indirect::<&'a i32, &'b i32> (for 'a != 'b) is translated separately from indirect::<&'a i32, &'a i32> (resulting in blowup), or they both are translated the same way (resulting in typeck making wrong assumptions about the return type of this function).
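
For readers without the post in front of them, the example under discussion has roughly the following shape. This is a paraphrase, not the post's actual code: the trait name and impl bodies here are reconstructed for illustration.

#![feature(specialization)]

trait Spec {
    type Assoc: Default;
}

// Blanket impl: the associated type defaults to ().
impl<T> Spec for T {
    default type Assoc = ();
}

// Specialized impl: applies when both components are the same type,
// which after unification also forces their lifetimes to be equal.
impl<T> Spec for (T, T) {
    type Assoc = bool;
}

fn build<T: Spec>() -> T::Assoc {
    T::Assoc::default()
}

// When called as indirect::<&'a i32, &'b i32>, typeck must treat the
// return type as abstract, since it cannot prove 'a == 'b; trans erases
// the lifetimes, so both instantiations look like (&i32, &i32) and hit
// the specialized impl.
fn indirect<T, U>() -> <(T, U) as Spec>::Assoc {
    build::<(T, U)>()
}

fn main() {}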

Fascinating post as always. This is exactly the sort of complicated but important issue I love reading about.

However, in these cases, we trigger an error-by-default lint to warn that a possibly-applicable specialization is not being used. (This needs to be a lint, not a hard error, because the relevant impls aren’t always under your crate’s control.)

Is that supposed to say "warn-by-default", or are error-by-default lints that somehow aren't hard errors an actual thing I didn't know about?

The lint would just have the deny level by default, i.e. the same behavior as if everyone had #![deny(lint_name)] in their crates, but if you use #[warn(lint_name)], or even #[allow(lint_name)], it would get relaxed.

There is a stricter level than deny, called forbid, and that one cannot be relaxed, i.e. you can put #![forbid(unsafe_code)] and nothing in the crate can allow unsafe code for just a module/function/etc., whereas with #![deny(unsafe_code)] it’s possible.
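
For concreteness, here is a minimal sketch of the difference, using the unsafe_code lint:

#![deny(unsafe_code)]

// deny at the crate level can be relaxed locally:
#[allow(unsafe_code)]
mod escape_hatch {
    pub fn reinterpret(x: u32) -> i32 {
        unsafe { std::mem::transmute::<u32, i32>(x) }
    }
}

// With #![forbid(unsafe_code)] as the first line instead, the
// #[allow(unsafe_code)] attribute above would itself be a compile error.
fn main() {
    println!("{}", escape_hatch::reinterpret(1));
}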

2 Likes

It seems to me that the root cause of the issue is that in some cases type check and trans make different choices because they have different inputs. The obvious solution is for type check to tell trans what the decisions are, so that trans does not try to come up with new stuff. Could you please explain in more detail what the issues are with that solution? Also, the way I see it, lifetimes are purely compile-time constructs. I don’t understand why trans needs them (or even sees them at all). Type check should provide fully typed functions to trans. My opinion is that it is probably worth discussing this direction, even if it is the more complicated solution. I’d prefer a general redesign of this area of the compiler over a complicated set of rules.

1 Like

This is what the post gets into around monomorphization. To perfectly dispatch specializations you need the concrete type, including lifetimes. Typeck is non-monomorphic, so it might still have type variables; trans is monomorphic, but by then all lifetime variables have been erased.

At least with the current version of specialization, typeck shouldn't really care which impl gets selected - it can only see the specialized associated types as abstract types and access them through their bounds. The problem is that if typeck "sees an associated type as an abstract type", it relies on it always being the same abstract type.
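
A small sketch of what "seeing an associated type as an abstract type" means in practice (the names here are made up for illustration):

trait WithAssoc {
    type Assoc: Default + PartialEq;
}

// typeck never needs to know which impl provides T::Assoc here; it
// only uses the declared Default and PartialEq bounds. But it does
// assume every mention of T::Assoc below denotes the *same* type --
// an assumption that breaks if trans later resolves the impl
// differently than typeck did.
fn check<T: WithAssoc>() -> bool {
    let a = T::Assoc::default();
    let b = T::Assoc::default();
    a == b
}

struct Unit;

impl WithAssoc for Unit {
    type Assoc = u8;
}

fn main() {
    assert!(check::<Unit>());
}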

It's even worse than that - you can make the lifetimes late-bound and prevent specific translation:

use std::marker::PhantomData;

// Late-bound lifetimes: 'a and 'b are chosen afresh at each call site,
// so there is no single instantiation that trans could translate specially.
fn indirect2<'a, 'b>(_x: PhantomData<*mut (&'a (), &'b ())>) {
    indirect::<&'a i32, &'b i32>()
}

Example (T) for the problem with lifetime-monomorphizing strategies:

#![feature(specialization)]

trait TryCoerce<T> {
    fn try_coerce(self) -> Option<T>;
}

impl<U, V> TryCoerce<U> for V {
    default fn try_coerce(self) -> Option<U> { None }
}

impl<U> TryCoerce<U> for U {
    fn try_coerce(self) -> Option<U> { Some(self) }
}

// type parameters needed to work around smart-ass type checking.
fn coerce_lifetime_into<'a, 'b, U, V>(x: &'a U, y: &mut Option<&'b V>) {
    *y = <&'a U as TryCoerce<&'b V>>::try_coerce(x);
}

// This is a single function pointer. We can pass it around to C code. If
// we want "reliable" specialization, it must somehow tell whether the input
// and output lifetimes are equal
pub static COERCE_LIFETIME_INTO: for<'a, 'b> fn(&'a u32, &mut Option<&'b u32>)
    = coerce_lifetime_into;
    
fn main() {}

I don’t follow the first example in Going deeper.

The following is valid in current rust:

trait Special {
    fn special(&self);
}

impl<T> Special for (T, T) {
    fn special(&self) {
        println!("specialized");
    }
}

fn pair<'a, 'b, T>(a: &'a T, b: &'b T) {
    (a, b).special()
}

fn main() {
    pair(&"hi", &"there");
}

So the tuple impl can be used even if the lifetimes aren’t identical, and trans doesn’t need to know what the lifetimes are. Furthermore, given that this is valid, I would find it surprising if adding a more general impl to that code caused Rust to start using it instead.

As a naive user, I expect specialization to work as follows:

  1. Find the most specific specialization without taking lifetime bounds into account.
  2. Validate that the lifetime bounds are met, and give a compile error if they aren’t.

That fits better with my mental model of how rust currently works, but maybe my mental model is flawed.

1 Like

I believe the first example in the "Going deeper" section is not a place where type check and trans actually disagree, but a place where they theoretically could disagree depending on how we decide specialization should work and how many layers of indirection are involved.

However, type check could deduce that the more specialized impl always applies when invoking special in pair, and you could imagine communicating that information down to trans.

I think the reason he was bringing this up at all was to demonstrate that the soundness issues described in the rest of the post are not necessarily unique to associated types, and that's why a truly complete solution can't focus purely on associated types but must deal with the type checker vs. trans divergence more generally.

As a naive user, I expect specialization to work as follows:

  1. Find the most specific specialization without taking lifetime bounds into account.
  2. Validate that the lifetime bounds are met, and give a compile error if they aren't.

I basically agree with this. I suspect there might be some legitimate exceptions like having a default impl for &str and a specialization for &'static str, or having a default impl with two unconstrained lifetimes and a specialization for when the lifetimes are the same or one is known to outlive the other. I can't come up with a concrete use case for doing either of those things myself, but I doubt it's safe to assume there aren't any.
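
To make that concrete, here is the kind of impl pair I have in mind; this is hypothetical, and whether such lifetime-dependent impls can be accepted and dispatched correctly is exactly the open question:

#![feature(specialization)]

trait Describe {
    fn describe(&self) -> &'static str;
}

impl<'a> Describe for &'a str {
    default fn describe(&self) -> &'static str {
        "a borrowed str of some lifetime"
    }
}

// Applies only when the lifetime is exactly 'static: a purely
// lifetime-based specialization.
impl Describe for &'static str {
    fn describe(&self) -> &'static str {
        "a 'static str"
    }
}

fn main() {}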


My only concern with the proposed subset for stabilization is that the post sounds like it's proposing we stabilize at least some cases where the behavior is expected to require changing in the future, and get away with that via lints and epochs. As much as I want epochs, for a feature like this that's part of the core trait system and has massive ramifications for optimally efficient composability across the ecosystem, I don't like the idea of relying on epochs to "get away" with stabilizing unstable behavior...assuming that's a remotely accurate summary of the proposal. It also wasn't clear to me from the blog post whether there is any alternative in between "stabilize some stuff we'll have to epoch away later" and "never stabilize anything until we have a perfect solution".

Could you go into more detail on why "lifetime specialization" can't be a hard error? The post only mentions in passing that:

(This needs to be a lint, not a hard error, because the relevant impls aren’t always under your crate’s control.)

But I don't see how not owning all the impls means it can't be as hard of an error as, say, the orphan rules.

The correct solution seems to be to not erase lifetimes but to pass them through to monomorphization, perhaps taking monomorphization out of trans and doing it as a separate MIR pass beforehand.

The only thing that matters, however, is whether lifetimes are 'static and how they compare to each other. That means they could be treated as abstract numbers, without region information, together with a partial order between them. Monomorphization would then normalize them so that the first lifetime appearing in the signature being monomorphized is renamed to “0”, the first different one to “1”, and so on, with the ordering information added to the key used for monomorphization (and symbol mangling).

This basic normalization should already reduce code-generation explosion, but an extra pass could be added before monomorphization that determines which lifetime comparisons specialization could actually depend on, allowing a more powerful normalization that takes them into account.
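
A toy illustration of the renaming step, with lifetimes stood in for by plain integers rather than the compiler's real representation:

use std::collections::HashMap;

// Rename lifetimes to 0, 1, ... in order of first appearance, so that
// signatures equal up to lifetime naming share one monomorphization
// key. (The outlives ordering would be added to the key separately;
// this sketch handles only the renaming.)
fn normalize(lifetimes: &[u32]) -> Vec<u32> {
    let mut seen = HashMap::new();
    let mut next = 0;
    lifetimes
        .iter()
        .map(|&lt| {
            *seen.entry(lt).or_insert_with(|| {
                let n = next;
                next += 1;
                n
            })
        })
        .collect()
}

fn main() {
    // fn(&'x A, &'y B, &'x C) and fn(&'p A, &'q B, &'p C) get the same key:
    assert_eq!(normalize(&[7, 9, 7]), normalize(&[1, 4, 1]));
}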

Then the lifetime assignment rules need to be made more precise, probably defined such that a lifetime is assigned to be, if possible, equal to the largest other lifetime it can be equal to (so if it can be 'static, it should be). Or, taking a global view, lifetimes are assigned so that the number of distinct lifetimes, including 'static, is minimized. Not totally sure these criteria are correct, though.

Of course this is assuming that lifetime specialization is a desirable feature in the first place, which for instance means being able to have a function that takes &'a T and turns it into a Cow, either assigning to the Borrowed variant or cloning and assigning to the Owned variant depending on 'a.
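
As a sketch of that Cow use case (hypothetical API, relying on exactly the lifetime-based dispatch whose soundness is in question):

#![feature(specialization)]

use std::borrow::Cow;

trait IntoCow<'a, T: Clone + 'a> {
    fn into_cow(self) -> Cow<'a, T>;
}

impl<'a, 'b, T: Clone + 'a> IntoCow<'a, T> for &'b T {
    // Default: nothing relates 'b to 'a, so clone into the Owned variant.
    default fn into_cow(self) -> Cow<'a, T> {
        Cow::Owned((*self).clone())
    }
}

impl<'a, T: Clone + 'a> IntoCow<'a, T> for &'a T {
    // Specialized: the borrow already lives for 'a, so keep it Borrowed.
    fn into_cow(self) -> Cow<'a, T> {
        Cow::Borrowed(self)
    }
}

fn main() {}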

2 Likes

You're squeaking by based on lifetime variance here. It's trivial to construct an example where variance doesn't apply:

trait Special {
    fn special(&mut self);
}

impl<T> Special for (T, T) {
    fn special(&mut self) {
        println!("specialized");
    }
}

fn pair<'a, 'b, T>(ts: &mut (&'a T, &'b T)) {
    ts.special();
}

fn main() {
    pair(&mut (&"hi", &"there"));
}

Playground

Mutability forces the lifetimes to be invariant, so now the impl doesn't apply unless the lifetimes are identical.
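
To spell out the variance point: with shared references, each component can independently be shrunk to a common lifetime, which is why the (T, T) impl applied in the earlier example. A minimal sketch of that coercion:

// Covariance lets &'b T and &'c T both shrink to &'a T, so the pair
// can be treated as (&'a T, &'a T). Behind &mut, this coercion is
// not allowed, because &mut makes its referent's lifetimes invariant.
fn shrink<'a, 'b: 'a, 'c: 'a, T>(x: &'b T, y: &'c T) -> (&'a T, &'a T) {
    (x, y)
}

fn main() {
    let (a, b) = (1, 2);
    let _ = shrink(&a, &b);
}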

The comment on monomorphization and lifetime erasure is interesting in that, fundamentally, the problem is that at no point during compilation do we have a “completely typed” program, with both lifetime and type parameters fully resolved. If we had such a pass we could, with a small hole*, just allow lifetime-based specialization. It would be a terrible UX to actually try to depend on it, and we’d probably want to warn when you have those impls and leave it out of our stability guarantee. But it would be sound.

*higher-rank function pointers, which would just have to perform no lifetime-based specialization for the parameters they’re higher-rank in.


I’m worried that if we stabilize specialization without a story for assoc_specialization, we will box ourselves in on our options for assoc_specialization, because some choices would involve changing the specialization order of lifetime-based specializations.

3 Likes

@aturon: a very simple and naive question from someone who’s interested but not very likely to crack the nut: are there any known examples of soundness issues due to interactions between specialization and lifetimes that do not involve 'static? From the various explanations I’ve read so far, it looks like 'static is the only value that creates problems (and as far as I know it is the only literal lifetime in Rust).

@burakumin here is one that does not involve 'static.

#![feature(specialization)]

// Convert &'a T to Self
trait FromRef<'a, T: ?Sized> {
    fn from_ref(r: &'a T) -> Self;
}

// &'a T can be converted to &'a T
// But with specialization this impl incorrectly converts &'a T to &'any T
impl<'a, T: ?Sized> FromRef<'a, T> for &'a T {
    fn from_ref(r: &'a T) -> Self {
        r
    }
}

// &'a T cannot be converted to any other type
impl<'a, T: ?Sized, R> FromRef<'a, T> for R {
    default fn from_ref(_: &'a T) -> Self {
        unimplemented!()
    }
}

fn extend_lifetime<'any, T: ?Sized>(data: &T) -> &'any T {
    fn helper<T: ?Sized, R>(data: &T) -> R {
        // If R is &'any T then convert &T to &'any T
        R::from_ref(data)
    }
    // Instantiate helper::<T, &'any T>
    helper(data)
}

fn main() {
    let unsound = {
        let s = "specialization".to_owned();
        extend_lifetime(s.as_str())
        // s is dropped
    };
    // reference is still ok
    println!("{:?}", unsound);
}

1 Like

Anything which constrains any lifetimes is sufficient. Using a literal lifetime is one way (as you note, 'static is the only one), but, as in @dtolnay’s example, having the same lifetime appear twice in the signature of the impl also constrains those two appearances to be equal to one another.

Some possibly relevant literature regarding the conflict between associated types and overlap / specialization from a Haskell perspective: Type Equalities, Disequalities and obsoleting of Overlapping Instances

If a type class includes associated data or types, overlapping is unsound -- as in segmentation-fault unsound.

I guess open world + negative reasoning makes a rather nasty combination. Something probably has to give, but I do hope it's not lifetime parametricity.

1 Like

The orphan rules are our protection against that problem from Haskell: given a fully concrete constraint (e.g. i32: Clone), no future crate can add a new impl for that constraint. This is part of what’s great about the orphan rules.

I have noticed, though, that there’s a symmetry between this problem and that problem in Haskell, which is that we don’t have complete information about a constraint at any one point in time.

I'm not sure. With our current type system, the "only" issue is that reliable lifetime specialization destroys lifetime parametricity, and therefore our compilation strategy and ABI. A Rust interpreter could implement lifetime specialization, including associated types, fully reliably.

However, with NLL and its "conditional outlives relationships", it might not actually be possible to know how lifetimes are related before the code that depends on them is run.