Blog post: Maximally minimal specialization: always applicable impls

Hehe, when writing that I was chanting to myself “don’t get it backwards! don’t get it backwards!” So of course I did.

First off, just having the borrow checker decide "'static or not" is enough to make the result arbitrary -- right now it picks the shortest lifetime it can, which means that even where it could have picked 'static, it will avoid doing so unless forced. (This is necessary to avoid nonsense borrowck errors.)

But, secondly, in general you might need to know whether two lifetimes are equal. Consider:

impl<T> Foo for (T, T) { }

and now you have a type (&u32, &u32). Does it apply or not?
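To make that concrete, here's a hypothetical sketch (the Foo trait and the check and demo functions are made-up names, just for illustration):

trait Foo {}
impl<T> Foo for (T, T) {}

fn check<T: Foo>(_: T) {}

fn demo<'a, 'b>(x: &'a u32, y: &'b u32) {
    // Region inference can shrink both references to some common lifetime so
    // that `T = (&u32, &u32)` matches the impl here -- but a specialized impl
    // keyed on that kind of unification could not be re-checked by codegen,
    // because the lifetimes have been erased by then.
    check((x, y));
}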

Also stuff like this:

impl<T, U> Foo<T> for U { }
impl<'a, 'b> Foo<&'a u32> for &'b u32 where 'a: 'b { }

Now if we have &u32: Foo<&u32>, does the specialization apply?
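Here's a hypothetical sketch of where that 'a: 'b check happens (only the lifetime-constrained impl is shown; writing both overlapping impls would require the specialization feature itself):

trait Foo<T> {}

impl<'a, 'b> Foo<&'a u32> for &'b u32 where 'a: 'b {}

fn requires_foo<'a, 'b>(_key: &'a u32, _val: &'b u32) where &'b u32: Foo<&'a u32> {}

fn caller<'long: 'short, 'short>(key: &'long u32, val: &'short u32) {
    // This type-checks only because `'long: 'short` is declared right here;
    // after lifetimes are erased, codegen has no way to re-establish that
    // fact, so it could not tell whether an impl like this was the right
    // choice for the specialization.
    requires_foo(key, val);
}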


This...took me a while to parse. But that's a fun example. =)

I agree that this is an important use case, and I certainly agree that we don't handle it all that well, and it's certainly true that we had hoped to avoid "splitting" traits into "specialization predicates" and "normal traits". But at this point I don't think it's possible to avoid that split.

The fact is that impls can (and frequently do!) include conditions that we just can't test after erasure. Previously we had hoped to find some solution that basically let the choices made pre-erasure and post-erasure diverge; I now suspect this is fundamentally unsound, and that is roughly the point of my post.

In any case, even if it can be made sound, it has always made me pretty nervous: it implies that figuring out which code will run is non-trivial for users. That worries me in particular because I think people will use specialization not just for optimization but also for semantic choices.

That seems to leave three options:

  • Stop erasing.
  • Somehow make all choices at typeck time.
  • Restrict the set of specialization impls to those that can be decided equally well at typeck time and at codegen time.

We've been discussing the first one ("stop erasing") on this thread. I believe it is quite infeasible. It would also mean that further improvements like NLL are not possible, or at least much more fraught, since they would change runtime semantics. I expect us to make more such improvements. =)

Making all choices at typeck time is, in a sense, a variant of the first option. It would mean that generic functions -- which often lack the context for such a choice -- must push the choice up to their caller. I'm not even sure this can be done, since often the choices will involve compound types that feature regions from within the function. But even assuming it can be made to work, it clearly has the same downside of borrowck and regionck suddenly being in charge of deciding what code executes. (It probably avoids some of the monomorphization explosion, since I imagine we would basically be compiling down to a dictionary-passing style, and in practice the dictionaries we resolve would wind up being the same.)
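To illustrate that "regions from within the function" point, here's a hypothetical sketch (the Process trait and helper function are made-up names): the region in the compound type exists only inside the function body, so the caller never even sees the type the choice would be made for.

trait Process {
    fn process(&self);
}

// A single blanket impl so the sketch compiles; imagine a specialized impl
// for `(T, &'static u32)` existing alongside it.
impl<'l, T> Process for (T, &'l u32) {
    fn process(&self) {}
}

fn helper<T>(t: T) {
    let local = 0u32;
    // `&local` has a region that exists only inside this body, so only
    // `helper` -- not its caller -- could decide which impl applies to
    // `(T, &u32)` here.
    (t, &local).process();
}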

So that leaves us with the third option, which is basically what I described. (Were there any other specific proposals for how to implement such a test? It'd be interesting to compare, but I don't recall anyone giving a detailed proposal.)

Am I missing some alternative?


Not only that: it doesn't magically deliver soundness either, because there are still cases where typeck and trans can disagree.

Just a side note: apart from TrustedLen, another example where 'static-based specialisation would be helpful is https://github.com/rust-lang/rust/pull/46993, so that one could use String::from and Vec::from on compile-time constants like any other references, and the API itself would avoid the allocation.

@nikomatsakis I think there might be a further alternative: consider all lifetimes to be distinct and incomparable unless they must be equal because it's the same type (or unless there are where-clause constraints on lifetimes in scope, although that would require passing this information down to code generation).

It’s a bit tricky, because applying it to all trait resolution would not be backwards compatible, but it might be possible to only apply it to specialization decisions (in both typechecking and codegen, of course).

I'm not quite sure how expressive and desirable this would be, and not even totally sure whether it can be done soundly (although it seems it can be). It might be the simplest approach if it's feasible, though.

Sure, and I think some sort of "dispatchable traits" is the sanest way to go forward - it just needs some language design in order to not be terribly ugly.

For some reason I thought we decided on !dispatchability on the RFC, but I can't find any clear comment talking about that. I should try to ascribe less meaning to lonely unclear RFC PR comments.

Please no.

There are two problems. The first is that the results of lifetime inference are intentionally not precisely specified. For example, with the current version of lifetime inference, the 'static impl would be pretty hard to reach:

impl From<&'static str> for String { /* .. */ }
fn foo() {
    let t = String::from("hello");
    // ^ subtyping is possible between the literal and the function call, so the compiler can (and with today's
    // implementation, will) infer a stack-local lifetime for the "hello" literal, and this code will still allocate.
}

The second is that Rust has lifetime parametricity. This means that if unsafe code passes a lifetime back to user code, e.g. through a for<'a> closure, the unsafe code gets to pick the meaning of that lifetime.

For example, the lifetime passed by Ref::map is supposed to live "as long as the ref lives":

// Ref::map (the `for<'r>` is written out explicitly here; `'b` and `T` come
// from the surrounding `impl<'b, T: ?Sized> Ref<'b, T>` block in std):
fn map<U: ?Sized, F>(orig: Ref<'b, T>, f: F) -> Ref<'b, U> where
    F: for<'r> FnOnce(&'r T) -> &'r U { /* ... */ }

use std::cell::{Ref, RefCell};

fn foo<'a>(r: &'a RefCell<u32>) -> Option<Ref<'a, u32>> {
    let x = 0;
    let ref_ = r.borrow();
    // `Ref::map` is an associated function, so it has to be called as
    // `Ref::map(...)` rather than with method syntax.
    let ref_ = Ref::map(ref_, |inner| {
        // `inner` is an `&u32` that lives as long as the `Ref` lives.

        // If the `Ref` is dropped within `foo`, it lives shorter than `x`,
        // and if it isn't, then it lives longer than `x`, and we have no
        // way of telling which way it is, so we can't precisely specialize
        // within `some_fn`.
        some_fn(inner, &x)
    });
    if random() % 2 == 0 {
        drop(ref_);
        None
    } else {
        Some(ref_)
    }
}

I think there's a similar example with NLL, even without any unsafe code, where you can have a pair of simultaneously-live lifetimes such that, depending on which branch is taken, the first lifetime may end first, the two lifetimes may be equal, or the second lifetime may end first.
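A rough sketch of the shape of that situation (lifetime_dependent and use_it are made-up placeholders standing in for any lifetime-based dispatch):

fn lifetime_dependent<'a, 'b>(_a: &'a u32, _b: &'b u32) {
    // Imagine the impl chosen here depended on how 'a and 'b compare.
}

fn use_it(_v: &u32) {}

fn example(cond: bool) {
    let x = 0u32;
    let y = 0u32;
    let a = &x;
    let b = &y;
    // Both borrows are live at this call, but under NLL how much longer each
    // one stays live depends on which branch runs below, so there is no
    // single answer available at the point where the call is dispatched.
    lifetime_dependent(a, b);
    if cond {
        use_it(a); // in this branch `b`'s lifetime ends first
    } else {
        use_it(b); // in this branch `a`'s lifetime ends first
    }
}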

That means it is impossible to compare lifetimes at runtime - you could construct a "self-referential" case where, if you dispatch as though the lifetimes are equal, they end up unequal, and vice versa. With associated types being specializable, the path from there to transmute is pretty short.


They're not at the moment, and part of this thread is precisely about the suggestion to change that for the 'static lifetime.

I'd say this example should be fine simply by not using specialisation at all - specialisation is meant purely as an optimisation, so it can just be omitted for complex examples like the one you presented, while still being used for explicitly 'static values.

I might be wrong, but it seems most of the counterarguments are about the complexity of specialising between any two arbitrary lifetimes - which is indeed hard if not impossible to do - but it still seems that specialising on an explicit 'static lifetime should be quite possible and useful.

One thing I'm not clear on from the blog post is whether the exception to 'always applicable' for specialization predicates ("the exception") would apply when determining whether a specialization predicate impl itself is valid. Consider:

impl<T> UnconditionalSeek for S<T> where T: UnconditionalSeek { ... }

Does this impl satisfy 'always applicable' for the purposes of implementing a specialization predicate? It does if we're allowed to ignore the specialization predicate on T in accordance with the exception, but the impl certainly doesn't apply unconditionally. The reason I'm unsure (aside from the language around the impl being "unconditional") is the parenthetical in:

every one of their impls must be always applicable (our original predicate)

This could be read to imply that specialization predicate impls have to be 'always applicable' based on the original version of 'always applicable' that existed prior to the introduction of the exception.

It seems as though the version of 'always applicable' that includes the exception would be sufficient to make specialization predicates themselves sound, since it still prevents specialization based on lifetimes/type equality. I don’t know if there’s a broader reason why this wouldn’t work though, such as an interaction with associated types.
