# Can `Pin` own its referent instead of borrowing mutably?

I’d also like to make a higher-level comment:

You seem to be worried that we have a combinatorial explosion of references. I fully sympathize with that! This combinatorial explosion of references is paralleled by a combinatorial explosion of “typestates” and invariants, and that worries me very much. It’s why I was opposed to shared pinned references. So I very much agree there is a problem here with `Pin` that I’d like to see solved some day. I just think that sizedness is not the solution.

On the formal model side, I actually have some thoughts for how to reduce the number of invariants. At some level it’s just a mathematical trick, but I think it could actually be nice. Instead of having four of them (`T.own`, `T.shr`, `T.pin` and `T.shrpin`), we could have just two of them, with types like

```rust
enum PinState { Unpinned, Pinned }

// This trait defines which invariants make up a Rust type.
// It's an unsafe trait because the invariants actually have to satisfy
// some axioms, as discussed in my blog posts.
unsafe trait Type {
    fn own(self, pin: PinState, ptr: Pointer);
    fn shr(self, pin: PinState, ptr: Pointer, lft: Lifetime);
}
```

(I am using you as a guinea pig for an even more Rust-y syntax for all this math. Let’s see how that goes.)

So, it wouldn’t actually be four invariants, it’d just be two parameterized invariants. That will probably be much more convenient to work with.

Now, I said this is just a mathematical trick because the amount of information is still exactly the same – instead of two functions, I have one function taking a two-element type. That’s the same thing. But it may be informative, and it may even feed back into language design. This approach invites us to think of `pin` not as a reference type, but as a reference modifier. So, there wouldn’t be `&pin`, there would be `&[list of modifiers]`. We could have `&pin`, `&mut pin` and `&move pin` (assuming `&move` ever becomes a thing). Maybe even `mut` and `move` could become a modifier, so a reference is determined by

• `mut`, `move` or shared (with no keyword)
• `pin` or not

There could be some kind of modifier polymorphism as well—something we want anyway for shared and mutable references, to avoid writing every function on slices twice.

Given that the invariants of the owned and the pinned typestates are meaningfully different, I don’t think we want to conflate concepts here. But it would indeed be nice to be able to talk about references in a more general way, without stating the exact typestate the referent is in.


Sorry, I guess I wasn't clear enough.

Calling a generator function returns, by value, a type `MyAnchor` satisfying `MyAnchor: Anchor, MyAnchor::Inner: Generator`. For clarity I'll refer to `MyAnchor::Inner` as `MyGenerator`.

`MyAnchor` is not a pointer. In fact, it contains the generator function's state, and is sort of the "real" type of the generator, whereas `&MyGenerator` is a "fake" type that actually points to `MyAnchor`. (`MyGenerator` never exists as a value itself.)

The reason it makes sense to have a separate type is to represent the fact that the generator can be moved before being pinned. So `&mut MyAnchor` is like `&mut MyGenerator` under the existing design, whereas `&mut MyAnchor::Inner` is like `Pin<MyGenerator>`.

`MyAnchor`, the real type, is `Sized`. `MyGenerator`, the marker type, is `!DynSized` which means you can't try to deallocate it – but that's okay, because it should only ever exist as a reference.
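As I understand it, the shape of that two-type split is roughly the following. This is a compilable sketch with hypothetical names: on nightly, `MyGenerator` would be an `extern type` (and hence `!DynSized`); here a `#[repr(transparent)]` newtype stands in so the pointer cast is sound and the example runs on stable.

```rust
// The "fake" referent type; never exists by value in the real design.
#[repr(transparent)]
struct MyGenerator(u32);

// The "real" type: holds the generator's state, freely movable until pinned.
#[repr(transparent)]
struct MyAnchor {
    state: u32,
}

trait Anchor {
    type Inner: ?Sized;
    // Unsafe: the caller promises the anchor will never move again.
    unsafe fn get_mut(&mut self) -> &mut Self::Inner;
}

impl Anchor for MyAnchor {
    type Inner = MyGenerator;
    unsafe fn get_mut(&mut self) -> &mut MyGenerator {
        // Both types are repr(transparent) over u32, so this cast is sound
        // here; the real design casts to the extern type instead.
        &mut *(self as *mut MyAnchor as *mut MyGenerator)
    }
}

fn demo() -> u32 {
    let anchor = MyAnchor { state: 7 };
    let mut moved = anchor; // fine: the anchor is movable before pinning
    // Pretend `moved` has now been pinned (e.g. boxed by a PinBox):
    let gen: &mut MyGenerator = unsafe { moved.get_mut() };
    gen.0
}

fn main() {
    assert_eq!(demo(), 7);
}
```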

True, but for now it is less ergonomic, and in the long term it still means twice as many reference types. Also, unless the unconditional `Deref` impl on `Pin` is removed, a backwards compatible builtin version won't work properly with self-referencing types containing interior mutability.

It is a local extension. It may hypothetically be incompatible with some existing unsafe code that uses `size_of_val` the way the `Box`-to-`Rc` conversion does (that is, as a size to memcpy), but starting from `&mut T` instead of some kind of owned `T`, and for something other than swapping (which is why your example below is OK). As I said, I suspect that no such code exists, and in any case the `DynSized` RFC would fix this properly. Also, the same issue applies to all uses of `extern type`.

If I understand correctly, this would not be unsound under my proposal. Under the initial implementation where `size_of_val` would return 0, it would succeed but do nothing. If the `DynSized` RFC is implemented, it would trigger the `size_of_val` panic, but `swap_unsized` would also get linted for using `size_of_val` without a `T: DynSized` bound.
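For concreteness, here is a sketch of the kind of `swap_unsized` under discussion: a bytewise swap driven by `size_of_val`. If `size_of_val` returned 0 for a `!DynSized` referent, this would silently do nothing; panicking (as under the `DynSized` RFC) surfaces the bug instead. The function itself is hypothetical, demonstrated here on slices.

```rust
use std::mem::size_of_val;
use std::ptr;

// A bytewise swap through `&mut T`, driven entirely by size_of_val.
// This is exactly the pattern that breaks if size_of_val lies (returns 0)
// and that the proposed size_of_val panic would catch at runtime.
fn swap_unsized<T: ?Sized>(a: &mut T, b: &mut T) {
    let size = size_of_val(a);
    assert_eq!(size, size_of_val(b));
    unsafe {
        let a = (a as *mut T).cast::<u8>();
        let b = (b as *mut T).cast::<u8>();
        // `&mut` guarantees the two regions don't overlap.
        ptr::swap_nonoverlapping(a, b, size);
    }
}

fn main() {
    let mut x = [1u8, 2, 3];
    let mut y = [9u8, 8, 7];
    swap_unsized(&mut x[..], &mut y[..]);
    assert_eq!(x, [9, 8, 7]);
    assert_eq!(y, [1, 2, 3]);
}
```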

Now I'm even more confused. What prevents you from moving around `MyAnchor` between calls to `resume`?

The fact that `Anchor::get_mut` is unsafe, and should only be called by types such as `PinBox` that ensure the anchor will never move again.

Why that? The built-in version just provides nicer syntax and automatically performs some operations that are already sound to do now (though some require unsafe code). I see no conflict between `&pin` and the `RefCell`. The concern about `Pin::deref` is entirely orthogonal to having nice syntax and automatic reborrowing.

What would that code look like?

Oh, you're assuming pinned things would be `!DynSized`. I see. Makes sense. (Also this is an example where returning `0` would have catastrophic effects as the vtables got swapped. Yet another reason why `size_of_val` should panic or error at compile-time, but not return bogus fake data. But that's a separate discussion.)

My point is that `Pin::deref` is the only reason that the existing `Pin` design largely provides compatibility with existing generic code taking `&T`. If that's removed, you'd have to use `&Pin<T>`, which is not as nice because you need something keeping the `Pin<T>` alive – in particular, as I've said, you can't go from `&Pin<SomeStruct>` to `&Pin<SomeField>`. But if it's not removed, then it won't be a good long-term design (thus, not a good basis for `&pin`) because of the interior mutability issue. On the other hand, my proposal completely avoids this issue because `&T` is a pinned reference.

There could be some kind of modifier polymorphism as well—something we want anyway for shared and mutable references, to avoid writing every function on slices twice.

Such a feature definitely sounds useful for the subset of cases where `&` and `&mut` both work. But it will necessarily have its share of mental overhead. It's better if we can avoid foisting it on everything that takes `&T` and doesn't try to move it (but also doesn't care if it gets moved later).

More generally, I claim that almost all generic code falls into one of two categories, for a given generic parameter `T`:

• Handles `T` by value: currently requires `Sized`; with the unsized rvalues RFC, could extend to non-`Sized` but `DynSized` types. Doesn't make sense for FFI `!DynSized` types because the size is unknown. Also doesn't make sense for immovable types because handling by value = moving.

• Only handles references to `T` (immutable, mutable, or even a hypothetical `&move`): currently works with `!Sized` types; should work fine with `!DynSized` types as well as immovable types.

Since whether a given piece of code works with immovable types and with `!DynSized` types is very highly correlated, it makes sense to "conflate" them somewhat.

Now, they don't need to be completely conflated. My thinking is that we'll eventually want a separate `Move` trait, with this hierarchy:

`Sized : Move : DynSized`

Why that particular hierarchy? Well:

• `Sized` must inherit from `Move`, because existing generic code with only `T: Sized` as a bound expects to be able to move `T`s around. [...well, an alternative would be to have all parameters have a `Move` bound by default, but that would make immovable types nearly useless.]

• `Move` may as well inherit from `DynSized`, because it makes no sense to have a type that's `!DynSized` (meaning, at least naively, that Rust has no idea what its layout is), yet can be moved.

There's no real reason why we couldn't add `Move` up front (other than making for a slightly more complex proposal), but it should be a fully backwards compatible change to add it later, and change generators to impl `DynSized` but not `Move`. After all, adding trait impls is generally backwards compatible.

You might argue that `Sized: Move` still makes no sense, and if my proposal requires it, then that's just another reason to reject my proposal. And... you'd have a point. But I still think it's better than adding a whole new family of references for not much benefit.
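To make the hierarchy concrete, here is the `Sized : Move : DynSized` idea sketched as ordinary marker traits. The real versions would be compiler-known auto traits; `SizedLike` stands in for the built-in `Sized`, and all the names are illustrative.

```rust
trait DynSized {}          // layout/size knowable at runtime
trait Move: DynSized {}    // movable, hence its size must be knowable
trait SizedLike: Move {}   // statically sized, hence movable

struct Ordinary;
impl DynSized for Ordinary {}
impl Move for Ordinary {}
impl SizedLike for Ordinary {}

// A generator-like type: its size may be knowable in principle, but it is
// never movable, so it only gets the DynSized impl.
struct Immovable;
impl DynSized for Immovable {}

// Existing generic code that expects to move T around would bound on Move
// (implicitly, via the Sized default bound under the proposal).
fn takes_movable<T: Move>(_: T) -> &'static str {
    "moved"
}

fn main() {
    assert_eq!(takes_movable(Ordinary), "moved");
    // takes_movable(Immovable); // error: `Immovable: Move` is not satisfied
}
```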

Er… right, that'd be true if you converted the generators to trait objects. But otherwise there'd be no vtable.

The interior mutability issue is not helped by making this built-in syntax. I don't see the connection.

This is a completely separate question, and your concern is resolved by having a `PinShr` with the ability to go from `PinShr<SomeStruct>` to `PinShr<SomeField>`. Having that is orthogonal to having `Pin::deref`, and it is orthogonal to having built-in syntax.

However, this is indeed a good counter-argument for "just use `&Pin<T>`", which I have been saying. I am not saying that any more. Based on what I realized when writing my last blog post, I am saying: either we abandon the entire pinned-shared thing (but nobody wants that), or we have a dedicated type of shared pinned references. I guess in built-in form, we'd eventually have `&pin` (shared) and `&mut pin`.

Indeed I am arguing exactly that. This prevents you from composing futures by putting two of them in a pair. Why is that not a big deal?

Not orthogonal

`PinShr<T>` would work, and `&pin T` would work better, but neither would be compatible with existing code or traits that expect `&T`. `&Pin<T>` would be compatible but it has the limits mentioned. `&T` would be compatible but it requires the problematic `Deref` impl.

This is not an obscure use case! For example

1. Let's say we want a self-referencing type to impl `Debug`. It could be any self-referencing type: a future combinator, a builtin async function (which could at least print the file:line of the function, maybe even the stored variables), or something totally unrelated to async/generators, the kinds of general self-referencing structs that are currently the domain of `rental`.

Well, `Debug` takes `&self`. Assuming we don't have a `Deref` impl for `PinShr<T: !Unpin>`, the only option is to impl `Debug` for `PinShr<T>` and take `&PinShr<T>`. In this case, that's not the end of the world since the caller can create the reference, but it's a double pointer, which is inefficient.

2. What if we want to impl one of `Index`, `Borrow`, `AsRef`, or `Deref`, among other similar traits? Not so useful for futures/generators, but perfectly reasonable for general self-referencing structs. All of these traits have signatures like

```rust
fn deref(&self) -> &Self::Target;
```

In this case, impling for `PinShr<'a, T>` doesn't work well, because the result reference will only last as long as the caller's temporary borrow of the `PinShr`, not for all of `'a`.
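To illustrate the lifetime problem: with a `Deref`-shaped method, the result borrows the `PinShr` value itself, while a by-value getter on a `Copy` wrapper hands back the full `'a`. `PinShr` here is just a hypothetical newtype over a shared reference.

```rust
#[derive(Clone, Copy)]
struct PinShr<'a, T>(&'a T);

impl<'a, T> PinShr<'a, T> {
    // By-value receiver: the result lives for all of 'a...
    fn get(self) -> &'a T {
        self.0
    }
    // ...whereas a Deref-shaped method only lives as long as `&self`:
    fn deref_shaped(&self) -> &T {
        self.0
    }
}

fn main() {
    let x = 42;
    let long_lived: &i32;
    {
        let p = PinShr(&x);
        long_lived = p.get(); // ok: the result outlives `p`
        // long_lived = p.deref_shaped(); // error: `p` does not live long enough
    }
    assert_eq!(*long_lived, 42);
}
```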

3. Of course, there's another problem: what if `Self::Target` is itself immovable, so we want `PinShr<'a, T> -> PinShr<'a, Target>`? Well, each of those traits has a mutable counterpart, and in theory we could create new immutable-pin and mutable-pin variants of each – either as separate traits, or someday with some kind of abstraction mechanism for reference modifiers, as you've alluded to. But:

• Creating separate traits would add quite a lot of boilerplate to both the standard library and custom container types, so I don't see it happening.
• A modifier abstraction mechanism would definitely be nice to have, if only to unify the immutable and mutable variants of the traits (and maybe someday move variants). But it sounds pretty far off, considering that for now we don't even have a sketch of a design. And even with an abstraction, doubling the number of cases (to 4 or 6) still adds complexity and mental overhead.

It doesn't, not exactly, as you can still compose anchors. This does make the implementation of future combinators a bit more complicated, and it's the part of the design I'm least confident about. My dream is that in the long run, construction in place will 'just work' with both function calls and returns (I detailed a possible design in another thread), so there will be no need for anchors; you just deal with `!Move` types directly.

But that's just a dream, so what would it look like for now? Well, combinators would use the same two-type approach as generator/async functions. Here's an example based on `Join` from the futures library, a combinator that combines two futures:

```rust
trait FutureAnchor = Anchor where Self::Inner: Future;

// All of this stuff is boilerplate that could be generated by a macro: {

// The anchor itself:
struct JoinAnchor<A: FutureAnchor, B: FutureAnchor> { a: A, b: B }
// The marker type, which impls Future.
extern { type Join<A: FutureAnchor, B: FutureAnchor>; }

impl<A: FutureAnchor, B: FutureAnchor> Anchor for JoinAnchor<A, B> {
    type Inner = Join<A, B>;
    unsafe fn get_mut(&mut self) -> &mut Join<A, B> {
        &mut *(self as *mut _ as *mut Join<A, B>)
    }
}

// Private helpers:
impl<A: FutureAnchor, B: FutureAnchor> Join<A, B> {
    fn cast_to_join_anchor(&mut self) -> &mut JoinAnchor<A, B> {
        unsafe { &mut *(self as *mut _ as *mut JoinAnchor<A, B>) }
    }
    // Field accessors:
    // Note that these call `Anchor::get_mut()` on the anchor fields, returning
    // the inner futures themselves.
    fn a(&mut self) -> &mut A::Inner {
        unsafe { self.cast_to_join_anchor().a.get_mut() }
    }
    fn b(&mut self) -> &mut B::Inner {
        unsafe { self.cast_to_join_anchor().b.get_mut() }
    }
}

// } End boilerplate.

impl<A: FutureAnchor, B: FutureAnchor> Future for Join<A, B> {
    // ... same as existing impl, but using a() and b() accessors
}
```

This requires unsafe code for the same reason that in a `Pin` version, unsafe code would be required to go from `PinShr<Join<A, B>>` to `PinShr<A>`: you have to promise that you won't try to move one of the struct fields by hand. However, the unsafety can be entirely encapsulated by the (hypothetical) macro: it would only expose the accessor methods and prevent the user code from directly accessing the fields of `JoinAnchor`, e.g. by using an internal `mod` declaration and privacy.


By the way: I’m silly and only just realized that making `size_of_val` panic for `!DynSized` can be done as a pure library change. And even if it couldn’t, it would be a straightforward compiler change, not something complicated and unspecified like, say, implementing native immovable structs would be. So let’s just call it part of the base proposal; forget the whole thing about it returning 0 to start with, and the resulting unsoundness question.

I think @comex's proposal could be fixed/generalized like this:

Define

```rust
extern {
    type NoMoveExtern;
}

// T could also be PhantomData<T> (we aren't constructing or dropping
// this type, so it doesn't matter)
struct NoMove<T>(T, NoMoveExtern);
```

Now we replace `PinMut<'a, T>` with `&'a mut NoMove<T>` and `PinShr<'a, T>` with `&'a NoMove<T>`, and otherwise keep the same behavior as the pin RFC. Due to `NoMove<T>` being `!DynSized`, this should be an equivalent formulation since it can’t be swapped out.

Thus, traits like `Generator` could be implemented for `&mut T where T: ?DynSized` rather than `PinMut<T>`, and there would be a blanket impl of `Generator` for `NoMove<T> where T: Generator` (movable generators would rely on that, while immovable ones would implement `Generator` for `NoMove<T>`).

If it works, it seems this might be a bit better than PinMut/PinShr since it doesn’t need a new reference type.

Having a `?DynSized` bound on `Self` by default would be required, though, so that traits supporting immovable types aren’t special at all. I think this is backwards-incompatible, but maybe it could be changed with a Rust edition.

Note, however, that you can no longer write `-> impl Generator` for such traits; it would need abstract type syntax, so you can say `abstract type T where NoMove<T>: Generator;` and then write `-> T`.


What is the case for a type that is `!Move` and yet `DynSized`? It seems like for such types I could write my `swap_unsized` function. So any actually immovable types likely have to be `!DynSized`. Right?

Yes, I was expecting that. New reference types come with new matching traits. So, all of this is still "just" spelling out the cost of having more reference types.

So if I understand correctly, this boils down to whether we want to have a new reference type (or, rather, two of them), with all the baggage that implies; or whether we want to "hack" something together based on `!DynSized`. Both have ergonomics hits:

• A new reference type requires support in all container-like data structures if we want to have pinned stuff in them, and new `Deref` traits if we want smart pointers and the like.
• `!DynSized` requires `Future` implementations to actually have two types, two trait impls, and some non-trivial relationship between them. (Notice that formally, having two types is like having twice as many invariants--so the owned and shared invariant of the `extern` type would correspond to the pinned and shared-pinned invariant in the `Pin` proposal.)

For futures alone, there are likely going to be far more implementations of the trait than people wanting to call `poll` directly. So in that context (right now the main motivation for pinning), I think `Pin` has significantly less boilerplate. Also notice that the vast majority of existing smart pointers are unsound for `Pin` anyway.

Now, much of the additional complexity in `Pin` (the part that requires changes and consideration in containers, like adding `Pin`-related methods to `Vec`) is there to be able to safely obtain pinned references. I noticed your `Anchor::get_mut` is unsafe. Is there any way, in safe code, to call `resume` on a future? With `Pin`, there is, thanks to `PinBox` and the possibility of a stack pinning API. A version of `RefCell` that hands out pinned references also seems feasible, and one could imagine adding a `get_pin(self: Pin<Vec<T>>, index: usize) -> Pin<T>`. What would any of that look like in your model?

For more general use of pinning, like for intrusive collections, the trade-off might shift. However, as you observed, `Pin::deref` mitigates much of that. I find this function funny, but less funny than using sizedness, and the only concrete issue we have so far is that we cannot have `fn get_pin(self: Pin<RefCell<T>>) -> Pin<T>` just return a reference. However, so far I don't see how your model handles safe creation of "pinned" references at all, so this disadvantage on the `Pin` side is far outweighed by the `!DynSized` side not even being able to express this pattern. This is similar to the smart pointer situation, btw: how would I obtain (in safe code) something like `Arc<MyGenerator>`?

(Based on @glaebhoerl's `RefCell` in the other thread, we actually can have `fn get_pin(self: Pin<RefCell<T>>) -> Pin<T>` if we make it first set the pinned bit of the `RefCell`, and then return. This is compatible with `Pin::deref` and fixes the unsoundness I discovered in the RFC. That code will then instead panic, which is not entirely satisfactory but then this is `RefCell` we are talking about, which is all about run-time checks instead of static checks.)
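A minimal sketch of that idea (names hypothetical, and the usual `RefCell` borrow flag omitted for brevity): once the pinned bit is set, any borrow that could be used to move the contents panics instead.

```rust
use std::cell::{Cell, UnsafeCell};

// A RefCell variant that can hand out pinned references: `get_pin` sets a
// sticky "pinned" bit, after which mutable borrows panic at runtime.
struct PinRefCell<T> {
    value: UnsafeCell<T>,
    pinned: Cell<bool>,
}

impl<T> PinRefCell<T> {
    fn new(value: T) -> Self {
        PinRefCell { value: UnsafeCell::new(value), pinned: Cell::new(false) }
    }

    // Stand-in for `fn get_pin(self: Pin<RefCell<T>>) -> Pin<T>`:
    fn get_pin(&self) -> &T {
        self.pinned.set(true); // from now on, the contents may never move
        unsafe { &*self.value.get() }
    }

    // Stand-in for RefCell::borrow_mut (real code also needs a borrow flag):
    fn borrow_mut(&self) -> &mut T {
        if self.pinned.get() {
            panic!("already pinned: a mutable borrow could move the contents");
        }
        unsafe { &mut *self.value.get() }
    }
}

fn main() {
    let cell = PinRefCell::new(5);
    *cell.borrow_mut() += 1; // fine before pinning
    let pinned = cell.get_pin();
    assert_eq!(*pinned, 6);
    // cell.borrow_mut(); // would panic: the contents are pinned now
}
```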


Whoa, this thread really blew up. @comex, it’s probably too late now, but I think your proposal would have been better as a separate thread. It was interesting to read the discussion on it though.

Unless you have exactly one `!DynSized` type like `Pinned<T>` which just serves as a wrapper, allowing you to have `&mut Pinned<T>` or `&Pinned<T>`. This looks very similar to having `Pin`/`PinShr`, except that code can be generic over `&mut T`/`&T` and work with pinned types where `T: ?Sized`. You can imagine this migrating to native, with `&pin T`/`&mut pin T`, but without having to solve the 'generic-over-`pin`' problem, since `pin` would modify the type rather than the reference. You'd still have the issue `Pin` does of having to encapsulate field accesses in the absence of native support.

Yes, there is. I mentioned this earlier in the thread, but maybe it was a bit buried – under my proposal, `PinBox` would still exist. You give it a `Box<T>` for `T: Anchor`, and it impls `Deref` and `DerefMut` with `Target = T::Inner`. I'd expect that a stack pinning API would also be translatable from Pin to this system, but I'd have to see the specific design.

`get_pin(self: Pin<Vec<T>>, index: usize) -> Pin<T>`

I'm pretty sure you only need `&mut Vec<T>` for that, which translates easily. Alternatively, `Pin<[T]>` could also work… you could translate that if the standard library had an `impl<T: Anchor> Anchor for [T]`.

(Edit: never mind, I'm dumb. `&mut Vec<T>` wouldn't work because it could reallocate, and `Pin<Vec<T>>` would work because it would inherit `!Unpin`, so you couldn't call any of the normal `Vec` methods on it. But I'm not sure why you'd want to use that over `Pin<[T]>`.)

Yeah, that's actually what I first thought of, and then for some reason I decided it would be better to have two separate types. But… I'm not sure why. After all, if you want to give the pinned type a name, you can just make it a type alias for `Pinned<T>`, but `Pinned<T>` has advantages in other cases. So your version is probably better.

Note that `Pinned` should be marked as `#[fundamental]` to allow implementing traits for `Pinned<MyType>`.

Sorry.

Discourse has some kind of feature for splitting threads off, I think even after the fact, but I've never tried it. (Just seen e.g. @nikomatsakis use it before.)

Hmm. Given that (IINM) the concept of "temporary pinning" is by definition impossible (the guarantee provided by `Pin<T>` is that it will never again be moved), it does seem to make some intuitive sense for "pinnedness" to be a property of the type rather than of the reference. Just on the level of hunches though...

(But I think if you have a `Pinned<T>`, and then `Box<Pinned<T>>`, `&mut Pinned<T>`, etc., which is what this'd suggest, you're back to needing some kind of marker `trait` to control movability, because otherwise `mem::replace()` works for arbitrary `T` including `Pinned`, which is clearly no good. Using `DynSized` as this trait feels kind of strange and hack-like to me, but it's possible that that superficial impression is a mistaken one. I dunno.)
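The `mem::replace` point in miniature: a wrapper type alone cannot protect the invariant, since safe code can move the contents out from behind any `&mut T`. (`Pinned` here is just an illustrative newtype with no special powers.)

```rust
use std::mem;

// A wrapper alone cannot prevent moves: mem::replace works for any T,
// including a hypothetical `Pinned`-style newtype, unless the compiler
// enforces a marker trait like Move/DynSized.
struct Pinned<T>(T);

fn main() {
    let mut slot = Pinned(String::from("immovable?"));
    // Safe code moves the "pinned" contents right out:
    let stolen = mem::replace(&mut slot, Pinned(String::new()));
    assert_eq!(stolen.0, "immovable?");
    assert_eq!(slot.0, "");
}
```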

Well, yeah. This is predicated on having a marker trait. Absent that, you always need to use (potentially special) reference/box types that can’t be moved out of to protect that invariant, such as in the existing `Pin` proposal.

`!DynSized` seems like it would work, but it does seem a little hackish. Logically, movability and sizedness are related, but their relationship in this proposal is wrong. Sizedness (at least dynamically) is a necessary but not sufficient condition for movability, but this proposal makes it necessary and sufficient, forcing anything that wants to be immovable to be unsized even if it logically does have a knowable size. Of course, the purposes of both `Sized` and `DynSized` are mostly to restrict static and dynamic moving of values respectively. If you renamed `DynSized` to `Move`, however, it wouldn’t make sense for it to be a constraint on `size_of_val`—except insofar as the size of a value is only really useful for moving into/out of it—or for it to be a constraint for `Sized`. The convenience of this whole proposal relies on the `?Sized` annotation stripping movability, and the `DynSized` proposal relies on constraining `size_of_val`. You either have to accept that making something immovable means making it notionally unsized, or abandon this proposal and return to `Pin`.

Edit: `!DynSized`/`!Move` would mean there couldn’t be a `&Pinned<[T]>` to `&[Pinned<T>]` coercion similar to what’s been proposed for `Cell` without breaking the idea of sizedness, which would be unfortunate. Unless you can think of a way to have an array of unsized things in contiguous memory...


Edit: Maybe instead of having two traits `DynMove: DynSized` and `Move: DynMove + Sized`, there could be a single trait `Move: DynSized`. Pro: One less trait to worry about. Con: The fully explicit bound that used to be `T: Move` becomes `T: Move + Sized`. This only really affects traits, and it would still be equivalent to `T: Sized`, but it means you need two (usually implicit) bounds to be able to statically move something.

One way to reconcile the two worlds:

• Split `DynSized` into `DynSized` and `DynMove: DynSized`, with every aspect of the `DynSized` proposal replicated for each. Use `DynSized` to constrain `size_of_val`, and use `DynMove` pretty much everywhere else. Immovable types become `!DynMove`.
• Introduce `Move: DynMove + Sized`.
• `Sized` remains a default bound. `Move` is a default bound for all `T` where `T: Sized` (explicitly or by default), including `Self`. This is so that an explicit `Self: Sized` bound implies `Self: Move` (otherwise most code that does this would break).
• `?Sized` removes the `Move` default bound as well, which follows from the previous point. This means that immovable types work with all existing `?Sized` type parameters, and most code using immovable types will continue to use a `?Sized` bound.
• Anything other than `Self` that wants to work specifically with sized but potentially immovable types can switch to `?Move`, which is not compatible with `?Sized` callers, requiring migration anywhere this applies.
• To have a sized but potentially immovable `Self`, you need `Self: ?Move + Sized`. This follows the pattern of `Self` needing to be `Self: [constraints] + Sized` in order to have the same constraints as another type parameter `T: [constraints]`.

Pro: `Sized` and `Move` are retained as distinct properties.

Con: `Sized` implying `Move` by default in traits is a bit weird… and we were worried about the original proposal being hackish. This could at least be mitigated by adding a warning recommending these instances migrate to explicit `Move`/`?Move + Sized` bounds instead, at the cost of increasing the encouraged migration footprint.

Re whether this is on topic: yeah, oops. Though, technically, the topic is about `Pin` owning the pinned value. One way would be with `&move` and `PinMove`, and another would be with `!Move` and `Pinned`, so…


It looks like there will need to be a lot of pin-specific types – besides `Pin`, `PinBox` and potentially `OwnedPin` (the pinned version of `&move`), there's also a need for `PinMutex`, and possibly even `PinRc` and `PinArc`, because both `Arc` and `Rc` have `get_mut` and `make_mut` methods. The pinned versions of those types can be avoided by using `Pin<'a, T>` or `PinBox<T>` as the type parameter instead of `T`, but that is probably less efficient, requiring an extra pointer dereference and, in the case of `PinBox<T>`, an extra heap allocation.

With actual immovable types, we wouldn't need any of these new types – just saying

Honestly I’m still trying to wrap my head around what the proposal in this thread even is. We seem to be throwing every feature in the book at this problem with limited results. I’m going to discuss it using `async fn`, rather than generators, because they’re simpler and the differences don’t matter.

At first I thought the extern types and DynSized stuff didn’t buy us anything at all. Here’s an alternative, with none of that, that seems to provide the same guarantees:

```rust
async fn foo() -> i32 { .. }

// async fn returns:
struct AnonAnchor(AnonFuture);

impl Anchor<AnonFuture> for AnonAnchor {
    // Never call this unless AnonAnchor has been pinned.
    unsafe fn get_mut(&mut self) -> &mut AnonFuture { &mut self.0 }
}

// another compiler-generated structure
enum AnonFuture { ... }

impl Future for AnonFuture { ... }
```

That is, `AnonFuture` is just a normal `Sized` type, but to get it you need to call `get_mut` on `AnonAnchor`, which is unsafe. It was hard for me to see what advantage the proposal here had over this, but eventually I figured it out - because the `AnonFuture` type does not implement some ?trait (I’m going to say `?Move`, because I think the conflation between `?Move` and `?DynSized` is wrong & a distraction for other reasons), you can’t pass it to any API that might `mem::swap` it. This means that once you have `&mut AnonFuture`, you can stop worrying about safety because `?Move` will take over.

Unfortunately, I don’t see how this meets our requirements:

• How do you call a combinator on the return type of an async function? You can’t as far as I can tell - because `AnonAnchor` can’t implement `Future`, only `AnonFuture` can. There’d have to be some sort of map function (which would have to be unsafe for the same reason `Pin::map` is unsafe) to go from `AnonAnchor<AnonFuture>` to `AnonAnchor<Combinator>`. I don’t see how you call combinators on an async fn without unsafe code under this proposal.
• Even setting that aside, every combinator that wants to be able to handle an async fn now has to add the `+ ?Move` bound. This is essentially just the opposite of today, where if they want to do something that doesn’t support async fn, they have to add `+ Unpin` bounds. But most combinators support async fn fine, so this is a net loss.

What you’ve done, as far as I can tell, is change the definition of `?Move` slightly, and make it safe by adding this Anchor indirection. In the original proposal, `?Move` did not mean “can never be moved,” it meant “can not be moved after the addressof operator has been applied to it.” By conflating `?Move` and `?DynSized`, you suggest a definition in which it can never be moved, in which case you need this anchor type to be able to move it around before you box it. But having done that, you have not actually solved the combinator problem, which the original `?Move` proposal did at least solve.

(You have, however, solved the backcompat problem with `?Move` - `Fn` types don’t need to return a type that doesn’t implement `Move` - so that’s something.)

And I’m still concerned about adding ?traits to the language at all, because of how infectious they are. Any API that takes a generic by reference now needs to consider not only whether it needs the referent to be `Sized`, but whether it needs it to be `DynSized` or `Move` or whatever additional ?traits we add. In contrast, the impact of `Pin` is well contained to the APIs that care about it.

All in all, I am not optimistic that this will provide a more fruitful approach than `Pin`.