Pinned obviates the need for two types and the "anchor" theatrics. A type can just implement Future, and any methods that require access to a pinned future (like poll) can simply accept self: &mut Pinned<Self> or self: &Pinned<Self> as needed. Essentially, a ?Move proposal could look exactly like the Pin proposal, except that Pin<T> is spelled &mut Pinned<T>.
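A minimal sketch of what that could look like (Pinned here is just an illustrative stand-in for the proposed compiler-magic wrapper, and the receiver is written as an explicit parameter since self: &mut Pinned<Self> would need arbitrary self types):

```rust
// Illustrative stand-in for the proposed wrapper; the real Pinned<T> would
// be compiler-known (immovable, size "forgotten"), not an ordinary newtype.
struct Pinned<T: ?Sized>(T);

enum Poll<T> {
    Ready(T),
    Pending,
}

// Hypothetical Future under this scheme: the trait itself stays ordinary,
// and only poll demands a pinned view of the receiver.
trait Future {
    type Output;
    fn poll(this: &mut Pinned<Self>) -> Poll<Self::Output>;
}
```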
You'd still need Unpin, in addition to whatever trait bounds are needed to abstract over immovability (such as the arrangement I describe above). This makes the proposal more complex. On the other hand, you don't need Pin or PinBox; you can just add the necessary methods to build a Box<Pinned<T>>, and put the proper bounds on any methods that need a movable referent. And you don't potentially need PinShr<T> either.
You'd still need a stack pinning API, since there's no way to initialize a Pinned<T> on the stack (unless &'init T were added to the language...). Though a limited version of the previous ?Move proposal's move-until-borrow rule could be introduced just for this purpose, without supporting it more broadly via FnOnce::Output: ?Move and the like.
Pinned would have to be immovable based on a marker trait, such as Pinned<T>: !Move. The tradeoff is between adding a marker trait on the one hand, and creating new versions of every pointer/reference type that might at some point contain a pinned value on the other.
You could have Pinned<T>: Deref<Target=T>; then you get &Pin<T> → &Pinned<T> → &T (two hops). This wouldn't be compatible with Pin as a struct, though, since <Pin<T> as Deref>::Target would change.
Edit: I guess you could have Pinned<T> even without this proposal, and have the only place that produces a valid one be Pin::deref, making it forward compatible.
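Roughly, with throwaway stand-in types (a safe, compilable sketch; the real Pinned would be the magic wrapper, and Pin the proposal's pinned reference):

```rust
use std::ops::Deref;

// Stand-ins just to show the two-hop deref chain; not the real types.
struct Pinned<T: ?Sized>(T);
struct Pin<'a, T: 'a>(&'a Pinned<T>);

impl<T: ?Sized> Deref for Pinned<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

// Forward-compat variant: Pin's Deref target becomes Pinned<T> rather than T,
// so &Pin<T> auto-derefs to &Pinned<T>, and &Pinned<T> auto-derefs to &T.
impl<'a, T> Deref for Pin<'a, T> {
    type Target = Pinned<T>;
    fn deref(&self) -> &Pinned<T> {
        self.0
    }
}

fn demo(p: &Pin<'_, String>) {
    let _pinned: &Pinned<String> = p; // one hop
    let _plain: &str = p;             // two hops (plus String's own Deref)
}
```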
I guess it depends on how magic it is. If Drop moves the value before calling Drop::drop, then that won't work. Oh well.
Edit: And I guess the other problem is that the Drop impl would still be responsible for ensuring the fields can be safely dropped, which isn't compiler enforceable. You'd need a method like unsafe fn drop_pinned(self: &mut Pinned<Self>) in order to "prove" the contents can now be safely handled unpinned. It's added complexity all the way down.
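For concreteness, the shape of such a hook might be something like this (drop_pinned is purely hypothetical and not part of any proposal; the trait and wrapper here are stand-ins):

```rust
// Stand-in for the proposed immovable wrapper.
struct Pinned<T: ?Sized>(T);

// Hypothetical drop hook for pinned data. It is unsafe because the
// implementation must "prove" by hand that, after it runs, the fields may be
// treated as ordinary movable values again so the normal drop glue can
// finish the job.
trait DropPinned {
    unsafe fn drop_pinned(this: &mut Pinned<Self>);
}
```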
This doesn't square with the proposal I just read from @steven099 - an async fn would evaluate to a Pinned<{anon type}>, but you can't have an extern type on the stack (and in general, the design requires you to be able to move the return type around until you want to start polling it, to pass it into combinators).
@steven099's explanation made sense - we bring back ?Move in its original incarnation. But we've already rejected that for not being backwards compatible, in addition to its other negative externalities. This proposal adds a new type and the Unpin auto trait on top of the ?Move trait, still changes the definition of Future and anything else that cares about being pinned, and all just to avoid having two kinds of box.
True enough. I suggested a way up-thread that I think makes it backward compatible, but I recognize that this is a whole can of worms that the current Pin proposal seeks to avoid.
I don't think this addresses all of the incompatibility: the most serious problem was that it's possible today to rely on a generic Fn::Output being Move, but async fn (and generators) need to have an Output that is not Move.
I'm not sure that's true. IIUC, under the current Pin proposal these would just return a value that isn't pinned yet, and the caller would be responsible for pinning the result before calling poll. There's no reason this wouldn't still work with a Pinned proposal.
Ah, that's true: if they don't return Pinned, but instead you put them into a Pinned before calling poll somehow, this becomes much more similar to Pin (you've just added a ?trait to make Pinned more compositional).
(I don't really grok most of the technical details either, but it's good to know my high-level intuitions are at least in the right ballpark - this is also what I had been assuming.)
To me the birds-eye-level situation looks something like this:
Approach A: Pin<T> as a new pointer type (the "current plan")

- Pros: Can be done entirely, or almost entirely, in library code. Conservative, minimal change. Things that don't care about pinning can continue not caring.
- Cons: Ergonomic and compositionality issues. Everything that currently works with &mut T, doesn't need swap/replace, and could work just as well with Pin<T> would need API duplication if there turns out to be demand (use cases) for it.

Approach B: Pinned<T> as a pointee type, plus ?Move or ?DynSized or whatever

- Pros: More compositional; cleaner and more interoperable with existing code that can and wants to interoperate.
- Cons: A significantly more invasive change; not clear whether it would carry its weight.
So basically, the "downside risk" scenarios are: we adopt "Approach A" now, and the use cases for pinning later turn out to be more widespread than we had expected; or, conversely, we adopt "Approach B", and pinning turns out to be just a niche thing, and we've added more complexity to the language than we strictly needed to.
Given that making predictions about the future is hard, the question this raises is: is "Approach A" forwards-compatible with "Approach B", or can we make it be? That is, can we later add Pinned<T> and ?Move, declare that type PinMut<'a, T> = &'a mut Pinned<T>, and everything keeps working? If that is the case, then we can adopt "Approach A" for now with no reservations, and defer the question of 'upgrading' to "Approach B" into the future when we will know more about our requirements.
The key forwards-compatibility hazard from Approach A to Approach B is the Pin<T>: Deref<Target=T> impl. I’d have to look over everything again to see if there’s anything else. I mentioned introducing Pinned in a more limited capacity (having it only show up in Pin<T>: Deref<Target=Pinned<T>>) upthread. The alternatives would be to force people to migrate from Pin to &mut Pinned (bleh), or to not impl Deref at all and rely on a method instead.
Yeah, that's what I now thought we were talking about. Essentially, Pinned<T> is "just another newtype" around T, except that it's magic because the compiler "forgets" the sizedness. We have some privileged API that transmutes (or so) &mut T into &mut Pinned<T> (e.g. if T: Unpin). PinBox<T>... would maybe just be Box<Pinned<T>>, and we provide an API that internally transmutes from Box<T>? (I guess someone already said it, but now I understood it, too.) Then getting the &mut Pinned<T> wouldn't even be a new thing, it'd just be Box<Pinned<T>> as DerefMut. Reborrowing would also just work.
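A rough sketch of those conversions, assuming a layout-compatible stand-in (the real Pinned<T> would be compiler-magic and !DynSized, which is exactly the deallocation question discussed below; the function names are made up):

```rust
// Transparent stand-in so the pointer casts below are layout-sound.
#[repr(transparent)]
struct Pinned<T: ?Sized>(T);

// Safe only because T: Unpin promises the type doesn't care about pinning.
fn as_pinned_mut<T: Unpin>(r: &mut T) -> &mut Pinned<T> {
    unsafe { &mut *(r as *mut T as *mut Pinned<T>) }
}

// "PinBox<T>" would then just be Box<Pinned<T>>: boxed contents never move
// again, so pinning a Box is always allowed.
fn pin_box<T>(b: Box<T>) -> Box<Pinned<T>> {
    unsafe { Box::from_raw(Box::into_raw(b) as *mut Pinned<T>) }
}

fn main() {
    let mut x = 5u32;
    let _p: &mut Pinned<u32> = as_pinned_mut(&mut x); // u32: Unpin
    let _b: Box<Pinned<String>> = pin_box(Box::new(String::from("hi")));
    // &mut Pinned<String> then comes from Box's ordinary DerefMut impl.
}
```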
I feel like this works "too well" and there is probably a catch somewhere. But this does seem to provide the nice compositionality that we can add it to any kind of pointer. It does require language support up-front though (and not just as a nice-to-have future extension) to make sure Pinned<T> is considered unsized and whatnot. And what happens if T is unsized as well? &mut Pinned<dyn Trait> is something we'd want, I guess, because we care about Future being dyn-safe. This should be a fat pointer.
I only have a very faint idea what this could look like in the model (unsized types are something I have avoided so far), but it'd probably still involve having dedicated modes. After all, we have to say what Pinned<T>.own is, and it's not going to be T.own; I guess it'd be T.pin.
Actually, I just made an interesting observation... I've been wondering for a while whether T.own should really operate on lists of bytes, like it does now. Maybe it should work on a pointer instead. That'd make it more uniform with T.shr and also avoid the problems (which also came up in my posts) that lists of bytes are rather annoying to work with; usually you want to see them as corresponding to something higher-level that just happens to be laid out in memory. So, let's say T.own(bytes: List<Byte>) changes to T.own(ptr: Pointer).

Now (if we ignore sizedness), we are still able to move all types, so we will want an axiom that lets us extract, from T.own, ownership of the memory behind ptr. But ownership of how much memory? size_of::<T>() many bytes, of course! Now, what if we don't know the size? Well, in that case it seems we cannot even ask for the ownership we want, so the axiom doesn't apply... so the type is immovable! This entirely ignores the possibility of determining the size at run-time, obviously.

But I guess what I am saying is that, in the formal model, we can make the same observation as what has been made above (and maybe I shouldn't be surprised): !DynSized types are inherently immovable. Their T.own degenerates to just a predicate over a pointer with no restrictions (other than some relationship to sharing), which is exactly what T.pin is. Suddenly, "Move iff DynSized" seems much less ad-hoc to me. Both are essentially just "we can say something about the pointer (T.own(ptr)), but we have no idea whatsoever what we can do with it (no axioms that T.own has to satisfy)".
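One improvised way to write that down (loose separation-logic notation, not the model's actual definitions; the last conjunct, which lets us re-establish ownership at a new location, is my guess at what "being able to move" requires):

```
Before:  T.own : List<Byte> → Prop
After:   T.own : Pointer → Prop

Move axiom, only statable when size_of::<T>() exists:
  T.own(ptr)  ⊢  ∃ bytes. (ptr ↦ bytes) ∗ len(bytes) = size_of::<T>()
                   ∗ (∀ ptr'. (ptr' ↦ bytes) −∗ T.own(ptr'))
```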
So, the fact that Move implies DynSized shouldn't be surprising; it's just practically necessary. It still seems funny that we'd want to consider types that actually are movable to be not movable. But what is a size good for if we don't want to move? Well, we need it for layout, but Pinned would not be used in type definitions, i.e., (Foo, Pinned<Bar>) is not a thing we even want; it's just used behind a pointer indirection. Layout computation never even sees this type. From all I can see, it should behave just like an extern type, except that we probably (?) don't want to allow extern types to have type parameters.
Something interesting seems to happen when considering RefCell<Pinned<T>>. Am I mistaken, or does this give us “for free” the full API set of RefCell in a way that preserves pinning? We could call borrow or borrow_mut to eventually arrive at &Pinned<T> and &mut Pinned<T>, something that would need tons of duplication to get working with the pinned-reference proposal.
Now, we can’t create a RefCell<Pinned<T>>, but we could add a single method to RefCell that can turn a &[mut] Pinned<RefCell<T>> into a &[mut] RefCell<Pinned<T>>. That would be like a “one-liner to opt into pinning with the full API surface” (and it’d incur a hell of a proof obligation). Well, two-liner because we have shared and mutable references.
This seems like great news, but there’s a catch… in the pinned world, this is actually unsound because of the Deref impl! If we can turn &Pinned<RefCell<T>> into &RefCell<Pinned<T>> using the above method and into &RefCell<T> using a Deref impl, we have a problem. (Restricting the conversion method to just mutable references, i.e. &mut Pinned<RefCell<T>> to &mut RefCell<Pinned<T>>, does not help.)
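Spelled out with made-up names and a transparent stand-in (just to make the conflict concrete; project is the hypothetical conversion method from above, restricted to shared references):

```rust
use std::cell::RefCell;
use std::ops::Deref;

#[repr(transparent)]
struct Pinned<T: ?Sized>(T);

impl<T: ?Sized> Deref for Pinned<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

// The hypothetical "opt into pinning" conversion for shared references.
fn project<T>(r: &Pinned<RefCell<T>>) -> &RefCell<Pinned<T>> {
    unsafe { &*(r as *const Pinned<RefCell<T>> as *const RefCell<Pinned<T>>) }
}

fn demo<T>(r: &Pinned<RefCell<T>>) {
    // Pinning-preserving access: borrow_mut hands out &mut Pinned<T>.
    let pinned_cell = project(r);
    drop(pinned_cell.borrow_mut());

    // ...but the Deref impl also reaches the same cell without the wrapper,
    // so borrow_mut now hands out a plain &mut T to the very same data,
    // through which the "pinned" contents could be swapped or replaced.
    let plain_cell: &RefCell<T> = r;
    drop(plain_cell.borrow_mut());
}
```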
So, all this magic wrapping seems nice, but it also seems incompatible with that Deref impl.
Unless calling that conversion function sets a bit flag preventing future unpinned accesses. But yeah, otherwise it’s incompatible. Of course, pretty much all interior mappings (including field access) are unsafe, so if the conversion were unsafe, a pinned type could expose a &RefCell<Pinned<T>> and enforce the contract ("no &RefCell<T>") through encapsulation. That’s one thing that’s always going to be a challenge about pinning under either approach: just how much pinned types are responsible for maintaining soundness in their public APIs.
The conflation with “unsized” doesn’t actually make sense. Something without a dynamically known size can’t be deallocated, for example. This would imply that we leak all generators, and there’s no reason for it.
How would that even work? borrow and borrow_mut don't know if they are being called in a pinned or unpinned way.
Ideally they wouldn't be. But we still have the same problem around Drop as Pin.
That would amount to the wrapping type upholding the invariants of RefCell. What I was looking for is a safe RefCell. It seems that to get that with Pinned, we have to either still duplicate the API surface, or we have to remove the Deref.
Ah, I forgot deallocation. Good point.
I was about to say that only some parties will forget the size, because we are casting pointer types around, but whoever is responsible for deallocation still knows the size. For example, someone would own a RefCell<T> (e.g. in a stack pinning API) but provide a &mut Pinned<RefCell<T>> to others. Deallocation is not affected.
However, PinBox is affected. We can't actually define it as Box<Pinned<T>>. For every pinned object, there has to be something still holding it at the original type for the purpose of deallocation. So, PinBox would still have to be a separate type I think. But, in contrast to Pin, everything that works with &mut T where T: ?Sized would work with pinned data "for free".
No, our dealloc API passes the Layout so the allocator doesn't have to know.
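For reference, this is what the std allocation API looks like today: the caller hands the Layout back at deallocation time, so the allocator never needs to derive the size from the pointer.

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // Allocate space for one u64 and free it again; dealloc takes the same
    // Layout that was used for the allocation.
    let layout = Layout::new::<u64>();
    unsafe {
        let p = alloc(layout);
        assert!(!p.is_null());
        dealloc(p, layout);
    }
}
```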
This seems like a very small gain. I don't know of any methods like this that I would want to pass a generator to in the first place - generators implement no traits, after all, and there's almost nothing you can do with &mut T where T: ?Sized.