Unsoundness in `Pin`

Just to check my understanding, what makes these types special is precisely the safe wrappers that give you Pin<P> for Box, &mut, and &. In other words, if we add a new safe wrapper, we have to check for similar exploits with those types, but in general we don't have to worry beyond that?

I'm not sure that works -- reservation impls were designed to allow overlap from downstream crates. They act in somewhat surprising ways, IIRC. See the tracking issue for more details.


This seems much more defensible than the "fake clone impls", which are quite hard for me to understand. We could certainly consider phasing it in with a warning period, given the soundness justification. There is (additionally) a need for a "pure deref" concept, though, and I hesitate to have too many marker traits -- but I suppose we could add this one now and then layer in a pure one that (likely, but not certainly) subsumes it.


Interesting indeed. I'm curious whether anyone can estimate the likely breakage from the various proposed fixes?


Nice catch! Note that your example can also work with coercion, but it's definitely true that coercion isn't necessary to construct a Pin with a 'malicious' DerefMut impl.

That sounds right to me. Would that imply that Pin::new_unchecked could (conceptually) have its requirements relaxed? That is, it is sound to call Pin::new_unchecked with a type with a malicious DerefMut impl, so long as you do not implement the unsafe trait? (Note that I don't think you would ever actually do this in practice).

Based on the points brought up by @comex and @RalfJung, I now think the best solution is to guard the usage of the Deref and DerefMut impls by an unsafe trait.


Well if there's going to be a 2021 edition, could this be a candidate for one of the breaking changes?


Not really. If we're going to fix this, I think we would fix it universally. Editions are good for "surface level" changes where we wish to keep both old/new working (and interoperating). They don't really apply to things like "which traits you have to implement".


This matches my understanding of the issue.

Basically, the function that returns a Pin<&T> has an obligation to show that the Deref/DerefMut impls are "good", but it turns out they are failing to actually satisfy that obligation. They are currently implicitly reasoning based on "there is no DerefMut", but alas, turns out that's not universally true.

Good question. I think yes.


That's correct as to the exploits that don't involve coercion.

For coercion (method 3), Pin::new is a safe function that gives you a Pin<P> for arbitrary P where <P as Deref>::Target: Unpin. If you could then coerce it to Pin<Q> where Q implements Deref with a different Target, it would be unsound. But that requires P: CoerceUnsized. For now, out of the builtin/libstd types that implement CoerceUnsized, none of them allow a downstream crate to add its own Deref implementation; and implementing CoerceUnsized oneself is unstable. But that could easily change. As such, we almost certainly want some kind of restriction on Pin's CoerceUnsized impl.

I don't think this works. A coercion like Pin<Box<MyFuture>> to Pin<Box<dyn Future>> seems like a pretty normal use case for real code – where MyFuture may or may not implement Unpin, and dyn Future definitely does not.

Would we still need that with the unsafe-marker-trait proposal? The entire assumption about well-behaved Deref would not be part of the Pin invariant any more... and isn't that the only reason why Pin::new needs side-conditions? I might be missing something here but I feel we could have a safe Pin constructor for any type; it just wouldn't be very useful unless the type also implements the (unsafe) marker trait.

Hey, was PTO the last two days. Good catch.

So yea, the issue identified here is pretty plainly that we didn't consider manual implementations of DerefMut and Clone for reference types when we generalized from PinMut/PinBox to Pin. Such impls are obviously absolutely absurd in practice and the most charitable thing you could say about them is that they would result in extremely confusing code.

My immediate inclination is to think of this as a problem with how we integrate Deref/DerefMut/Clone with the reference types. We operate on the assumption that these traits are effectively blanket implemented (or blanket not implemented) by these types, but the reality is more complex and creates a weird semantic loophole.

I've only read a bit of the discussion so far, so I'm going to do that and think more before deciding what I think we should do about this.


Okay, I've read the thread. I think CoerceUnsized should be left as a blocker on stabilizing that trait and set aside for the moment (edit: but my opinion is along the same lines as the rest of this post - it should be a requirement of CoerceUnsized, enforced however we see best, that the target of the deref doesn't change).

I think we should essentially reserve the other two impls unless a crater run shows it would have a really negative impact on some users. While we could add a special workaround for Pin with a marker trait, my belief is that users should be able to assume that for<'a, T: ?Sized> &'a T !: DerefMut et cetera. It's common sense. I would only want to add a pin-specific marker trait if this were not possible.

We should also explore if there are other impls that #[fundamental] allows on references that completely defy common sense. Nothing strikes me.

So maybe not reserved through the same mechanism, but my opinion is that we should treat the fact that users can write these impls as the soundness hole (rather than something specific to pin) and find a solution to that.

EDIT2: not sure how the attribute interacts with specialization, but I would hope it would treat this impl as unspecializable if it's a non-marker trait with all non-default items. If not, that change seems appropriate? In which case, I believe it works to just reserve these impls?


This only solves the problem for references though. We would have to additionally augment the Pin::new docs and add some strong warnings that the pointer type must either implement or "reserve" both Clone and DerefMut, otherwise there could be unsoundness. Reservations are rustc-internal, so this breaks the promise that Pin can work with user-defined smart pointer types.

Rc and Arc also don't have DerefMut, is that a problem?

And after doing all of that, we are still relying on subtle trait system interactions for soundness. Sure, this fixes the issue at hand, but I am in no way convinced that this is the only issue. After this fix, I am about as sure that this is sound as I was before @comex created this post, and that is clearly "not sure enough". In contrast, properly localizing the assumption that Pin::as_mut/Pin::as_ref are making (by locally being able to say "the trait impl we are calling is well-behaved") side-steps the indirection through the trait system. Then we can locally argue that the contract is satisfied.

We are making pin-specific assumptions, so IMO it only makes sense to also have a pin-specific marker trait.

Original Pin tried to be clever about re-using the Deref trait, and that failed. You are proposing to fix this by being more clever. I think we should take the lesson and not try to be as clever, instead relying on assumptions that are much easier to check and get right.


I'm more interested in enabling users to design APIs like Pin that are abstract over Deref/DerefMut than in fixing specifically the Pin API for extremely contrived soundness holes like this. Users should be able to assume that reference types consistently implement the traits that they expect them to.

I'm especially unpersuaded by an abstract and ill-motivated conservatism about unknown unknowns. The number of contrived language feature interactions leading to soundness bugs asymptotes toward zero rapidly enough, and they are so consistently unrealistic to encounter by mistake, that we can handle them manually. I don't mean in terms of the pin API, but the language in general. I'm not very afraid of most of what's in the Unsound label.

That is, rather than designing our abstraction with extreme conservatism around pathological feature interactions, we should fix every pathological feature interaction we discover so that users can define abstractions in comfort.

That said, I feel confident you cannot exploit this particular interaction without a pointer type that implements CoerceUnsized. Since CoerceUnsized is unstable, we can design it so that it enforces reasonable behavior on its implementors (possibly with compiler checks, possibly by just making it unsafe), so I'm not concerned about user-defined smart pointers. We just have to institute a rule that a type can't both implement CoerceUnsized and have the kind of exploit in its deref/derefmut/clone impl combination that this trick uses. (Formally expressing that is a prerequisite to either adding this requirement to CoerceUnsized or a marker trait.)

In the more immediate term, there are ways to exploit this on stable specifically by exploiting the rules around the #[fundamental] attribute and the fact that primitive references implement CoerceUnsized. We should fix that hole so that other users creating abstractions around Deref/DerefMut/Clone do not need to manage it.


This argument would be a lot more persuasive if it weren't made in a thread that is literally about not being careful/conservative enough in our analysis of how existing language features interact.

In that context, calling my arguments "ill-motivated" is rather out of place, I would think.

For user libraries to make any use of this, they would have to make extra assumptions about Deref/DerefMut that are not actually guaranteed by us. Unlike Pin, they cannot exploit the special status of libstd. That's shady at best. Worse, those user-created abstractions also have to work with any user-defined Deref, where there are basically no guarantees beyond type safety. I am entirely unconvinced that there are useful sound abstractions that users could build here without introducing their own marker trait.

To be clear, I am all in favor of finding a more general marker trait like DerefPure that might be more general-purpose than just for pinning. However, given that pinning alters the fundamental idea of what a Rust type is, I find it not at all surprising that it requires some very specific guarantees. In our formal model of pinning, the main extra assumption for Deref/DerefMut roughly translates to "for arguments in the pinned typestate, the result is also in the pinned typestate". This is exactly what @comex' counterexample breaks; get_evil_reference returns an unpinned &'a mut (dyn Dubious<'a> + 'a) even from a pinned &self.

In other words, the extra guarantee that we want for Deref is extremely pinning-specific, and (semantically speaking) pinning is a language-wide change, so it is only natural that you would have dedicated traits to talk about this new guarantee.

(I am aware that in terms of just the implementation, Pinning is a library-only feature. But in terms of soundness/correctness, it is not. Pinning adds a new "typestate" to Rust, so that every Rust type now has four states (every combination of owned/shared and pinned/unpinned). Accounting for that in our formal soundness proof in RustBelt required a global refactoring; every soundness proof of every type now has to somehow account for these new "pinned" typestates. This is frequently fairly easy, but pinning has global impact nevertheless. This makes me even less convinced of the case for a user-defined library.)

I disagree, strongly and viscerally. This is the approach that C takes to its operational semantics ("specify something and clarify when someone asks questions"). It is the approach that too large parts of the industry take to security vulnerabilities ("fix them as we find them"). It is not an approach that leads to an actually sound system. This thread is the best example of that.

If there is any reasonable doubt that a proposed language or library feature is sound, we must fix that and find ways to increase our level of confidence. We owe that to all the people that rely on us having done our due diligence so that they can hack without fear. Fixing pathological feature interactions only as we discover them is the opposite of that, the opposite of "defining abstractions in comfort". It means we're always going to wonder if this is really sound, and the entire tower of abstractions built in libraries like Pin might crumble when someone finds a bad interaction. Not all of them are going to be locally fixable.

Of course there is no certainty, there is always going to be some level of doubt of the soundness. But I view this thread as strong evidence that we were a bit too comfortable to ignore the subtleties of the trait system. When a soundness issue like this is discovered, we should reflect upon how that could happen, and see if we can avoid making the same mistake again in the future. I see none of that in your response. It's not a shame to make a mistake, like we did with Pin. But it is a shame to go on as if nothing happened.


I think we should avoid discussing further these philosophical disagreements; already they are at a level beyond this issue and I think any response from me would go even loftier. Sorry for having broached them.

I want to focus on this question: Which part of the pin API contains the problem? We clearly have a difference of opinion here. Here are the APIs of Pin that are necessarily involved in this:

impl<P> Pin<P> where
    P: Deref,
    P::Target: Unpin,
{
    fn new(pointer: P) -> Pin<P>;
}

impl<P, U> CoerceUnsized<Pin<U>> for Pin<P> where
    P: CoerceUnsized<U>,
{ }

impl<P> Pin<P> where
    P: DerefMut,
{
    fn as_mut(&mut self) -> Pin<&mut P::Target>;
}

impl<P: Clone> Clone for Pin<P> { }
My claim: the problem squarely falls in CoerceUnsized. That impl relies on all types that implement CoerceUnsized behaving in a way I will just call "sensibly."

What we did with the Pin API, in my view, was not in any sense a fundamental change to the way all pointer types behave (that might be true in the formalism, but I firmly insist it is not true in the Rust implemented in rustc). The point at which we made a restriction was that, through the combination of Pin::new and the implementation of CoerceUnsized, we imposed a restriction on types which implement CoerceUnsized: they must not manufacture "unsensible" return values in their DerefMut and Clone impls through interior mutability tricks.

This is not to do with the correctness of trait system as a whole, or even of the deref/derefmut/clone traits, but with the specific coercion capabilities enabled by the magic CoerceUnsized trait. When we added that impl, I did reason, informally, about the behavior of that trait, and believed (wrongly, it turns out) that all stable impls of it behaved sensibly.

What I did not consider was that combining the magic CoerceUnsized language feature with the magic #[fundamental] language feature allows you to create references that do not behave "sensibly," making the coercion from one type to another invalid: it allows you to construct a Pin<&mut T> to a !Unpin type T without upholding the necessary guarantees.

But this is not about the trait system as a whole; it's about specific language loopholes, which are intentionally still in a semi-stable state, that interact in a really surprising way. My opinion is that we need to close those underlying loopholes.

EDIT: Just to add that I believe the pin API is the only way to observe unsoundness through this CoerceUnsized/#[fundamental] trick, which is why I say adding the pin API imposed this restriction on CoerceUnsized. However, I don't feel completely confident that there isn't another way to observe unsoundness through that interaction, and I feel it is more probable that there's another way to achieve unsoundness without pin than it is that there's a way to achieve unsoundness without CoerceUnsized.

EDIT2: Further, I want to remind everyone that the correctness of Pin doesn't actually rely on all deref/derefmut/clone impls having any particular behavior (that would be completely unsound). It relies on the fact that there's no safe, generic way to construct a pin of a pointer to a !Unpin target, only specific constructors for pointers we know behave soundly. The problem here is that through a bad CoerceUnsized impl and Pin::new, you can do that in safe code.


@comex showed that Pin's CoerceUnsized was not the issue by doing the coercion before pinning.


This is true only when <P as Deref>::Target: Unpin holds. When the target is !Unpin, we are absolutely relying on Deref and DerefMut having a particular behavior (specifically, not moving out of the target). Normally, you must explicitly assert this via Pin::new_unchecked. However, as @comex showed, it's possible to get as_mut to do this for you, without ever relying on CoerceUnsized.


I think that would violate the requirement that the pinned value must never move again. For instance, this example from the Pin docs page would still be unsound:

use std::mem;
use std::pin::Pin;

fn move_pinned_ref<T>(mut a: T, mut b: T) {
    unsafe {
        let p: Pin<&mut T> = Pin::new_unchecked(&mut a);
        // This should mean the pointee `a` can never move again.
    }
    mem::swap(&mut a, &mut b);
    // The address of `a` changed to `b`'s stack slot, so `a` got moved even
    // though we have previously pinned it! We have violated the pinning API contract.
}

Nothing here depends on Deref or DerefMut.