Asynchronous Destructors

I don't think this is a productive way to frame the question. What you're proposing is spawning new tasks to run the destructors, implicitly. Setting aside the big barriers to implementing this (we'd need a global spawn API in the first place), spawning tasks carries its own costs, and it's a part of Rust's async/await model that new tasks are never spawned implicitly. Your comments are equivalent to people being surprised by the fact that futures "don't start running" until you await them: a fair perspective, but this is the model we've chosen.

If users do want things to be passed to a different task, that looks something like this:


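For instance, something like the following. (A self-contained sketch: `spawn` here is a stand-in for the runtime's spawn API, implemented as a trivial inline executor just so the example runs, and `Resource::close` is a hypothetical async cleanup method.)

```rust
use std::future::Future;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

static CLEANED: AtomicUsize = AtomicUsize::new(0);

// A do-nothing waker so the toy executor below can poll futures.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

// Stand-in for tokio::spawn / async_std::task::spawn / etc.; a real runtime
// schedules the task, this toy version just drives it to completion inline.
fn spawn<F: Future<Output = ()> + Send + 'static>(fut: F) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    while fut.as_mut().poll(&mut cx).is_pending() {}
}

struct Resource;

impl Resource {
    // hypothetical async cleanup method
    async fn close(self) {
        CLEANED.fetch_add(1, Ordering::SeqCst);
    }
}

fn hand_off(value: Resource) {
    // Explicitly move the value into a new task, which then owns it;
    // this is also why the value must be 'static.
    spawn(async move { value.close().await });
}

fn main() {
    hand_off(Resource);
    assert_eq!(CLEANED.load(Ordering::SeqCst), 1);
}
```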
(the name of the spawn function varies depending on what runtime you're using, of course)

That also wouldn't work for any non-'static values.


Sorry if I made that sound negative, or too pushy, or something. I just wanted to raise the concern.

I don't actually mean to say I'd prefer the async destructor to be automatically spawned. I think that would be a bad idea.

My question (or concern) was more along the lines of whether this can be solved some other way. Could, maybe, the compiler embed the waiting for the destructor into the bigger future instead of spawning it independently? Or perhaps that's a bad idea too, because then the destructor runs at a different place than expected… maybe blocking the task then and there really is the best solution. Still, I believe it's valid to ask whether this can be solved in a way that doesn't surprise people, even if the answer to the question is "No, we'll document it".


In order to drop nested structures from the outside in, and to sequence or join the destruction of sibling fields, the drop glue would need to keep some state to know which poll_drop to call next. When the type is known, these steps could become states of the state machine generated by the async context in which the object is dropped. But how would this work when dropping trait objects? There could be a poll_drop in the vtable, but how would that function, when called repeatedly, know where it left off in calling poll_drop on the object's fields?


How about the problem of memory unsafety if the user calls mem::forget on the I/O future? (i.e. lifetime ends but the I/O operation will still access the buffer via a reference with that expired lifetime)

The ability to forget something is intrinsic to Rust at this point. If memory safety requires destructors, the two options are pinning and scopes.

Pin requires the destructor of a place to be run before invalidating it, IIRC. Scopes like crossbeam::scope allow loaning of the space such that it cannot be forgotten in user code.

This is a good question. I can think of two solutions:

  • We add a secret field to structs containing multiple fields with poll drop glue. This is similar to how we used to handle dynamic drop, and we've moved away from it.
  • We use the poll_drop_ready approach, and expect users to implement a "fused" poll_drop_ready that will eventually return Ready(()) repeatedly. Then, the struct can be stateless and just call poll_drop_ready on every field until they all return ready. Once they do, we call the destructors once.

I'm inclined toward the latter approach, personally, as it doesn't insert secret fields into users' types.
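A sketch of what a hand-written "fused" poll_drop_ready might look like; the trait and its signature here are assumptions for illustration based on this discussion, not a settled API:

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Assumed shape of the trait under discussion.
trait AsyncDropReady {
    fn poll_drop_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()>;
}

// A value whose async cleanup needs one extra poll (e.g. flushing).
struct Connection { flushed: bool }

impl AsyncDropReady for Connection {
    fn poll_drop_ready(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if !self.flushed {
            self.flushed = true; // pretend the flush completes by the next poll
            return Poll::Pending;
        }
        // Fused: once ready, every later call returns Ready again, so the
        // containing drop glue needs no extra state of its own.
        Poll::Ready(())
    }
}

// A do-nothing waker so we can poll by hand in the demo.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut conn = Connection { flushed: false };
    let mut conn = Pin::new(&mut conn);
    assert_eq!(conn.as_mut().poll_drop_ready(&mut cx), Poll::Pending);
    assert_eq!(conn.as_mut().poll_drop_ready(&mut cx), Poll::Ready(()));
    assert_eq!(conn.as_mut().poll_drop_ready(&mut cx), Poll::Ready(())); // still Ready
}
```

The last assertion is the point of fusing: a caller may keep polling after Ready without any extra bookkeeping.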


Other users have basically addressed this, but it's a good opportunity to clarify what we can actually guarantee around destructors. I've looked a lot at the completion/cancellation issue that Matthias had brought up previously, and I am confident it can be made safe without onerous runtime overhead.

First, as other users have said, we can't guarantee that destructors will run in general. However, we can make more specific guarantees, which are sufficient. For example:

  1. Something which takes ownership of a value can guarantee, through its API, that it will not give ownership of that value up until it enters a certain state. For example, a future can take ownership of a buffer and not give the buffer back until the IO is complete.
  2. Something which is pinned guarantees that its destructor will run before its memory is invalidated. So if you do not begin the IO event until the first time the future is polled and you make the destructor wait on the IO completing, you have a guarantee that the future will not be moved until the IO is completed.
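Point 1 can be sketched as an API shape (all names here are hypothetical): the future owns the buffer for the duration of the operation and only returns it together with the result, so no reference with an expired lifetime can be left behind:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical completion-based read that owns its buffer.
struct OwnedRead {
    buffer: Option<Vec<u8>>,
    done: bool,
}

impl Future for OwnedRead {
    type Output = (Vec<u8>, usize);

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.done {
            // Only once the IO is complete does the caller get the buffer back.
            let buf = self.buffer.take().expect("polled after completion");
            let n = buf.len();
            Poll::Ready((buf, n))
        } else {
            // Stand-in for "the kernel still owns the buffer"; a real impl
            // would register the waker before returning Pending.
            self.done = true;
            Poll::Pending
        }
    }
}

// A do-nothing waker so we can poll by hand in the demo.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = OwnedRead { buffer: Some(vec![0u8; 8]), done: false };
    let mut fut = Pin::new(&mut fut);
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready((vec![0u8; 8], 8)));
}
```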

However, it's worth adding a niggle about async destructors: we cannot guarantee, even with pinning, that the async destructor component will run. This is how easy it is to drop something without running its async destructor:

```rust
async fn foo(val: AsyncDropType) {
    // `drop` is a non-async fn: it never yields, so it does not run
    // async destructors
    drop(val);
}
```
That is, it's always safe to move values from an async context to a non-async one.

Scope APIs can come in handy here, because they keep you from ever giving up ownership of the value, so you can guarantee its destructor runs in an "async context."

Fortunately, async destructors are about making the destructor more efficient, not about safety. Even in the completion/cancellation case, the async destructor is an optimization (probably essential for performance) to avoid blocking this whole thread.

But this does mean that manually implemented combinators which drop the futures they contain will want to be rewritten to explicitly async-drop them. Those which don't will be nonperformant.

(I am pretty sure there is no way to get around this problem.)

But is this enough to support an async read_from_socket_into_buf(buf: &mut [u8]) API implemented with completion-based I/O, which would be the ideal solution?

It doesn't seem so, because that API doesn't take ownership or pin the buffer, and scope-based solutions also don't seem to work because, unlike in the sync case, the scope() creation function must be async, but then you can just mem::forget the future it returns and thus bypass the cleanup.

A possible solution could be to change the Rust language to support non-forgettable types and make mem::forget, Rc::new and Arc::new work only with forgettable types. The same could be done with non-sync-droppable types. Obviously, for compatibility, all current-edition generics would need implicit Drop + Forget bounds, so libstd and future combinator libraries would have to be changed to add ?Drop and ?Forget bounds where needed.
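Hypothetically, such bounds might look like the following (none of this syntax or any `Forget` trait exists in Rust today; this is purely a sketch of the proposal):

```rust
// Hypothetical: a `Forget` auto trait with `?Forget` relaxed bounds.
pub fn forget<T: Forget>(t: T);                // mem::forget gains a bound
impl<T: Forget> Rc<T> {
    pub fn new(value: T) -> Rc<T>;             // Rc::new likewise bounded
}

// Current-edition generics get an implicit `Forget` bound; combinator
// libraries opt out where they must hold unforgettable futures:
struct Join<A: ?Forget, B: ?Forget> {
    a: A,
    b: B,
}
```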

Then the naive API will work because the future returned will not be forgettable and not sync-droppable.

No, I do not believe that API could possibly be made sound (and I don't think it's a priority to make it sound regardless).

(Discussion on making unleakable types is off topic for a thread about async destructors, but I'd direct you to this post about our decision not to add this sort of auto trait in 2015).

While reading the post, this was mentioned a couple of times:

This helps avoid the problem of types that don’t properly implement Drop when they implement AsyncDrop

I didn't see the option of making Drop a super-trait of AsyncDrop discussed; is there a fundamental reason for this? I'm not sure if the AsyncDrop trait proposed at the beginning adds any extra value over the final solution. It would allow T: AsyncDrop bounds, but I don't know if that would be useful (Drop trait bounds are weird).

Drop trait bounds are not only weird, they're not allowed. (As a reminder, users are also not allowed to call the drop method directly). The async destructor would have the same conditions, and so AsyncDrop: Drop would just seem to me like a less convenient way of adding an async drop method to the Drop trait.


My thinking here is that such an API would be ideal because it mirrors the sync API and has a zero-cost implementation, and that making major changes like the one proposed here without a proven plan to support the ideal API risks going through the pain of the change without getting the ideal benefit.

As for the statement that it "could not possibly be made sound", doesn't making that future unforgettable and undroppable (except for async drop) make it sound? (obviously with some design drawbacks)

It also seems to me that this is on-topic because types that need to be async-dropped should not be sync-droppable, and non-sync-droppability and unforgettability are similar concepts (not blocking in drop is not just an optimization; it is necessary for a properly functioning program that is not susceptible to either hanging indefinitely or creating a number of blocked OS threads unbounded by any constant and running out of RAM).

EDIT: It is also interesting to note that only non-'static Futures would have to be non-sync-droppable and non-forgettable (because 'static Futures can be dropped by spawning async-drop on the executor), so the "split" between droppable/forgettable and non-droppable/forgettable types should essentially be limited to future combinators, since most other code would either not use a Future or use a 'static Future.


Remember: once a future has been polled, it's been pinned. If it is !Unpin, it is unsound to forget a future once it's been started.

Due to this, I think (though don't have the proof) that a completion-based future that doesn't own the buffer can actually be made sound.

The basic API, super sketch:

```rust
enum State { New, AwaitingOp, AwaitingCancellation, Cancelled, Done }

struct CompletionFuture<'a> {
    state: State,
    buffer: &'a mut [u8],
    pinned: PhantomPinned,
}

impl Drop for CompletionFuture<'_> {
    fn drop(&mut self) {
        // synchronous fallback: block until the kernel is done with `buffer`
    }
}

impl CompletionFuture<'_> {
    fn drop_poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        match self.state {
            State::AwaitingCancellation => self.check_cancellation(cx),
            State::Cancelled | State::Done => Poll::Ready(()),
            State::New => Poll::Ready(()),
            State::AwaitingOp => self.request_cancellation(cx),
        }
    }
}

impl Future for CompletionFuture<'_> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        match self.state {
            State::New => self.request_operation(cx),
            State::AwaitingOp => self.check_operation(cx),
            State::AwaitingCancellation => self.check_cancellation(cx),
            State::Cancelled | State::Done => Poll::Ready(()),
        }
    }
}
```
Leaking the future after it's been started would require the loan of the space to last forever, IIUC. Also IIUC, just boxing the future then forgetting it is also unsound even though the future's location itself isn't reclaimed, because the value at that spot is invalid once the lifetime 'a expires.

This is definitely not true, and it sounds like it's probably based on misapplying the word "invalidate" as it's been defined by some of the UCG work to the way it's being used in reference to pin.

You can mem::forget a Pin&lt;Box&lt;T&gt;&gt; even if T does not implement Unpin. It's completely safe and we can't assume it won't happen. All that's protected is the actual memory representation of T, which cannot be overwritten unless the destructor runs.
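A concrete illustration, in entirely safe code: forgetting a pinned, !Unpin value compiles and runs fine; the destructor simply never runs and the heap allocation leaks, which is exactly what the Pin contract permits.

```rust
use std::marker::PhantomPinned;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

// !Unpin, because of the PhantomPinned marker field.
struct NotUnpin { _pinned: PhantomPinned }

impl Drop for NotUnpin {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let pinned = Box::pin(NotUnpin { _pinned: PhantomPinned });
    // Entirely safe: the memory is leaked rather than invalidated, so the
    // Pin contract (no move or overwrite before drop) is never violated.
    std::mem::forget(pinned);
    assert!(!DROPPED.load(Ordering::SeqCst)); // destructor never ran
}
```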

However, this is still off topic for this thread. Async destructors have applications outside of the completion/cancellation problem, and there are other threads on internals for discussing ways to solve the completion/cancellation problem.

On one hand I like the idea of continuing not to insert secret fields into users' types. On the other hand, these secret fields would be exactly equivalent to the current on-stack drop flags, and I like the idea of consistency with sync code.

This suggests a third approach: Depending on how frequently we expect people to use hand-written futures as trait objects (rather than directly from an async fn), we could just expect such types to implement their own drop flags for poll_drop when necessary.

This is more onerous than implementing a fused poll_drop_ready, but if that case is rare enough then we get to keep poll_drop and most of the time people won't need to write anything.

(Or, alternatively, a fourth approach: make "fused" a requirement of all poll_drop_readys, and generate them as drop glue.)


I'm not sure how this could work; I don't think you've understood the problem the same way I have. Consider that Bar and Baz both have async dtors:

```rust
struct Foo {
    a: Bar,
    b: Baz,
}

let x: Box<dyn Any> = Box::new(my_foo);
```

We need to implement the drop glue for Foo, not for Baz and Bar. Baz and Bar already will have their own "drop flags" in essence as part of implementing poll_drop_ready, but if we want to guarantee we won't call poll_drop_ready after it returns Ready, we'd need additional state in structs like Foo.

Not sure how this is different from the second approach I listed.

Right, I'm suggesting that in this example we just wouldn't async-drop Foo. This is "okay" because async drop is purely an optimization.

If the author of Foo did want to be async-dropped when used like this, they would need to add extra state to Foo to track which of a and b they've dropped.
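A sketch of what that extra state could look like, reusing the assumed poll_drop_ready shape from earlier in the thread (Bar and Baz here are toy stand-ins whose cleanup each takes one extra poll):

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Assumed shape of the trait under discussion.
trait AsyncDropReady {
    fn poll_drop_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()>;
}

struct Bar { ready: bool }
struct Baz { ready: bool }

impl AsyncDropReady for Bar {
    fn poll_drop_ready(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.ready { Poll::Ready(()) } else { self.ready = true; Poll::Pending }
    }
}

impl AsyncDropReady for Baz {
    fn poll_drop_ready(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.ready { Poll::Ready(()) } else { self.ready = true; Poll::Pending }
    }
}

struct Foo {
    a: Bar,
    b: Baz,
    // Hand-rolled drop flags: the state the compiler would otherwise have to
    // hide inside the type, tracking which fields have finished cleanup.
    a_done: bool,
    b_done: bool,
}

impl AsyncDropReady for Foo {
    fn poll_drop_ready(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let this = &mut *self; // fine here: every field is Unpin
        if !this.a_done {
            match Pin::new(&mut this.a).poll_drop_ready(cx) {
                Poll::Ready(()) => this.a_done = true,
                Poll::Pending => return Poll::Pending,
            }
        }
        if !this.b_done {
            match Pin::new(&mut this.b).poll_drop_ready(cx) {
                Poll::Ready(()) => this.b_done = true,
                Poll::Pending => return Poll::Pending,
            }
        }
        Poll::Ready(())
    }
}

// A do-nothing waker so we can poll by hand in the demo.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut foo = Foo {
        a: Bar { ready: false },
        b: Baz { ready: false },
        a_done: false,
        b_done: false,
    };
    let mut foo = Pin::new(&mut foo);
    let mut pendings = 0;
    while foo.as_mut().poll_drop_ready(&mut cx).is_pending() { pendings += 1; }
    assert_eq!(pendings, 2); // one Pending per field, never re-polling a finished one
}
```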

But presumably Foo would more often be used as a local to an async fn, where those drop flags could be part of that future.

It's the same, but automated by the compiler. If that's what you meant, then it's identical. 🙂

To be fair, you explicitly brought up the completion/cancellation problem in the blog post as a "really interesting low level use case" for async destructors.

But whether it's worth continued discussion in this thread really depends on whether, as @bill_myers suggests, a solution to that problem can be had which would also change the design of the asynchronous destructors feature – apparently by having the type system make it impossible to sync-drop an object that wants to be async-dropped. I'm fairly skeptical that that could actually work, though.

It seems to me that the time to fix the problem would have been before Pin was stabilized. Stack-allocated pins have the property that they cannot be forgotten, thus lifetimes in them cannot expire before the pinned object is dropped.* If we had somehow enforced the same requirement for Box::pin and other non-stack pin functions, perhaps by having them require T: 'static (at least until a more comprehensive solution could be devised), then "lifetimes cannot expire" could have been part of the Pin guarantee. However, it doesn't seem like anybody realized this guarantee would be useful until Pin was already stabilized.

Personally, I still think that although Pin was an elegant design within its constraints of not requiring compiler changes, it ought to eventually be deprecated in its entirety in favor of some kind of !Move-like solution once the compiler has had time to catch up. Unfortunately, it seems increasingly difficult to do such a thing backwards compatibly, which means it probably won't happen at all.** If such a transition did happen, though, it could theoretically provide an opportunity to revisit and extend the guarantee.

* Except for "stack" pins within async fns. They can currently be forgotten if the Future representing the async fn is itself forgotten, but if the latter were impossible, the former would be too.

** I wish we had limited Pin to specific pointer types like &T, &mut T, Box<T>, etc., rather than any P: Deref. If we had, it would be possible to eventually turn e.g. Pin<&mut T> into an alias for &mut Pinned<T>, for some Pinned type that would be !Move and thus not subject to mem::swap.