Asynchronous Destructors

Notes on an API for async drop:


This mechanism seems very useful, and personally I see it as another very good chance for Rust to outperform C and C++.

For the implementation part though, maybe I'm wrong, but currently in Rust not every type implements Drop, so by coupling with the Drop trait, I think either a change to how drop glue and Drop work is needed, or some extra checking of whether a type implements Drop needs to be done at compile time, which... kind of defeats the point of reusing the Drop trait itself...

Carl Lerche has proposed an alternative API:

trait Drop {
    fn drop(&mut self);
    fn poll_drop_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        Poll::Ready(())
    }
}

In this, we would call poll_drop_ready and then run the drop glue on the type, so the implementation of poll_drop_ready should not duplicate what happens in drop. These should be equally expressive, so the only question is comparing uses of the two in different scenarios and reaching an opinion about which is more convenient.


Great to see a drop method taking Pin<&mut Self>. :slight_smile: Too bad that we can't change drop's type. Though I wonder if there's something that can be done here to help safe pinning at the same time -- while we are touching Drop anyway.

So what exactly would the loop look like here, then?

Well "then run the drop glue" is wrong. In an async context, everywhere we would enter the Drop::drop method, we first while let Poll::Pending = self.poll_drop_ready(cx) { }.

This could reduce the amount of duplication between the two methods, though it also means that you have to make poll_drop_ready put the type into the correct state if you don't want some synchronous call in the normal destructor to run.
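To make that concrete, here is a minimal sketch of sharing the work between the two methods (names like Connection are hypothetical, and the signatures are simplified to drop the Pin/Context machinery): both paths funnel into one cleanup helper, and poll_drop_ready records completion so the synchronous fallback in drop becomes a no-op.

```rust
use std::cell::Cell;
use std::rc::Rc;

struct Connection {
    cleaned_up: bool,
    // counts real flushes; shared so the effect is observable after drop
    flushes: Rc<Cell<u32>>,
}

impl Connection {
    fn finish_cleanup(&mut self) {
        if !self.cleaned_up {
            // stand-in for the real (possibly blocking) flush
            self.flushes.set(self.flushes.get() + 1);
            self.cleaned_up = true;
        }
    }

    // the async path; a real implementation would return Pending until
    // the flush's IO has actually finished
    fn poll_drop_ready(&mut self) -> bool {
        self.finish_cleanup();
        true
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // synchronous fallback: a no-op if the async path already ran
        self.finish_cleanup();
    }
}
```

If poll_drop_ready was driven to completion before the drop glue runs, the destructor finds nothing left to do; if the value is dropped from a non-async context, the synchronous fallback still fires exactly once.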

I'm not entirely sure I would call even this addition small, but I guess it's about perception.

I have a question about this approach, though, or maybe a small concern.

Let's say I have a future1 and a future2 and select over them. future1 finishes first and I'm no longer interested in future2, so I drop it. If future2 has an async destructor that takes a long time to run, I'm blocking the bigger (composed) future waiting for the destructor instead of progressing. IMO it would be better to somehow run the destructor in the background rather than stop the progress of the bigger future, and I can probably spawn dropping it as a separate task... but that won't be the most intuitive code, and people would forget to do it.

And another one. Let's say I have a struct containing two fields with async destructors. I suppose the compiler would generate the async destructor for this struct too. In what order does all this happen? Does it first fully async-drop the first field, then drop it, then async-drop the second; or sequentially async-drop everything and then drop everything; or does it allow waiting for the destructors in parallel (eg. select over the async destructors)? I guess this could have consequences for some code.

Also, while having to write an async destructor as a poll function might be cumbersome, I'd expect manually written async destructors to be needed for „leaf" futures (eg. IO primitives) that already contain some amount of manual polling, registering wakeups, etc. And they are not going to be written in everyday coding. Do you expect the need for them to be comfortable to write to arise?


So the full desugaring would be this?

while let Poll::Pending = self.poll_drop_ready(cx) { }

This is true whether the destructor is async or not. The fact that the destructor can't be async today means that if you need to guarantee that some IO happens, you need to make it blocking IO, blocking the entire thread, not only this particular task.

Ralf's post is what I was thinking, but I think it would be worth exploring generating the equivalent of a futures join on all of the fields of a struct, rather than running them sequentially. Users who want them to be run sequentially for some reason should be using ManuallyDrop anyway.
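A sketch of what join-style drop glue could look like (signatures simplified, with no Pin/Context; poll_drop_ready here is a hypothetical stand-in that returns true for Ready): each call polls both fields, so their waits overlap instead of running back to back.

```rust
// Hypothetical stand-in for the proposed poll_drop_ready: `true` means
// Ready. It is "fused": once done, it keeps returning true.
struct Field {
    polls_left: u32, // polls remaining until this field's cleanup finishes
}

impl Field {
    fn poll_drop_ready(&mut self) -> bool {
        if self.polls_left == 0 {
            true
        } else {
            self.polls_left -= 1;
            false
        }
    }
}

// Join-style glue for a struct with fields `a` and `b`: poll both every
// time, and finish only once both report Ready.
fn poll_drop_glue(a: &mut Field, b: &mut Field) -> bool {
    let a_ready = a.poll_drop_ready();
    let b_ready = b.poll_drop_ready();
    a_ready && b_ready
}
```

With two fields each needing two pending polls, the joined glue reports Pending only twice before completing, where sequential draining would wait through four pending rounds.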

You have a point there; I guess blocking only the task is the better situation.

Nevertheless, if async destructors try to deal with this, I'd expect them to go all the way. I just wanted to throw it in in case someone has an idea how to solve it naturally, or at least sketch some idea for solving it in the future. I believe it would be good for any future RFC to mention it as either explicitly out of scope, or proven impossible, if it doesn't provide a solution.

I don't think this is a productive way to frame the question. What you're proposing is spawning new tasks to run the destructors, implicitly. Setting aside the big barriers to implementing this (we'd need a global spawn API in the first place), spawning tasks carries its own costs, and it's a part of Rust's async/await model that new tasks are never spawned implicitly. Your comments are equivalent to people being surprised by the fact that futures "don't start running" until you await them: a fair perspective, but this is the model we've chosen.

If users do want things to be passed to a different task, that looks something like this:
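(The original snippet did not survive in this copy; a plausible sketch, assuming a Tokio-style spawn function — this is pseudocode, not a complete program:)

```rust
spawn(async move {
    // move the value into a fresh task; its (async) destructor runs
    // there, without blocking the task that dropped it
    drop(future2);
});
```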


(the name of the spawn function varies depending on what runtime you're using, of course)

That also wouldn't work for any non-'static values.


Sorry if I made it sound negative, or too pushy, or something. I just wanted to raise the concern.

I don't actually mean to say I'd prefer the async destructor to be automatically spawned. I think that would be a bad idea.

My question (or concern) was more along the lines of whether this can be solved some other way. Could, maybe, the compiler embed the waiting for the destructor into the bigger future instead of spawning it independently? Or maybe that's a bad idea too, because then the destructor runs at a different place than expected… maybe it really is best to just block the task then and there. Still, I believe it's valid to ask whether this can be solved in a way that doesn't surprise people, even if the answer to the question is „No, we'll document it".


In order to drop nested structures from the outside in, and to sequence or join the destruction of sibling fields, the drop glue would need to keep some state to know which poll_drop to call next. When the type is known, these steps could become states of the state machine generated by the async context in which the object is dropped. But how would this work when dropping trait objects? There could be a poll_drop in the vtable, but how would that function, when called repeatedly, know where it left off in calling poll_drop on the object's fields?


How about the problem of memory unsafety if the user calls mem::forget on the I/O future? (i.e. lifetime ends but the I/O operation will still access the buffer via a reference with that expired lifetime)

The ability to forget something is intrinsic to Rust at this point. If memory safety requires destructors, the two options are pinning and scopes.

Pin requires the destructor of a place to be run before invalidating it, IIRC. Scopes like crossbeam::scope allow loaning the space such that it cannot be forgotten in user code.
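std's thread::scope has the same shape of guarantee as crossbeam::scope, and shows it concretely: the scope loans borrowed data to worker threads, and does not return until every spawned thread has joined, so user code cannot skip the cleanup. A minimal sketch:

```rust
use std::thread;

// The scope loans `data` (a borrow, not ownership) to a worker thread.
// `thread::scope` only returns after every spawned thread has joined,
// so there is no way for user code to forget the join and let the
// borrow escape.
fn scoped_sum(data: &[i32]) -> i32 {
    thread::scope(|s| {
        let handle = s.spawn(|| data.iter().sum::<i32>());
        handle.join().unwrap()
    })
}
```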

This is a good question. I can think of two solutions:

  • We add a secret field to structs containing multiple fields with poll drop glue. This is similar to how we used to handle dynamic drop, and we've moved away from it.
  • We use the poll_drop_ready approach, and expect users to implement a "fused" poll_drop_ready that will eventually return Ready(()) repeatedly. Then, the struct can be stateless and just call poll_drop_ready on every field until they all return ready. Once they do, we call the destructors once.

I'm inclined toward the latter approach, personally, as it doesn't insert secret fields into users' types.
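A sketch of the fused contract with the full proposed signature (poll_drop_ready is not in std; Flusher and the no-op waker are illustration scaffolding): after completing its cleanup, the method keeps returning Ready(()), so stateless glue can simply re-poll it.

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct Flusher {
    pending_writes: u32, // pretend one in-flight write completes per poll
}

impl Flusher {
    // Fused: returns Pending while writes remain, then Ready(()) on
    // every call afterwards, so drop glue can re-poll it statelessly.
    fn poll_drop_ready(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.pending_writes > 0 {
            self.pending_writes -= 1;
            Poll::Pending
        } else {
            Poll::Ready(())
        }
    }
}

// Minimal no-op waker so the sketch can be driven without a runtime.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}
```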


Other users have basically addressed this, but it's a good opportunity to clarify what we can actually guarantee around destructors. I've looked a lot at the completion/cancellation issue that Matthias brought up previously, and I am confident it can be made safe without onerous runtime overhead.

First, as other users have said, we can't guarantee that destructors will run in general. However, we can make more specific guarantees, which are sufficient. For example:

  1. Something which takes ownership of a value can guarantee, through its API, that it will not give ownership of that value up until it enters a certain state. For example, a future can take ownership of a buffer and not give the buffer back until the IO is complete.
  2. Something which is pinned guarantees that its destructor will run before its memory is invalidated. So if you do not begin the IO event until the first time the future is polled and you make the destructor wait on the IO completing, you have a guarantee that the future will not be moved until the IO is completed.

However, it's worth adding a niggle about async destructors: we cannot guarantee, even with pinning, that the async destructor component will run. This is how easy it is to drop something without running its async destructor:

async fn foo(val: AsyncDropType) {
    // drop is a non-async fn; it never yields and does not run
    // async destructors
    drop(val);
}

That is, it's always safe to move values from an async context to a non-async one.

Scope APIs can come in handy here, because they keep you from ever giving up ownership of the value, so you can guarantee its destructor runs in an "async context."

Fortunately, async destructors are about making the destructor more efficient, not about safety. Even in the completion/cancellation case, the async destructor is an optimization (probably essential for performance) to avoid blocking this whole thread.

But this does mean that manually implemented combinators which drop the futures they contain will want to be rewritten to explicitly async-drop them. Those which don't will be nonperformant.

(I am pretty sure there is no way to get around this problem.)

But is this enough to support an async read_from_socket_into_buf(buf: &mut [u8]) API implemented with completion-based I/O, which would be the ideal solution?

It doesn't seem so, because that API doesn't take ownership of the buffer or pin it. Scope-based solutions also don't seem to work because, unlike in the sync case, the scope() creation function must be async, but then you can just mem::forget the future it returns and thus bypass the cleanup.

A possible solution could be to change the Rust language to support non-forgettable types and make mem::forget, Rc::new and Arc::new only work with forgettable types. The same can be done with non-sync-droppable types. Obviously for compatibility all current edition generics need to have implicit Drop + Forget bounds, so libstd and future combinator libraries will have to be changed to add ?Drop and ?Forget bounds where needed.

Then the naive API will work because the returned future will be neither forgettable nor sync-droppable.
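A hypothetical sketch of what such bounds might look like (nothing here exists today; Forget, the negative impl, and the signatures are all invented for illustration):

```rust
// hypothetical future-Rust, for illustration only
pub fn forget<T: Forget>(t: T);              // forgetting requires T: Forget

pub fn new<T: Forget>(value: T) -> Rc<T>;    // Rc/Arc can leak their contents,
                                             // so they would also require Forget

struct ReadFuture<'a> { buf: &'a mut [u8] }
impl !Forget for ReadFuture<'_> {}           // cannot be forgotten or leaked,
                                             // so its destructor must run
```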