Async Drop via binding

Summary

Provide an asynchronous-capable Drop mechanism that works by explicitly marking the let-bindings of the variables it affects.

Motivation

Some tasks started within coroutines manage resources in ways similar to synchronous types; that is, they are expected to release some of their claims. Currently, they must either be able to do so synchronously via Drop or be documented as requiring an explicit asynchronous release on scope exit.

Guide-level explanation

We introduce an async version of the Drop trait, AsyncDrop (via HRTB or a poll method; the exact form of this trait is somewhat of a bikeshed). The trait is not automatic and structural in the same manner as Drop is, except for the negative rule for Copy. The proposed form is:

trait AsyncDrop {
    fn poll(_: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<()>;
}

We add a new binding mode for variable declarations (such as let) which can only be used in async contexts such as async blocks and async functions.

async fn example(http: &HttpSocket) {
    // Makes sure to properly finish the request, usually. Failing to do so
    // would corrupt the shared stream state, a potential security issue if
    // this socket is shared between parties (reverse proxy).
    let async req = http.get("/example").await;

    req
        .body_stream()
        .for_each(|fragment| {
          /* Some work. If this panics, it would not cleanup `req` with `let`.
           * `let async` fixes it and allows depleting the body.
           * This is necessary for proper framing of responses (in HTTP/1.1).
           */
        })
        .await;
}

Types bound in this binding mode are treated specially when the variables would exit scope (and thus be dropped). They are instead pinned (only virtually; they are no longer movable from here on) and then the type's Drop coroutine is awaited before the usually invoked Drop.

Existing Drop semantics are not modified. There is no effect when the value is not bound to a variable with the new binding mode. There is also no effect when the async fn itself is dropped before being polled to completion.

Instead, the types of coroutines naturally generate the appropriate AsyncDrop implementation, which performs the correct cleanup of all variables exiting scope that have their own AsyncDrop coroutines.
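To make the mechanics concrete, here is a hedged sketch of what the generated cleanup could desugar to, assuming the poll-method form of the trait above; AwaitDrop and scope_exit are illustrative helpers introduced here, not part of the proposed surface syntax:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

trait AsyncDrop {
    fn poll(_: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<()>;
}

// Adapter future that awaits a value's AsyncDrop coroutine.
struct AwaitDrop<'a, T: AsyncDrop>(Pin<&'a mut T>);

impl<T: AsyncDrop> Future for AwaitDrop<'_, T> {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<()> {
        // Forward to the value's AsyncDrop::poll until it reports Ready.
        AsyncDrop::poll(self.0.as_mut(), ctx)
    }
}

// Roughly what scope exit of a `let async` variable amounts to.
async fn scope_exit<T: AsyncDrop>(mut value: T) {
    // Pinned 'only virtually': the value is not moved again from here on.
    let pinned = unsafe { Pin::new_unchecked(&mut value) };
    AwaitDrop(pinned).await;
    // The usual synchronous Drop glue still runs as `value` leaves scope.
}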

We don't aim to make it impossible to forget the release of resources, just syntactically easy to release them. Compare this to Drop, which doesn't make it impossible to leak but just makes it structurally simple to write code that does not.

Reference-level explanation

A new modifier for the identifier pattern (async) is added. The pattern is an alternative to the ref qualifier in identifier patterns. It can only be used in the body of a coroutine (async fn). As a result, the resumption method of the coroutine is modified to insert the await expansion of AsyncDrop::poll at those locations determined by the drop scope and its unwinding edges. These polls will be called with the context passed to the coroutine resumption (derived from its resumption argument). The Drop glue of the coroutine itself is not modified.

There are otherwise no further semantics attached to the variable. It can be moved from in the usual sense. Consequently, the asynchronous drop then depends on the qualifier of the target place.
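As an illustration in the proposed syntax (not valid Rust today; make_resource is a hypothetical constructor), the qualifier follows the place, not the value:

async fn example() {
    let async a = make_resource();
    // Moved out: `b` is a plain binding, so only the ordinary synchronous
    // Drop runs for the value now held in `b`.
    let b = a;
}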

Drawbacks

This complicates the syntax and semantics of let. Depending on the open questions, it may also complicate the analysis of temporaries.

Rationale and alternatives

Other designs explored modifying Drop itself. A fundamental problem is the missing access to any task context. Additionally, coroutines defer control to an executor, which is not easily possible from within a synchronous function (they don't compose).
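A minimal sketch of that composition problem, with a hypothetical Connection type:

struct Connection;

impl Connection {
    async fn shutdown(&mut self) { /* flush and close, cooperatively */ }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // `self.shutdown().await` is impossible here: `drop` is synchronous
        // and has no task Context. Blocking on the future instead (e.g. a
        // nested block_on) does not compose with the surrounding executor.
    }
}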

One such design: https://github.com/rust-lang/rfcs/pull/2958

The differences to it are:

  • Semantics introduced here are explicit. This addresses subtle compatibility issues.
  • Only async fn is changed and no obligations are put on other contexts.
  • Fields are only recursed into when the type implements this itself.

The above RFC also argues that Pin<&mut Self> is not possible; I'd like to argue the opposite. Pin itself provides an ordering between the call to Drop::drop and the invalidation of the memory location that was observed to be pinned. Not more! None of the additional calls modify this ordering. If a type decides to utilize the &mut Self it is given in its own Drop, then it must not rely on pinning. In particular, if it moves some fields within its destructor then it must not implement pin projection. This statement is true regardless of AsyncDrop.

Another alternative is some handle to the executor that spawns a new cleanup task which is not awaited. This has the drawbacks of an extra field, some design problems of making the handle generic, and that the cleanup cannot borrow any of the existing resources (it must instead take ownership of them).

This design ensures that semantics are only modified for code that has access to such a context. As a scope-based mechanism it also naturally composes much better with advanced lifetime uses.

Prior art

(Edited 2024 to add C# and TypeScript, which I wasn't aware of initially.)

C#

There's a separate interface type IAsyncDisposable which can be combined with a scoped statement, await using, to asynchronously clean up a resource at the end of a scope. This works directly analogously to the using syntax, which implements what Rust would describe as RAII (as opposed to garbage collection). The DisposeAsync method of the interface returns (and possibly constructs) a task to be driven to completion when the object is scheduled to be disposed.

One interesting note: the language designers recognize that 'leaving scopes' is an entirely sequential operation, and the original task is blocked until the single running disposal is complete. On some schedulers space is limited, though, and you may want to queue the disposal in another context instead so that it does not consume additional resources. I don't think the above design has this problem, as its unwinding should occur as part of the normal state machine translation. However, this has the downside that all the context required for the asynchronous disposal must already be part of the object bound by let async. Together these might consume surprising amounts of space in the surrounding async fn making use of such a binding.

TypeScript

TypeScript 5.2 introduced using and await using, establishing an interface for deterministic disposal in both synchronous and asynchronous contexts. They are used by creating variable bindings with the respective statement. The behavior is pretty similar to the one proposed above, except that their 'panic path' does not need to be defined quite as in-depth.

Python

In Python, async with ctx as v: is a block statement that enters the asynchronous context manager ctx (with v being an object that is the result of entering) with an asynchronous operation (awaiting ctx.__aenter__()). When code steps out of the block statement, with or without an exception, it will exit the context manager with another asynchronous operation (awaiting ctx.__aexit__(*exc_state)).

However, the devil is in the details. Especially since the task context is not declarative and effects are not special, the exact semantics of this can be rather surprising.

# <https://docs.aiohttp.org/en/stable/>
async with session.get('http://python.org') as response:
    html = await response.text()

Note that Python implements coroutines as pure generators. Internally, it relies on exceptions as well as an implicit task context to communicate with the executor. When a coroutine is finalized (i.e. 'garbage collected') it unwinds by raising a GeneratorExit exception inside the generator.

Consequently, when a coroutine as above is finalized (i.e. dropped) it will call the __aexit__ code with a special exception state corresponding to GeneratorExit. The special cleanup can thus run some code, which will work as expected if it is entirely sequential. Importantly, the code may even catch the special finalizing exception.

import asyncio

class AsyncTestContext:
    def __del__(self):
        v = getattr(self, 'entered', None)
        print(f'Currently entered: {v}')
    async def __aenter__(self):
        self.entered = True
    async def __aexit__(self, *args):
        # Pretend to do work, yield once.
        await asyncio.sleep(0.0)
        self.entered = False

This is also the source of a potential bug: the event loop (executor) may be gone, or may no longer accept the submission of further coroutines, when finalization is invoked. Indeed, while __aexit__ is declaratively a coroutine, if it tries to use its context it may fail and throw other exceptions instead of 'unwinding'. Exiting with an exception that is not GeneratorExit is loudly ignored by the generator (it will print a warning and stack trace), but for surrounding code, no longer aware of the true cause, the situation can quickly escalate. It may retry (an infinite loop of exceptional behavior in a finalizer!) and generally encounter many unforeseen code paths.

Some Python code to play around with
import asyncio


class Test:
    def __del__(self):
        v = getattr(self, 'entered', None)
        print(f'Currently entered: {v}')
    async def __aenter__(self):
        self.entered = True
    async def __aexit__(self, *args):
        await asyncio.sleep(1.0)
        self.entered = False


async def acount(*args):
    import itertools
    for i in itertools.count(*args):
        yield i


async def leaky(syncer):
    #async for _ in acount():
        async with Test():
            syncer.set_result(True)
            # Yield before exiting!
            await asyncio.sleep(1.0)
            # _usually_ unwinds before here is reached
            assert False


def main():
    # A local event loop to ensure GC
    l = asyncio.new_event_loop()
    f = l.create_future()
    l.create_task(leaky(f))
    l.run_until_complete(f)
    del f,l
    print('complete')
    # leaky(f) is still running

main()

Unresolved questions

  • The location of temporaries in async fn could be specified such that they, too, can be qualified as async. With the proposed design this needs to be decided before stabilizing the AsyncDrop trait.

  • The exact AsyncDrop trait needs to be defined. Allowing it to construct an arbitrary (HRTB) custom future type could be useful for sharing finalizers of different objects.

  • Will and should it generalize to other kinds of effects (e.g. control flow)? Some form of Drop where the destructor returns a Result and which ?-tries on this method would come to mind as a sort-of-natural extension of the concept.

Future possibilities

This design could also motivate further bindings for other effects. It should be carefully monitored that this does not create complexity in understanding the semantics (such as sequencing) of potential other combinations.

3 Likes

(NOT A CONTRIBUTION)

I'm excited to see another async drop proposal! I'd love to see more movement on this feature; I think it's one of the biggest holes in the async MVP and the one that has received the least love.

My comments follow.


These are the two points in tension, between which there is a necessary trade off: in RFC 2958 I proposed an async drop that does recurse into a type to properly async drop any fields of something being async dropped. You propose one that does not. Because Drop::drop does not pass a pinned argument, there would be an interleaving of pinned and unpinned references to those fields if drop glue recurses. Naturally, if you do not do that you can pin in the async drop.

But that is a huge loss! First of all, if I wrap a type that implements AsyncDrop in a wrapper, I have to manually implement AsyncDrop or I lose that type's AsyncDrop semantics. This is a huge footgun by itself. But it gets worse: if a type later on adds an AsyncDrop implementation, downstream users don't know, and will not get that behaviour.

If a type is generic, it will need a conditional implementation to recurse into its fields. Conditional impls aren't permitted with Drop; I think this RFC proposes to permit them for AsyncDrop. However, then if I have a generic type, unless specialisation stabilises I cannot implement my own AsyncDrop for Self<T> and also AsyncDrop for Self<T: AsyncDrop>.
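To sketch that coherence conflict with a hypothetical wrapper type (without specialisation, the two impls below are rejected):

struct Wrapper<T>(T);

// My own cleanup, for any T:
impl<T> AsyncDrop for Wrapper<T> { /* ... */ }

// My cleanup plus recursion into the field, only when T is async-droppable:
impl<T: AsyncDrop> AsyncDrop for Wrapper<T> { /* ... */ }
// error[E0119]: conflicting implementations of trait `AsyncDrop`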

Personally, I think making people manually implement AsyncDrop would really hurt its utility and I think RFC 2958 has the right approach to making drop glue work.


These two statements are not different from RFC 2958:

RFC 2958 only proposed inserting calls to poll_drop_ready inside of async contexts and does provide the task context to this call. I don't know if someone else has written a different proposal where these are different, but I think it would be a mistake. We're completely in consensus on this.


I think the main novelty in this RFC, as you allude to in the title, is the introduction of an async binding to indicate when a type will be async dropped; without those bindings the async destructor won't run.

There are two advantages I see regarding this:

  • It introduces a syntactic notice that there will be a yield point in the destructor; being able to enumerate all the yield points of a function is listed as one of the advantages of async/await, so it makes sense that people would like to have this. However, the let async will be nowhere near where the yield point is, and users (or their tooling) would need to figure that out.
  • The RFC doesn't say this, but I would presume you can't pass ownership of a let async binding to a non-async function. This means you know the async destructor will be run. This would be really nice to have! You could also maybe make let async properly pin the binding, which would also be a nice feature to have.

The main disadvantage is:

  • If users forget to let async something with an async destructor, they don't get async drop functionality. Clippy could probably help with this.

Overall, I think this is a really interesting point in the design space to consider. I think especially for the second advantage, it seems possibly worthwhile - and because of the second point, I think it's likely users will learn to remember to let async things, mitigating the disadvantage. I'm not completely sold, but I like that this option has been brought to the table.


This RFC doesn't seem to address the question of the relationship between async drop and panics. To me this is the biggest open design work to be done to solve async destructors.

5 Likes

To add to the prior art, that's similar to how C# does it too.

My guess is it uses await instead of async because it's adding a suspend point (later), not creating a thunk.

6 Likes

The design in this proposal is significantly more brittle than Drop. Ordinary Drop is composable: all fields are dropped, all locals are dropped, and you really need to go out of your way to leak resources (like using refcounting, leak APIs, or similar). In most code leaks are entirely a non-issue.

With the given proposal, AsyncDrop isn't composable. Putting an AsyncDrop type into a field doesn't mean it will be async-dropped (for private fields, i.e. those not visible in the current scope, this seems to be entirely impossible, since you can't make a local binding to them). Passing AsyncDrop types into or out of functions won't invoke AsyncDrop, unless an async binding is explicitly used. Since you write

I assume that something like

let async x = foo();
let y = x;

also compiles without any issues - and entirely silently discards the AsyncDrop glue. It is unacceptable that simple variable bindings can leak a variable.

My thoughts are that structural composability could be merely a medium-sized issue, similar to pin projection. Indeed, how often does one even write a Future with fields of individual futures? And how often is the asynchronous wait on these fields a correctness concern? The hypothesis is that the prominent examples that come to mind pretty much boil down to async {}, which does so implicitly and can be implemented correctly. The claim is then that the mechanism provides enough benefit in this common case. And as mentioned by boats, clippy could help with the rest. (Note: the RFC should expand on how exactly one would implement this if one wants to emulate a generator. Some intrinsic similar to drop_in_place is definitely missing here.)

Would you have an example of such a type? Feel free to contradict my assumption of 'mostly coroutines' here.

It's not clear if a total guarantee is a requirement in practice. We could gather this as a data point, though: compare the use of async fn with the number of custom Future implementations. Really, I want to make the argument that this is comparable to linear types vs. Drop: there is no guarantee of dropping with a simple let, yet practice seems to validate it as a good tool regardless. That's the kind of tool that let async is supposed to provide as well.
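For reference, a leak under plain Drop today needs nothing more than safe code; this is the non-guarantee being compared against:

fn main() {
    let file = std::fs::File::open("Cargo.toml").unwrap();
    // The destructor never runs; no unsafe is required to leak.
    std::mem::forget(file);
}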

Yes, this is accepted. Though if x were pinned, it would not be allowed. How is this different from allowing mem::forget or moving a value? let async is about providing a tool to have the compiler write the correct coroutine generator, not so much about also guaranteeing type state. Speaking of which, the treatment of temporaries would decide how the std::pin::pin!() macro behaves (a temporary is used internally), and it should definitely allow controlling the manner in which the value is dropped. I'll put it in the open questions, alongside the treatment of temporaries, but it doesn't appear unsolvable.

To lay out more thoughts on the relation to Drop and linear typing, consider that Drop assumes that finalizing a value takes no further context and also returns no further context. There are several cases where this, too, is lacking. std::thread::scope might be the canonical example regarding linearity. An encoder might want to flush when going out of scope, returning error information. A specially allocated buffer could take an explicit arena handle to recycle the memory allocation in a thread-local cache. It's possible to implement such types regardless, by providing an explicit finalizer method with the correct semantics. One from memory: png::Writer::finish; there surely are others. The compromise common to these is that Drop preserves soundness, and other finalizers can exist that preserve additional properties on top.
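A minimal sketch of that compromise with a hypothetical encoder: Drop preserves soundness via a best-effort flush, while an explicit finish() finalizer preserves the additional property (error reporting):

use std::io::{self, Write};

struct Encoder<W: Write> {
    out: W,
    buffer: Vec<u8>,
}

impl<W: Write> Encoder<W> {
    /// Explicit finalizer: reports flush errors to the caller.
    fn finish(mut self) -> io::Result<()> {
        // Drop still runs afterwards, but the buffer is then already empty.
        self.flush_buffer()
    }

    fn flush_buffer(&mut self) -> io::Result<()> {
        self.out.write_all(&self.buffer)?;
        self.buffer.clear();
        Ok(())
    }
}

impl<W: Write> Drop for Encoder<W> {
    fn drop(&mut self) {
        // Soundness-preserving fallback: best effort, the error is discarded.
        let _ = self.flush_buffer();
    }
}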

Putting it this way, let async does not aim to provide mechanisms where waiting is a necessary property for soundness. It still relies on Drop for the basic guarantee and does not aim to go beyond it regarding linearity (so, not at all). The conjecture is that this is enough to be useful in simplifying the task of writing good asynchronous code.

Note that the synchronous non-Drop finalizers appear very similar in structure to the named deficiencies of not guaranteeing a context when dropping an AsyncDrop type. Yet, while they are a problem, they are not insurmountable and definitely not unacceptable in the language; the current Drop is acceptable anyway. I tend to say that, considering existing code as good evidence to the contrary (Drop is useful regardless of non-linearity), the burden of proof lies on your side.

Nevertheless, as briefly touched on in 'Future possibilities', there is definitely room to provide better finalizers (with effects such as Result; or an explicit different unwind finalizer) in the language overall. Those problems, in particular as far as they touch on soundness concerns, are out-of-scope of the RFC though.

The main contribution, regardless of accepting the concrete mechanism, is considering if modifiers on binding patterns could be used to express any of this.

(NOT A CONTRIBUTION)

This seems like a really different understanding of how async drop will be used from mine. Futures themselves will not implement async drop, but types like async IO handles or async IO operation buffers. Users may embed these types (and likely even generically) within their types, often end users in a manner specific to their program because they're attempting to create some abstraction around some business logic, and I do not believe it would be viable to expect these users to remember to implement AsyncDrop every time they do this.

Most importantly I don't think you get enough value from giving this up. It really sucks that Drop doesn't take a pinned reference, but you can, by following the correct discipline in your drop method, construct a pinned reference in a poll_drop_ready method if you need it. Probably macros similar to pin project will enable this automatically. This is worse than just receiving a pinned reference in the signature of course, but being able to rely on async destructors of fields running automatically is way more valuable in my opinion.

4 Likes

Hm, that's true. The transitive ramifications could be quite a burden for everyone creating abstractions. Another potential idea for the drop(&mut self) dilemma would be to have AsyncDrop replace the usual drop for let async variables. It could then again be structural. That would again be sound because the fields are no longer accessed after being pinned. Of course, this still won't fully solve how to run and await that finalization on panic unwinding. And the compatibility with Drop must be partially preserved.

It'll take some time to figure out the details and problems of having two completely separate ways of finalizing values.

(NOT A CONTRIBUTION)

If you do this, every type with a Drop implementation needs to add an AsyncDrop implementation, so that they get dropped properly in an async context, otherwise their destructors won't run.

(This is true regardless for some cases, like data structures which call the destructors of their elements manually because Rust can't auto-generate the structural behavior. They'll all need to implement async destructors as well, unfortunately, no matter how it works.)

IMHO it will be beneficial if async drop can be enforced, i.e. it is always guaranteed to be called and any type that contains AsyncDrop types would also implement AsyncDrop.

An example will be io-uring.

Under io-uring, you have true async read/write I/O, instead of just non-blocking I/O.

If you issue a read/write op that borrows/aliases a buffer, then simply dropping that future would leave a dangling reference in the io-uring instance, which will cause use-after-free.

The io-uring operation can be cancelled asynchronously, but then you still have to wait for the cancellation to be confirmed by the io-uring instance.

While wg-async and others consider an owned buffer the better solution for io-uring, since it avoids the lifetime problem and also improves efficiency (uring works best with owned buffers), IMO supporting this use case in async drop is still beneficial.

Almost all existing code is written assuming that read/write takes a reference, which means using the owned buffer would incur an additional copy.

Also, there might still be cases where passing a reference is beneficial, e.g. you know the size of the buffer required for the entire file ahead of time and it is very large (GB level), so you'd like to avoid copies.

If this solution is feasible, then an AsyncDrop trait needs to be implemented at each location where Drop is implemented, and each statement, expression, or closure should be considered to implement the AsyncDrop trait by default when receiving or forming an await.

I'm very curious about this. If I want to start helping to implement this RFC, where do I need to start? Follow the lifecycle of the trait?

This problem, as real as it is, doesn't need to be solved by the trait/binding directly. Consider how a similar problem is solved in std::thread::scope / rayon::scope, which have leak-soundness requirements with regard to lifetimes; it is not solved by Drop. There, a closure parameter is invoked from a scope that will Drop everything properly before returning.

The analogue from an async perspective might be a method of the executor that promises to drive an AsyncDrop future's finalizer to completion before returning from the method (even in the unwinding path).

executor.run_until_complete_and_async_drop(async {
    let async _op = unsafe { io_uring.submit(op) };
})

I believe this approach can be made composable by first establishing a modifiable task that is promised to be completed, and adding work onto this task instead of scheduling a separate one (i.e. have the Task act a bit like the Scope type from rayon). Is it possible to address the soundness puzzles of async code one by one in this way?

2 Likes

It still forces users to use unsafe though, since there is no way of conveying the poll-until-async-drop-is-done kind of information.

I think that if AsyncDrop is considered as mandatory and guarantees that it's always called, then we can submit read/write with reference to data in safe code.

This would also make implementing Drop easier since it knows that AsyncDrop must be polled to end, thus any pending async operation is done.

After reading a lot of Sabrina Jewson's and withoutboats' blogs on the issue, I think that what we really need isn't async drop, just async cancellation for futures. I envision a trait like this:

trait UnsafeFuture {
    type Output;
    /// # Panics
    /// It may panic if called after completion or cancellation.
    ///
    /// # Safety
    /// The caller must `poll()` to completion before drop
    /// or `poll_cancel()` to completion before drop.
    unsafe fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
    /// # Panics
    /// It may panic if called before `poll()` or after completion.
    ///
    /// # Safety
    /// The caller must `poll_cancel()` to completion before drop.
    unsafe fn poll_cancel(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        self.poll(cx).map(|_| ())
    }
}
impl<F: Future> UnsafeFuture for F { /* ... */ }

Marking the future as unsafe like this would allow runtimes like monoio to accept borrows instead of owned buffers. My reasoning is that, given it cannot possibly be made sound, if we want to support this at all it must be unsafe.

In this proposal, async fns and async blocks would implement both Future and UnsafeFuture by default, unless an UnsafeFuture is awaited that doesn't also implement Future. This would be analogous to how Send works with futures and closures.
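An illustration of that propagation rule (proposal-level semantics, not real Rust; read_into stands in for a borrow-based io-uring read):

async fn plain_task() {
    // Awaits only ordinary Futures: implements both Future and UnsafeFuture.
}

async fn uring_task(buf: &mut [u8]) {
    // Suppose `read_into` returns an UnsafeFuture that is not a Future;
    // then `uring_task(..)` implements only UnsafeFuture, analogous to how
    // awaiting a `!Send` future makes the enclosing future `!Send`.
    read_into(buf).await;
}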

For the other uses of async drop, such as closing files asynchronously or closing networking handshakes, I'd argue that the best approach is just to mark the types as #[must_use]. It may not be as elegant, but it is as good as it gets.

2 Likes

You could regard the binding as an explicit mechanism to mark what should be cancelled within a coroutine. It's specific to use within coroutines for similar reasons. I agree with the other points, though I wonder if executor cooperation could provide an encapsulation for the unsafe. To show how, let me extend the previous example to more concretely resemble std::thread::scope:

/// 'Librarified', encapsulated unsafe. Should be feasible.
pub fn in_async_scope<'env>(f: impl FnOnce(Scope<'_, 'env>), ex: &'env Executor) {
    ex.run_until_complete_and_async_drop(async {
        // The guard's AsyncDrop (cancel) awaits the completion of operations,
        // thus releasing all borrows before the return from `in_async_scope`.
        let async guard = unsafe { io_uring::scope_guard() };
        f(Scope::from(&guard))
    })
}

Yeah, that is a puzzle that will be quite intricate to explain and check. But 'replace' in the sense of specialization should work. Running the usual Drop if no asynchronous version is defined for a field should still be sound. In this composition the Pin is only created if no mutable reference is created (and vice versa). It will still be hard to figure out drop glue for dyn Trait objects which do not have this new form of async-drop-glue method in their vtable. Sound specialization would suggest they'd be dropped normally, which might be quite a pitfall.

If this panics, it would not cleanup req with let. let async fixes it and allows depleting the body.

(Sidenote, you should use "unwind" and not "panic" as panics can be set to abort)

I don't quite understand this part. Are you implying that the drop will be awaited during cleanup? That wouldn't work, unwind cleanup is not async. Or would it be polled just once? Or is the normal Drop invoked?

(NOT A CONTRIBUTION)

Hmm. Yes, it may possibly make sense to say that if a type provides AsyncDrop, its normal drop glue will not be called in an async context. Then you could provide pinning. And the trade-off is that anyone who implements that method needs to make sure it also does whatever drop would have done. I'm not sure if this works out, but it seems like an avenue to explore.

The way to make "specialisation" work is to have a default method which forwards to drop:

trait Drop {
    fn drop(&mut self);

    fn poll_drop_async(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<()> {
        unsafe { Pin::get_unchecked_mut(self).drop() };
        Poll::Ready(())
    }
}

The question then is whether or not that unsafe snippet in the default implementation is sound. Off the top of my head I think so but it may interact badly with pin projection in some way?

It's interesting also because it means you could do something like asynchronously close a file handle in the async drop and synchronously close it in the normal drop, without requiring more state. This would mitigate the problem of failing to async drop something.
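A hedged sketch of that file-handle idea, under the extended Drop trait sketched above; AsyncFile, sync_close, and submit_async_close are hypothetical stand-ins:

struct AsyncFile {
    fd: i32,
}

impl Drop for AsyncFile {
    fn drop(&mut self) {
        // Synchronous path: a blocking close.
        sync_close(self.fd);
    }

    fn poll_drop_async(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<()> {
        // Asynchronous path: fully replaces the synchronous close, so no
        // extra field is needed to record which path was taken.
        submit_async_close(self.fd, ctx.waker()) // Ready once confirmed closed
    }
}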


I've also thought of two lints that I think could be on by default in rustc:

  • If something with an async binding is moved out of that async binding; there was no reason to make that binding async as it does nothing (and you should make sure where it was moved does have an async binding).
  • If something that has or may have (covering generics and trait objects) a meaningful async drop is dropped without being in an async binding, so the binding should be tagged async.

Between these two lints I think you would always get a warning if your async bindings did not exactly correspond to your values with meaningful async drops being dropped in your code.

1 Like

It must be, by definition. As part of this proposed async drop definition, types are not allowed to rely on async drop being used, and must always tolerate being synchronously dropped.

The default impl has the exact same behavior as if the drop glue called Drop::drop directly. Thus it must be sound.

There is a small caveat w.r.t. pinning: going through poll_drop creates a Pin<&mut Self> which wouldn't have existed if Drop::drop were called directly. This is fine, though; the Pin was never exposed to user code, so it is impossible that anything observed that guarantee being temporarily implied. Negative reasoning like this isn't the most comfortable, but if there's anywhere it's valid, it's valid here.

Reiterating, the library implementation has the exact same observable behavior as if the drop glue called Drop::drop instead of Drop::poll_drop.

If Pin had more implications on the dynamic opsem borrow rules, then it might be a more difficult question. That's not the world we have, though; Pin only impacts API, and &mut F is a shared mutable reference when F is an async-generated future. (I'm actively authoring a writeup on the constraints here. I got extremely close to finding a line where this isn't true, but ran into one too many things to create workarounds for. Writeup coming soon, hopefully.)

I was originally going to say that this scheme doesn't offer any state savings over the poll_drop_ready version which calls Drop::drop after, but it does save exactly one state: the state where the handle has been asynchronously closed and Drop::drop gets called. The difference is exactly that Drop::drop only handles the synchronous drop, not also the final step of asynchronous drop.

(The OP says “the type's Drop coroutine is awaited before the usually invoked Drop,” though; so per the OP Drop::drop does still get called.)

Though this is assuming that the only difference is whether Drop::drop gets called after the polling; that both versions get a guarantee that if the poll is used, it gets polled to readiness. That's a second way that this proposal does save a bit of code (though not state): this proposal at least suggests that the async drop should only be used when there's an unsafe guarantee provided that the async drop gets polled to completion.

(I don't think the OP actually provides any such guarantee, and this was introduced in an intermediate post. Referencing is hard from mobile.)

Without this guarantee, Drop::drop needs to handle partially poll-dropped states. The completion guarantee allows Drop::drop to never need to handle these states (but does not reduce the number of states needed).

What a poll_drop does not and cannot provide, though, is async fn drop. Any async-droppable type must have space to represent the partially-poll-dropped states within itself; there is no way to only reserve extra space in the dropping path. This does mean that the extra state to handle async drop is present and "wasted" if async drop is not used. I have no intuition for whether this would have any impact when async drop is used.
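To illustrate the state-space point with a hypothetical handle type: the partially-poll-dropped states must be representable in the type itself, so their space is reserved even when async drop never runs:

enum CloseState {
    Open,
    CancelSubmitted, // partially poll-dropped: lives in the type itself,
                     // not only on the dropping path
    Closed,
}

struct IoHandle {
    fd: i32,
    state: CloseState, // paid for even by purely synchronous drops
}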

I believe the intent is that the unwind would be suspended, and the future continue to be polled. Once we're Poll::Ready (or get cancelled via drop), the unwind resumes.

If we want to drive asynchronous drops from panic unwinds, then panic unwinds of async futures need to be able to suspend themselves. It's certainly somewhat surprising that a panic unwind wouldn't immediately propagate out of the task, but I think there's no semantic issue with panic unwinds being able to suspend. The most impacted would be unwind-to-abort shims, which need to be careful that they're ordered correctly such that an async let unwind suspension doesn't accidentally make unsound state available to other tasks.

(Cancellation via drop is itself another kind of unwind of the async task!)

The OP seems to be worded such that it's "allowing" missing an async drop to cause a leak, even if a synchronous drop is still done. If this is the correct interpretation, then certainly the expectation is that panic unwinds are suspended the same as async drop cancellation unwinds.

1 Like

That part was describing the intended operational semantics. Over the course of the thread it has become clear that not all work would be performed within the unwinding basic block of the method itself. The {generator}::advance(&mut self, ..) doesn't drop the generator itself, either. Rather, let async variables are fields in the generator that live across await points (those points when the places are async-dropped based on scope). The polling-based finalization of these fields happens as part of AsyncDrop of the generator itself, after unwinding has exited the advance method, inside the generator.

The RFC should get a more accurate and concrete description of how this interacts with executors of course.

Whether it makes sense to call poll once during unwinding is not completely clear. The comparable Python code does this, effectively, but as detailed in that section I'm also critical of this behavior as erratic and unpredictable. Additionally, once the variables have begun to be async-dropped we did create a Pin, and so it would likely be unsound to both partially async-drop and then Drop.

This one was, unexpectedly, something that was very unclear to me when writing the original text. The interpretation of not calling Drop when a value is async-dropped is the outcome of the deliberation here; I'll need to rework the full text to reflect all this.

In any case, it would likely be the executor's responsibility to actually do the asynchronous drops and perform them to completion when unwinding occurs as part of some polling. Nothing makes it unsound to Drop futures or leak them instead, but executors that do so wouldn't be able to offer any variant of run_until_complete_and_async_drop to enable the unsafe scope guard.

So to clarify, the immediate result of a panic unwind from an async fn using async let is still that the call to Future::poll will unwind normally. Further calls to Future::poll will exhibit sound misbehavior (they may panic, block forever, etc.).

With existing async, locals are synchronously dropped as part of the unwind. [playground]

If async let is going to enable async drop in the face of panic unwinds, it needs to change drop order. Namely, any async let locals of the future are not dropped as part of the unwind, but when the future is dropped. Anything they borrow from will thus necessarily need to be async let as well[1], even if it itself doesn't have a different async drop.
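A proposal-syntax illustration of that borrow chain (hypothetical helpers; not valid Rust today):

async fn example() {
    let async buf = vec![0u8; 4096];
    let async conn = connection_using(&buf); // borrows `buf`
    do_work(&conn).await;
    // If the await above unwinds, `conn`'s drop is deferred until the future
    // itself is dropped, so `buf` must outlive the unwind as well: it also
    // needs to be `let async`.
}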

If the unwind is not caught before unwinding through the future owning scope, the change in ordering is minimal; the future will be synchronously dropped as part of unwinding through that stack frame, and dropping the future will then drop all of the async-let locals.

The major difference is an executor running a task and catching the unwind. Perhaps an important semantic change: even just a synchronous from-unwind drop of the future probably happens outside of the unwind, so a panic from the async-let drop glue is now not a panic-in-panic and won't immediately abort[2], and will see thread::panicking() == false.

More directly meaningful, the executor now has the choice to instead of synchronously dropping the future, spawn a new task to do an asynchronous drop of the future. This would be quite tricky for a stack block_on if this is supposed to bypass normal synchronous drop. If this part panics, it's complicated and I don't know what behavior to recommend[2:1].

There's of course also the much simpler alternative: unwinds don't change behavior, and just do a synchronous drop. (Calling poll only once is useless and potentially actively harmful; the normal drop impl can do whatever it needs to, unlike in Python which has GC finalizers rather than universal RAII.) This further limits the utility of async drop, though; if you're relying on an async drop finalization to prevent resource leaks, now unwinds are leaky without any possible recourse other than synchronous cleanup (which blocks the poll that panicked, as a reminder).


From this, the takeaway is that enabling async drop for panic unwinds is possible but extremely subtle. Panic unwinds' interaction with drop glue is already subtle and they're intended to be used for developer error only, so while still not great it's probably not too problematic if a panic results in blocking an async worker for a bit to do synchronous cleanup. It's perhaps possible to still have a synchronous drop that gives access to the async context, so an async cleanup task can be spawned onto it.

Having access to asynchronous drop in normal non-unwinding control flow and drop-cancellation-unwind is certainly beneficial, even if panic unwinds still always do synchronous drops.

A minor caveat to drop-cancellation: because of drop ordering, async-let locals will be drop-cancelled linearly. Only once the prior-dropped local has completely dropped will the latter-dropped be notified it should register cancellation. Because Rust async futures don't get started until you await them, though, this only matters for spawned task handles which join on drop (and root futures which start eagerly).


  1. This has a reasonable likelihood of leading to every local being async let. Changing drop order needs to be opt-in for the current editions, but perhaps future editions could use the delayed drop semantics by default? Unfortunately, this makes unwind-drop-to-abort bombs essentially nonfunctional in async. They'll still work in non-async code, but there's no way to do so for an .await if it only gets dropped when the whole future does. ↩︎

  2. Saying you can't unwind from async drop (that such an unwind becomes an abort) is simple and bypasses a lot of complexity. Which is a significant portion of why forbidding unwinds from normal synchronous drop is being considered. ↩︎ ↩︎

(NOT A CONTRIBUTION)

I think it would be better to think this through in terms of introducing a new concept of "async unwinding," and all of these assumptions you made here about what it would be like (e.g. that panicking would not return true) would not necessarily hold. I would imagine that an async unwind would move the future into the state to begin unwinding, and the executor would be expected to drive it to completion as a part of catching the unwinding.

The question is whether or not its possible to add that in a backwards compatible manner, so that in a scope where async-unwinding has not been set up through some new API, unwinding proceeds in its normal synchronous fashion, but if you are in a scope in which the async unwinding has been turned on, unwinding instead enters the async unwinding path and the executor drives you to completion using the same mechanism it drives the future to completion normally.

I'm not sure this is possible in a backwards compatible manner, but I think this is the thing to consider and sketch out. If it is possible, it should be confirmed whether or not it involves changes that would block adding async drop. If not, probably it could be pushed back until after async drop ships.

All async drop types need to have a synchronous drop that guarantees resources aren't leaked, because they can't assume they'll be dropped from an async binding. So it's fine if they use that path at first when unwinding. Then later when the ability to use async unwinding is added, executor libraries can release new versions that use async unwinding, improving the performance of unwinding in their libraries.

If this isn't possible to do backwards compatibly, then we'll just have to accept that unwinding is slow, which would be a pity.

3 Likes