Wanted: A way to safely manage scoped resources in async code

I'd like to open a conversation on scoped resource management.

For the sake of this discussion, I define "scoped resource management" as enforcing that a resource is always released at a predictable position in the code. This is useful in many cases, including cleaning up lower-level resources (e.g. closing file handles), cleaning up sensitive data, transaction-like behavior, and more generally ensuring that a protocol is respected.

Sync Rust

In sync code, there are at least two simple ways to manipulate scoped resources. While neither is 100% safe (e.g. against double-panics or power failure), both are common idioms.

RAII

struct Resource { ... }
impl Resource {
  pub fn new(...) -> Self {
    // Acquire resource
  }
}
impl Drop for Resource {
  // Cleanup resource, even in case of (single) panic.
}

With-style manipulation

impl Resource {
  pub fn with<F, T>(cb: F) -> T
    where F: FnOnce(&Resource) -> T // + UnwindSafe, if you want to be sure
  {
    // Acquire
    // Call `cb` (possibly catch panics)
    // Release
  }
}

Async Rust

In today's async Rust, as far as I can tell, there is no way to achieve the same result.

RAII

Well, Drop cannot execute async code, and it's not clear to me whether an async-friendly Drop can be designed, so for the time being, RAII is most likely out of the question.

With-style manipulation

Attempting to convert the sync version of with into something async quickly grows complicated (full code)...

impl Resource {
  pub fn with<F, T>(cb: F) -> T
    where F: for<'a> KindaAsyncClosure<'a, Resource, T>
  {
    // Acquire
    // Call `cb` (hopefully catching panics)
    // Release
  }
}

trait KindaAsyncClosure<'a, T, U> {
    type Future: Future<Output = U> + Send + 'a /* + UnwindSafe */;
    fn call(self, r: &'a T) -> Self::Future;
}

impl<'a, F, Fut, T, U> KindaAsyncClosure<'a, T, U> for F
where
    F: FnOnce(&'a T) -> Fut,
    Fut: Future<Output = U> + Send + 'a /* + UnwindSafe */,
    T: 'a
{
    type Future = Fut;
    fn call(self, r: &'a T) -> Self::Future {
        self(r)
    }
}

(thanks to jplatte for the help designing this)

...and cannot be used with closures as far as I can tell.

let result = Resource::with( |resource| async { format!("I have a resource {:?}", resource) }).await;

results in

implementation of `KindaAsyncClosure` is not general enough
`[closure@src/main.rs:36:34: 36:98]` must implement `KindaAsyncClosure<'0, Resource, std::string::String>`, for any lifetime `'0`...
...but `[closure@src/main.rs:36:34: 36:98]` actually implements `KindaAsyncClosure<'1, Resource, std::string::String>`, for some specific lifetime `'1`

The same error message shows up if we convert the closure returning an async block into an async closure, so as far as I understand, async closures will not solve the problem.

So what?

As far as I can tell, this is a hole in Rust that blocks some real applications and cannot be solved by a library.

I do not have a solution to offer, but I'd be glad to see ideas on the topic.


This appears to be a bug?

If this specific issue is solved, this will definitely be a step forward.

Note that I do not expect a reasonable developer to come up with a type like KindaAsyncClosure on their own, so maybe this could be added to the futures crate - if that proves sufficient.

  • there is a lazy-normalization type bug in the Rust compiler that prevents the usage of non-trivial bounds, hence the current need for the helper trait, as well as the "need", very often, to box the futures; this is annoying / cumbersome, but ultimately something that will be solved, so there is no strict need to ask for a new feature; that being said, a convenience library (with a macro, probably) to smooth things over in the meantime would be a nice idea.

    https://docs.rs/with_locals already offers a lot of sugar for the continuation / scope pattern in sync code, and I have been experimenting a bit with async; what you mention ought to be possible modulo hard-coding the trait bounds on the future and return value (again, because of those lazy-normalization bugs).

    • (in that regard, the continuations could be taking a &mut reference, just to give the caller a bit more power; unless you're worried about mem-swapping shenanigans).
  • You may notice here that if cb returns a Future<Output = T>, and your with function returns a T, then either with needs to be async, or with needs to use an executor to drive that future to completion.

    • The latter has the advantage of indeed guaranteeing reliable destruction at a moment, in the control flow, that is controlled;

      It does have the drawback of making that become a blocking operation.

    • The former has the advantage of playing nicely with async, but it has "the drawback of being async": what I mean by that is that an async block is a state machine / generator / lazy producer, and that if it is not polled, then code such as the one in // Release may never be reached!

      • If the // Release code / drop glue is non-async in and of itself, then a scopeguard after the // Acquire and before the very first .await would "suffice", in practice, since even though a Future may not be driven to completion, it will, at the very least, very often, be dropped. But since that is not even guaranteed (the future may be leaked (literally, given the Pin contract)), we are kind of back to square one.

      • Otherwise (if the drop glue is async), you will need, again, to use some kind of runtime-related functionality to drive that code to completion (you could do that within a Drop impl / scopeguard, for instance, to transform the async drop glue situation into the previous bullet, but then you have the potential issue of dropping the future after the runtime has been shut down).

    All in all, I think that I have "proven" that the async fn is actually an idea with way more pitfalls than it is worth, and so the function ought to be a non-async fn. That does not mean it has to be "sync": I think that a good compromise would be to enqueue the async fn-equivalent future / task onto the runtime threadpool, with a "leaked handle", we could say, so as to return immediately. You'd lose the option to know exactly where in the control flow the resource release happens, but you'd still get a pretty good guarantee that the resource will be cleaned up, eventually.

    The only remaining pitfalls are the "runtime shutdown" issues; each runtime may have different requirements / tradeoffs in that regard: this does hint at requiring the specific async runtime to provide such a spawn_and_try_to_guarantee_eventual_execution kind of function, should that ever be possible (obviously infinite loops (including deadlocks!) and aborts are something the runtime cannot guard against, but if we forget about those, maybe the runtime could provide such a tool). Note that if it did, and if such "cleanup" logic spins indefinitely itself, or re-enqueues a "guaranteed eventual execution" task, ad infinitum, we could end up in the situation where a program that has finished running keeps spinning on its cleanup task.

All in all, I think the whole "provide a cleanup code snippet to be run, eventually" approach also has its many pitfalls, even when using a runtime to run it in the background.

I think that when people need specific code to be run before other code executes, then, in a way, running the release code is how you acquire a state whereby the resource has been released. In other words, rather than implicit destruction, you could look into "explicit destruction", and use "proof of release" tokens to build on that. You'd then have the full strength of the type system showing you that if the follow-up code is ever run, it will run after having cleaned up the resource. That's where you could be using callbacks, not to add epilogue code, but just to enforce linear types using the type system:

mod lib {
    /// Invariant lifetime.
    type Id<'id> = ::core::marker::PhantomData<fn(&()) -> &mut &'id ()>;

    pub
    struct LinearType<'id> {
        …,
        _id: Id<'id>,
    }

    pub
    struct ProofOfRelease<'id, Payload = ()> {
        payload: Payload,
        _id: Id<'id>,
    }

    impl<'id> ProofOfRelease<'id> {
        // Convenience function to allow the token to wrap an actual return value from the cb.
        pub
        fn returning<R> (return_value: R)
          -> ProofOfRelease<'id, R>
        {
            ProofOfRelease {
                payload: return_value,
                _id: Default::default(),
            }
        }
    }

    impl<'id> LinearType<'id> {
        pub(self) // private!
        fn new (…)
          -> Self
        { … }

        pub
        /* could be `async` */
        fn release (self: LinearType<'id>, …)
          -> ProofOfRelease<'id>
        { … }

        pub
        fn with_new<R> (
            …,
            // Or impl `KindOfAsyncClosure` with an `async fn with_new`.
            scope: impl FnOnce(LinearType<'_>) -> ProofOfRelease<'_, R>,
        ) -> R
        {
            scope(LinearType::new(…)).payload
        }
    }
}

And there your code could use the ProofOfRelease<'resource_id> to know that if it is reached (again, an async fn may not be driven to completion, and a panic could also lead to an early interruption of the control flow), then the resource has been released.


You may even try to combine both approaches (panic-resistant callback epilogue, and drop token with generative lifetime identification) so as to not only feature code paths that are provably run after the release, should the release happen, but also make a best effort to ensure the release does happen (though, again, you'll never have a 100% guarantee of that).

