Pre-RFC: Allowing async/await in no_std

The current implementation of async fn and await! depends on thread_local! as a work-around for what appears to be a fundamental shortcoming of the existing generators implementation; this dependency prevents these features from being used in #![no_std] projects.

An async fn is currently implemented as a generator that yields whenever it needs to await!. This generator takes no arguments and produces a final result as one would expect.

When the generator is resumed, there is no mechanism to pass the future context into it, and thus no means by which the await! macro can obtain the context with which it should poll its future. This appears to have been a deliberate choice so that an async fn can be written without any explicit mention of a context, for better ergonomics.

The current implementation works around this by using a thread_local! pointer to the topmost context on the call stack, which the await! macro can then read. This alone prevents async/await from being used in #![no_std] projects.
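Roughly, the work-around has the following shape (hypothetical names and details, not the actual libstd internals); note that thread_local! is exactly what ties it to std:

use std::cell::Cell;
use std::ptr;
use std::task::Context;

// Hypothetical sketch: a thread-local slot holding a pointer to the
// context of the task currently being polled.
thread_local! {
    static CURRENT_CX: Cell<*mut ()> = Cell::new(ptr::null_mut());
}

// The outer `poll` would wrap each resumption of the generator in this,
// and `await!` would (unsafely) read the pointer back to get a context.
fn with_context<R>(cx: &mut Context<'_>, f: impl FnOnce() -> R) -> R {
    CURRENT_CX.with(|cell| {
        let old = cell.replace(cx as *mut Context<'_> as *mut ());
        let result = f();
        cell.set(old);
        result
    })
}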

The current solution seems to be an indicator of the shortcomings of the existing generator implementation: the thread-local pointer is doing work that the call stack already achieves, and the current semantics of yield do not entirely match the problem they were built to solve.

A coroutine yields when its progress becomes dependent on a resource that is currently unavailable. The current semantics of yield prevent any information from being passed back into the generator when that resource becomes ready.

Some existing suggestions propose bi-directional moves on yield. However, the input and output types at every yield point in a generator would need to be identical, with a known size and a static type (assuming no alloc), which would be neither ergonomic nor practical. It would also require a generator to have a different entry-point for being started than the one used to resume it.

Instead, I would propose a change to the internal semantics of borrows within a generator. Currently, borrows of data whose lifetime is bound by the generator instance are not allowed to live across a yield statement, so as to ensure they remain valid (I suspect this is because the entire generator, and thus the variables within its context, can be moved between the yield out of the generator and its resumption).

I think that this is perhaps unnecessarily strict and could instead be replaced with a rule stating that only borrows of data with a lifetime not bound by the generator may exist when a generator is resumed. This would imply that borrows may exist after a yield. Furthermore, it would allow a reference to a value with a lifetime bound by the generator to be yielded (the existing borrow checker rules should enforce that the generator cannot be resumed until all such borrows are dropped).

This would allow for a substantially more useful implementation of await! that could be used without any dependency beyond libcore. await! would wrap its future so that the result is stored inside the generator rather than escaping to the caller, and would yield a reference to the wrapped future. The implementation of poll for an async fn would then expect a trait object &mut Future<Output = ()> whenever the generator yields, and would poll that future to completion in place of the generator (the final result of that future would end up being stored within the generator itself, removing any need for dynamic allocation). After completing the intermediate future, poll would resume the generator, which would then use the result of the future that had just been completed.
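As a rough sketch of the wrapper that await! would build (illustrative naming, written against today's Pin-based std API rather than the PinMut one used further down, and with the pin projection hand-waved), the wrapped future stores its output inline so the value never leaves the generator's state:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Stores the wrapped future's output inline instead of handing it to the caller.
struct StoreOutput<F: Future> {
    future: F,
    output: Option<F::Output>,
}

impl<F: Future> StoreOutput<F> {
    fn new(future: F) -> Self {
        StoreOutput { future, output: None }
    }

    // After the wrapper has reported `Ready(())`, the generator reads the
    // value back out without it ever having left the generator's state.
    fn take_output(&mut self) -> Option<F::Output> {
        self.output.take()
    }
}

impl<F: Future> Future for StoreOutput<F> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // Unchecked pin projection for brevity; a real implementation would
        // need to uphold the pinning guarantees properly.
        let this = unsafe { self.get_unchecked_mut() };
        if this.output.is_some() {
            return Poll::Ready(());
        }
        match unsafe { Pin::new_unchecked(&mut this.future) }.poll(cx) {
            Poll::Ready(value) => {
                this.output = Some(value);
                Poll::Ready(())
            }
            Poll::Pending => Poll::Pending,
        }
    }
}

When the generator yields, the async fn's poll would receive this wrapper as a &mut Future<Output = ()>, drive it with the context it already has, and resume the generator once the wrapper reports Ready.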

Edit (2018-09-23): The original content of this post is below. It was changed to better frame the issue and a possible solution.

The current implementation of async/await uses thread-local storage for ease of implementation; however, this precludes the use of the feature in #![no_std] projects.

I’d like to make a pull-request resolving this issue, but it appears there is no actual github issue relating to this. Should I create an issue before making the pull request?

async/await is in sufficiently active development that I would absolutely recommend discussing before putting in implementation effort.

I would suggest chatting with folks on Discord, such as in #wg-net-async, and/or asking on the tracking issue.


Why would this not be ergonomic or practical for the async/await use case? The yielded value is already a static type, and the input value would just be the context argument. I also don’t follow why the generator would need different entry points: it will always need the context available, so it will always have that as an input argument at every entry point (unless by “started” you mean “created”, but that’s already a different function from the resume call).

Not in immovable/self-borrowing generators, which are what async/await are built on top of.

For a bit more context, here's what I found when looking into solving the no_std async/await issue some time ago:

The original implementation of generators included arguments; these were dropped to reduce the surface area and because they were not needed for async/await with the futures 0.1 API. They worked by having a “keyword” gen arg (I may have spelled this wrong, it was a long time ago that I read through the PR), which meant you could do something like

let mut foo = || {
    let a = gen arg;
    yield ();
    let b = gen arg;
    return a + b;
};
assert_eq!(foo.resume(5), GeneratorState::Yielded(()));
assert_eq!(foo.resume(6), GeneratorState::Return(11));

This was before self-borrowing generators existed, so you could not keep a borrow of the argument across yield points. I think the only change needed to support that is that the type of gen arg would have to have an artificial lifetime added, so that it could not be borrowed across yield points while still letting the real internal state be borrowed.


Personally I dislike the gen arg “keyword”; I think it’d be simpler to keep a syntax closer to normal closures and just have “magic” variables that can change value across yield points. This makes type annotation easier, and it could use tuple unpacking like the Fn traits to support multiple arguments:

|x: i32| {
    let a = x;
    yield ();
    let b = x;
    return a + b;
}

But this is fundamentally the same as what was implemented.


As far as I know this is enough to support async/await taking a context argument (although, thinking about it now, I wonder whether the HRTB needed would fit into it, or whether it would have to do the same lifetime erasure the current TLS implementation needs).

Despite async functions being immovable, the semantics of yield don't currently seem to reflect the lifetimes of variables within a generator all that well. The following, for example, isn't allowed because the reference produced before the yield cannot be guaranteed to be valid after it.

// Written in terms of the generator an async fn desugars to; the yields
// stand in for the points where `await!` would expand to a yield.
async fn foo(x: i32) -> i32 {
    let a = &x;
    yield ();
    let b = &x;
    yield ();
    *a + *b
}

The solution I am proposing, though, is that an async fn would produce a Generator<Yield = &mut Future<Output = ()>, Return = T> instead of the Generator<Yield = (), Return = T> it produces now. Because the implementation of poll knows about the context, it can simply poll the yielded future, and because the future is yielded by &mut, it can be a trait object whose concrete type is known only inside the generator.

The implementation I have in mind utilising this would be similar to the following.

macro_rules! await {
    ($awaited:expr) => {{
        let awaited = $awaited;
        // Wrap the future so that its result is stored inside the generator.
        let mut awaited = AwaitedFuture::new(awaited);
        loop {
            match awaited.result() {
                Some(result) => break result,
                // Under the current rules, yielding this borrow requires
                // `awaited` to live beyond the scope of the function.
                None => yield &mut awaited,
            }
        }
    }}
}

struct AwaitedFuture<F, T> { ... }

impl<F, T> AwaitedFuture<F, T>
where
    F: Future<Output = T>,
{
    fn new(future: F) -> AwaitedFuture<F, T> { ... }
    fn result(&mut self) -> Option<T> { ... }
}

impl<F, T> Future for AwaitedFuture<F, T>
where
    F: Future<Output = T>,
{
    type Output = ();
    /// Polls the inner future and stores its result internally,
    /// producing a `()` to the caller instead.
    fn poll(self: PinMut<Self>, cx: &mut Context) -> Poll<()> { ... }
}

I propose this as a solution because it would produce the same interface as currently exists, but the above implementation runs into issues regarding internal borrows. With the old lifetime system, the error says that the borrowed item only lives until the end of its enclosing scope, but that yielding it requires it to live beyond the lifetime of the function. With NLL, the error produced is similar.

This is an issue because, if a mutable reference is yielded, the generator cannot be resumed while that borrow still exists. As such, the borrow shouldn't need to outlive the generator, and it should not be assumed to still exist after the yield statement.


Why would this not be ergonomic or practical for the async/await use case?

This is impractical because, in the case of coroutines, you more often want to wait on a different type of value at each yield than on the same type (say I await a future pending file IO and then await a timeout). Because individual yield points are indistinguishable to the caller of the generator, this can only be achieved by generating an additional type covering all of the types expected at the yield points, which requires a far more complex implementation of await! and also requires a runtime check within the generator to verify the type returned from each yield expression.
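As a hypothetical illustration of that objection (the names are made up): if an async fn awaits a file read and then a timer under a single fixed resume-argument type, the compiler would have to generate one enum spanning both, and every resume point would need a runtime check:

// Hypothetical generated type covering every value the generator might be
// resumed with.
enum FooResumeArg {
    FileRead(Vec<u8>),
    TimerElapsed,
}

// What each resume point would have to do: the type system can no longer
// guarantee which variant arrives, so it becomes a runtime check.
fn after_file_read(arg: FooResumeArg) -> Vec<u8> {
    match arg {
        FooResumeArg::FileRead(data) => data,
        FooResumeArg::TimerElapsed => panic!("resumed with the wrong variant"),
    }
}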


The gen arg solution would also work, but it feels a bit too magical to me, as does the even more magical alternative you present. The latter would be particularly baffling to most users due to the quite different semantics of the variable.

I'm not necessarily looking for a means of passing data into a generator at yield points, just a way of using the call stack we already have to link the future context to the future passed to await!, rather than avoiding the stack altogether.

This works fine in both the current async/await and generators implementations. (EDIT: Since literally the very next nightly broke this example, here's an updated playground).


I think this would need GATs to be sound; otherwise there's no way to tie the yielded item's lifetime to the generator borrow.

trait Generator {
    type Yield<'a>;
    type Return;
    fn resume(&mut self) -> GeneratorState<Self::Yield<'_>, Self::Return>;
}

struct GenAsync<G>(G);

impl<G> Future for GenAsync<G>
where
    G: Generator,
    for<'a> G::Yield<'a>: Future<Output = ()>,
{
    type Output = G::Return;
}

With GATs, this does seem like it could be implementable. At first glance it's likely to be much more difficult for the optimizer, though.


But in both cases the actual futures being awaited are internal state of the generator created by the async fn; the API the generator provides while waiting on either future is basically fn(&mut Context) -> Poll. The types I am thinking of are:

trait Generator<Args> {
    type Yield;
    type Return;

    fn resume(&mut self, args: Args) -> GeneratorState<Self::Yield, Self::Return>;
}

struct GenAsync<G>(G);

impl<T, G> Future for GenAsync<G>
where
    for<'a> G: Generator<&'a mut Context<'a>, Yield = (), Return = T>,
{
    type Output = T;
}

As I said before, I'm not entirely confident about the HRTB part of the args parameter, but I believe it should be possible to come up with something like this that would work.
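A rough, illustrative sketch of how the wrapper's poll could forward the caller's context under a trait like this (written against today's std types, with an Unpin bound and a two-lifetime HRTB to sidestep the pinning and invariance questions):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

enum GeneratorState<Y, R> {
    Yielded(Y),
    Complete(R),
}

trait Generator<Args> {
    type Yield;
    type Return;

    fn resume(&mut self, args: Args) -> GeneratorState<Self::Yield, Self::Return>;
}

struct GenAsync<G>(G);

impl<T, G> Future for GenAsync<G>
where
    G: Unpin,
    for<'a, 'b> G: Generator<&'a mut Context<'b>, Yield = (), Return = T>,
{
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        // Hand the caller's context straight to the generator; a yield of `()`
        // means "the future I'm currently awaiting returned Pending".
        match self.get_mut().0.resume(cx) {
            GeneratorState::Yielded(()) => Poll::Pending,
            GeneratorState::Complete(value) => Poll::Ready(value),
        }
    }
}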


You're right, my issue was only with returning a reference; this does seem to be fine.


I do think this lines up better with the way the generator is used, is probably less magical, and would make a lot more sense. (My only real concern is that it might require a bigger implementation effort.) This does seem to be a saner approach overall.

The syntax I would expect this to have, however (given the existing syntax of closures), would probably look something more akin to:

let gen = |arg: A| {
    let y: Y = foo(arg);
    let a: A = yield y;
    let r: R = bar(a);
    r
};

Here A, Y, and R refer to Args, Yield, and Return respectively. I would prefer a syntax like this because resume is always used as an entry point to the generator: the first call begins before the first statement, and every later call re-enters as the result of a yield. yield marks both an exit (for the Yield case) and a re-entry point, while return marks only an exit point (for the Return case). I think this syntax would better reflect the behaviour of such an implementation.

The only issue with a model like this is that, in the case of an async fn, the argument needs an explicit name that await! knows about the first time it is used. await! could instead be implemented to obtain it from a bare yield, but that would involve an unnecessary Pending response at each await point before the future is first polled. Hence, I assume, the motivation for the gen arg alternative.

My concern with gen arg is that it isn't immediately obvious what that syntax is doing, nor is it obvious, without an understanding of the background, why such a strange mechanism for passing arguments is being used.

Just to check my understanding of whether HRTBs would allow passing the context without any lifetime shenanigans, here's a complete example of mapping a (manually implemented) argument-taking generator into a future. I did screw up part of the type above: the Args associated type should be an input type parameter on the trait, and the Poll variants should have been spread across Yield and Return; but after fixing those, everything falls into place pretty nicely.
