[Post-RFC] Stackless Coroutines

I am creating this thread for continuing the discussion of the Stackless Coroutines RFC.

Regarding yield returning tuple of args passed to subsequent invocations

That just seems like a very weird and hard-to-teach mechanic. I think that a coroutine should return an anonymous type implementing Iterator. I’m also a fan of the ||* {} syntax for coroutines, although I know it’s not backward compatible. I will be using ||*: {} here, but I’m not proposing any particular syntax.

Motivating example:

let vec: Vec<_> = (0..3)
    .flat_map(|x|*: { for y in x..3 { yield y; } })
    .collect();
// vec is [0, 1, 2, 1, 2, 2]

I also imagine async functions using syntax similar to ||^ {} and returning T: Future and async coroutines ||^* {} returning T: Stream.

The motivating examples in the RFC described implementing some low level stuff and therefore aren’t very motivating for me as an end user. :confused:

Edit: one problem I noticed with that is that Iterator doesn’t have a “complete” value, which is kind of unfortunate, because even std has to emulate it in some places, like where it iterates over Results and the Err variant signals completion with an error. I see three possible solutions:

  1. Simply don’t allow returning a value from a coroutine. Only allow yield and valueless return. Make the ? operator yield an Err and then return.
  2. Add a Final/Complete/Return type parameter to Iterator which defaults to (), plus a done() method which returns Option<Self::Final>.
  3. Make a new trait Generator<Item, Final> (a rough sketch follows below).
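
To make option 3 concrete, here’s a minimal sketch of what such a trait could look like (all of the names are placeholders I made up, not anything from the RFC):

// Hypothetical trait that separates yielded items from the final value.
trait Generator {
    type Item;
    type Final;

    // Either yields the next item or finishes with a final value.
    fn resume(&mut self) -> GeneratorState<Self::Item, Self::Final>;
}

enum GeneratorState<Y, F> {
    Yielded(Y),
    Complete(F),
}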

Edit2: More examples. Creating a stream that emits numbers 0-9 every 100ms.

// current
let numgen = stream::iter((0..10).into_iter().map(|i| Ok(i)))
    .and_then(|i|
        timer.sleep(Duration::from_millis(100))
        .and_then(move |_| Ok(i))
    );

// using async coroutine
let numgen = ||^*: for i in 0..10 {
    await timer.sleep(Duration::from_millis(100));
    yield i;
};

Coroutines do implement Iterator. Your example may be written as

let vec: Vec<_> = (0..3)
    .flat_map(|x| || { for y in x..3 { yield y; } })
    .collect();

That doesn’t fix the problem with yield but I guess we can live with it.

Another thought I had: what will ? mean inside a coroutine? If it returns Err (instead of yielding), then it automatically makes such a coroutine incompatible with the Iterator trait, because it has a return type other than ().

I really wish that Iterators had a Final item type which defaults to () but automatically turns into Result<(), E> ;(

let files = ["a.txt", "b.txt"];
let lns = files.iter().flat_map(|f| ||
    for ln in File::open(f)?.lines() {
        yield ln?;
    }
);
for ln in lns {
    // where do the errors go?
}

I must have misunderstood what the problem was, then?

That it has a type ... -> CoResult<Result<T,E>, ...>. For iterators that would map to Iterator<Item=Result<T,E>>.

I’m pretty sure it would be CoResult<Yield=T, Return=Result<(), E>>. If we want to map it to Iterator<Item=Result<T, E>>:

impl<T, Y, E> Iterator for T
where T: Fn() -> CoResult<Yield=Y, Return=Result<(), E>>
{
    type Item = Result<Y, E>;
    fn next(&mut self) -> Option<Self::Item> {
        match self() {
            CoResult::Yield(item) => Some(Ok(item)),
            CoResult::Return(Ok(())) => None,
            CoResult::Return(Err(error)) => {
                // how do we first return error and then none?
                return Some(Err(error));
                return None;
            }
        }
    }
}
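
One way to square that circle (just a sketch, not anything from the RFC) is to wrap the coroutine in an adapter that remembers whether it has finished, so next() can hand out Some(Err(e)) once and then None on every later call. CoResult is written here with plain type parameters:

enum CoResult<Y, R> {
    Yield(Y),
    Return(R),
}

// Hypothetical adapter; `Co` stands in for whatever concrete coroutine type
// the compiler ends up generating.
struct IterAdapter<Co> {
    co: Option<Co>, // becomes None once the coroutine has returned
}

impl<Co, Y, E> Iterator for IterAdapter<Co>
where
    Co: FnMut() -> CoResult<Y, Result<(), E>>,
{
    type Item = Result<Y, E>;

    fn next(&mut self) -> Option<Result<Y, E>> {
        let co = self.co.as_mut()?; // already finished: keep returning None
        match co() {
            CoResult::Yield(item) => Some(Ok(item)),
            CoResult::Return(Ok(())) => {
                self.co = None;
                None
            }
            CoResult::Return(Err(error)) => {
                self.co = None; // the next call hits the early return above
                Some(Err(error))
            }
        }
    }
}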

Whoops, you are right! Shame on me: ? does a return on failure, not a yield, of course!

I think there needs to be some syntax for declaring coroutines as moveable or not, since this is part of the interface contract. If a coroutine is declared moveable, then I guess either references to stack values are disallowed (across yields), or they cause automatic boxing of the referenced values.

I think the obvious syntax would be -> impl Generator/Fn/Whatever + ?Move, if we add a Move trait that is implied by all generics (by default, with opt-out).

The interface contract doesn’t matter inside a function (as per previous decisions), so encoding it in the exported interface itself makes sense to me.

2 Likes

I’m not sure how directly relevant it is to the present discussion, but I’d like to point everyone thinking about Rust and asynchrony at the essay “Some thoughts on asynchronous API design in a post-async/await world” by Nathaniel Smith. It’s talking in terms of Python 3 and its asyncio library, but I think there’s a general design lesson in there.

2 Likes

tl;dr: I’ve implemented an initial version of #[async] and await! and it’s going great. I’d love to help keep this discussion moving forward so this can land in nightly compilers, where others can easily provide feedback.


Hello everyone! It seems the discussion here has sort of stalled but I wanted to see if I could help breathe some new life into it. I’ve been thinking recently how async/await syntax would potentially massively help the usage of the futures crate, and the stackless coroutines/generators that we’ve been talking about here are absolutely perfect for this! In that sense I was curious what was needed for an MVP here. Something that doesn’t do 100%, may still have some unresolved questions as it’s unstable, but is solid enough to experiment with.

One of the first things I noticed is that, in @vadimcn’s excellent summary of the contentious points on his RFC, most of those points don’t actually apply to an async/await MVP! Recall that the points here are about coroutines, not async/await, but to review each in summary:

  • asynchronous streams - while relevant for a full implementation, we can likely punt on this for a “let’s get our feet wet” scenario. This should definitely be considered before stabilization, but probably doesn’t impact the core async/await system much.
  • the self-borrowing problem - ok this is a big issue, but for now my thinking is that an “MVP quality async/await implementation” punts on this entirely. The compiler errors on any borrows active across yield points.
  • traits - doesn’t actually matter for async/await! No matter the trait construction for coroutines/generators, async/await will have a translation for it both ways.
  • declaration syntax - also doesn’t matter for async/await! Users would only interact with async/await, not the underlying coroutine/generator (in theory)
  • top level generator functions - like above, highly relevant for coroutines but less so for async/await if async/await just uses coroutines as a building block
  • coroutines vs generators - as you can imagine, doesn’t matter too much for async/await! The important part is compiler-generated state machines which is what’s happening here.

So, of all the reasons that RFC 1823 was closed, it turns out that most of them don’t apply to an MVP of async/await! This got me thinking about how difficult it would be to prototype an implementation of this RFC just to see how far we can get with an async/await MVP. The intention here was just to toy around with async/await, see how much it can help, and use it as data to evaluate how best to move forward with the coroutines/generators story.

It turns out that @Zoxc was already way ahead of me and has already implemented most of generators! I learned recently that @Zoxc’s branch already implements a ton of them. The implementation is not the one described in RFC 1823, but it is similar in some respects. My takeaway, though, was that @Zoxc has done some awesome legwork implementing the bare fundamentals of generators (e.g. MIR translation, type checking, etc.). Much of this work should be usable in any implementation of coroutines/generators, and I was quite eager to start developing async/await on top of it almost immediately!

I worked with @Zoxc to help identify some remaining ICEs on his branch, and the result is an initial version of an async/await implementation for Rust. This implementation only works on @Zoxc’s branch (i.e. it requires generators), uses the proc_macro feature to implement #[async], and uses other unstable features like conservative_impl_trait and use_extern_macros for some zero-cost and ergonomic goodness. You can find more info about this in the README.
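
To give a feel for the shape of it, here’s a minimal sketch along the lines of the README; the exact feature names and this add_async example are mine, so treat them as illustrative rather than as the crate’s documented API:

#![feature(proc_macro, conservative_impl_trait, generators)]

extern crate futures_await as futures;

use futures::prelude::*;
use futures::future;

// `#[async]` rewrites the body into a generator-based state machine that
// implements `Future<Item = u32, Error = ()>`; `await!` suspends on each
// inner future instead of blocking the thread.
#[async]
fn add_async(a: u32, b: u32) -> Result<u32, ()> {
    let a = await!(future::ok::<u32, ()>(a))?;
    let b = await!(future::ok::<u32, ()>(b))?;
    Ok(a + b)
}

fn main() {
    assert_eq!(add_async(1, 2).wait(), Ok(3));
}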


Ok, so that’s a lot to digest! Where to go next? From what I’ve learned so far I’ve concluded:

  • Generators/coroutines work perfectly for async/await, and async/await in a “usable” form is possible even today with that implemented!
  • @Zoxc’s branch looks to be a pretty solid implementation of generators. I haven’t done a lot of correctness testing yet, but I also don’t have any known bugs, and the good chunk of sccache I’ve ported to async/await is still compiling LLVM correctly!
  • A “production ready” implementation of async/await seems much nearer than originally thought. I’m thinking that this may change the calculus around decision making here.

So in general I’d like to help spur along discussion here and see if we can reach a consensus on how to move forward. My focus here is mainly on getting an initial implementation of generators/coroutines landed in the compiler on nightly. Now this is a pretty major feature, and landing something in the compiler ends up having a lot of inertia, so we need to be sure to tread carefully. I’m hoping, though, that we can get all stakeholders on board.

My thinking is that we can land, likely as a form of @Zoxc’s branch, an implementation of generators which is not set in stone but has the major design decisions taken care of. For example the type checking and methodology of defining a generator/coroutine would probably want to be mostly hammered down and agreed on, but the precise syntax and/or extra syntax sugar can probably wait for later. Another example of this is that APIs as they relate to libstd can probably mostly be glossed over at this stage. I’d naively assume, at least, that most compiler implementations would be easily translatable to “similar traits” or similar protocols for invocation, so whether we call a method foo or bar probably isn’t so relevant for an initial implementation.

So in order to move forward, here’s what I think we should do:

  • Identify key features in @vadimcn’s RFC 1823 that block landing an initial implementation.
  • Identify key differences between @Zoxc’s branch and @vadimcn’s RFC, working towards a resolution here
  • Work with the compiler team to ensure it’s ok to move forward here
  • Start hacking away at async/await caveats!

What do others think? Am I trying to push this way too quickly? Are we still far away from consensus to put something in the compiler? Curious to hear others’ opinions!

37 Likes

Thank you so much. I’ll definitely be playing with this.

I understand my post here might be out of place, especially since what I’m going to say doesn’t offer any meaningful critique of the issues that have come up in the related RFCs. I’m sure you (and everyone involved) are already aware, but I just wanted to make it extra clear so you know how much having access to async-await means. It’s been the single feature that, for years, has stopped me, my company, and a lot of people I know from pervasively using Rust every day.


I can’t begin to express how important this is for me. I don’t think async and await can come fast enough.

Of all the ergonomics and language improvements proposed for Rust, async/await is the biggest, and the only showstopper that prevents me (on a personal level), and also the entire company where I work, from pervasively adopting Rust and using it in every new project we touch. It keeps us from going all-in on Rust, because of the pain of writing callback-based code (whether through callback passing or Future::and_then).

I don’t mean to say that the other Rust language improvements that are happening now aren’t important. They are, and I’m looking forward to them too. But the pain of not having them is nowhere near the pain of not having async-await.

For small, single-purpose programs, callbacks and Future::and_then are readable and can be relatively clean. I’ve used futures extensively when writing TCP proxies, BGP routers, and other programs whose purpose can be described in one line. That’s OK.

But in large, complex services (100k–1M lines) where there’s a lot of logic driving a bunch of different services, each serving a thousand (or even only one hundred) connections, the async/await sugar is so powerful that we are reluctantly “forced” to fall back to C++ (co_await), C# (await), and Go (native).


Right now, translating a 100-line await-style function to Tokio means a lot of spaghetti, nesting, and boilerplate (especially when returning futures allocation-free where the branch types don’t match, as in an if precondition { return } else { happy path }). An otherwise simple function that would stay simple in an await-supporting language quickly turns into hundreds of multiply indented lines, especially if you try to add loops and retries.
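
To illustrate the branch-type problem for readers who haven’t hit it: here is a small sketch of the futures 0.1 pattern, using future::Either so both arms share one concrete type without boxing (the function name and values are invented for illustration):

extern crate futures;

use futures::Future;
use futures::future::{self, Either};

// Both arms must produce the same concrete future type, so even a trivial
// precondition forces an Either (or a Box) around each one.
fn fetch_if_allowed(allowed: bool) -> impl Future<Item = u32, Error = &'static str> {
    if !allowed {
        return Either::A(future::err("not allowed"));
    }
    Either::B(future::ok(41).map(|n| n + 1))
}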

The alternative is to write code that spawns a thread per task, but even with only one hundred concurrent tasks, the cost of context switching quickly matches the cost of the work itself (!)

Finally, it is my opinion that once we have async-await, only then will we begin to see a sprawl of high-quality crates that can begin to be used in networked environments where thousands of operations are run every second. Right now, lots of libraries block, and that’s an instant disqualification for any environment where concurrency and latency are important (which is to say a lot).


I completely understand that the community and core teams want to get this right, but I’d definitely prefer to see an MVP in the compiler today (under a flag), rather than five months from now.

17 Likes

You’re right. async/await is definitely very important to making futures ergonomic and easy to use. I think probably most people just didn’t expect that we could have a working implementation this quickly.

@alexcrichton makes a good point that the async/await syntax can be expanded to any underlying generator system we use. I wonder if we couldn’t move very aggressively on async/await - making them available behind a feature gate on nightly before we even settle on what API to expose as generators (that is, you can’t even use a feature gate to get to the underlying generator API).

8 Likes

Thanks for the thoughts @annie! One thing I’d be very curious to hear about is whether the futures-await crate as-is lives up to your expectations of what async/await in Rust would look like. I’m always worried that the caveats listed in the README are quite large (especially the borrowing one) and can limit the impact of having a full-fledged async/await implementation. I’ve personally not found them too limiting (modulo the error messages which should get better with compiler support) so I think the implementation as-is with caveats can have a huge impact, but always good to get others’ opinions!

For me at least, knowing the degree to which the current implementation (even with its known limitations) solves existing problems is super helpful; namely, it helps prioritize work and stabilization by showing which subset has the most impact.

I agree with @withoutboats that it’d be awesome to see if we could stabilize some things before others to get this out to stable, but I haven’t personally gotten that far in my thinking process yet, I’m just hoping to get it into nightly right now :slight_smile:

2 Likes

I am very excited to see this land and to see experimentation! My one concern is that I really want to avoid a scenario where we wind up with a push to stabilize without a clear RFC and ultimate specification. This seems like a good place to use a more aggressive experimentation process of the kind we’ve talked about, and I think a key part of that process has to be that we are working towards a “proper” RFC (in fact, for most big RFCs I think we wind up doing a lot of impl work up front anyway, it just doesn’t always occur on master behind a feature-gate).

To that end, I propose a few things:

  • let’s make a big effort to make a comprehensive test-suite with good coverage of corner cases and so forth
    • the tests should be grouped into src/test/{run-pass,compile-fail,ui}/generators/
    • I would include for example the “bad error messages” that @alexcrichton referred to as corner cases; I’d love to see ui tests for those scenarios so that we can track and try to improve them
  • I think the ideal would be if there were people interested in working on the RFC in parallel, and trying to keep it up to date as things evolve (perhaps at some lag, but not too much)
3 Likes

I’m super keen on helping to flesh this out and contributing to an RFC, though I’m not exactly sure how to go about doing that or how it’s coordinated. It seems that a lot of discussion takes place outside of GitHub and the discussion forum; it’s always felt a little inaccessible and hard to approach Rust Internals when you’re not already involved.

[quote=“alexcrichton, post:14, topic:5021”] One thing I’d be very curious to hear about is whether the futures-await crate as-is lives up to your expectations of what async/await in Rust would look like. I’m always worried that the caveats listed in the README are quite large (especially the borrowing one) and can limit the impact of having a full-fledged async/await implementation.[/quote]

I think it’s a great initial demo. I spent a bit of time on it last night and I have to say that I absolutely love it! Even in its current incarnation, it’s a huge improvement over writing futures-based code.

Personally, I don’t think the current set of limitations hinder the exploration of an await implementation. They’re unfortunate, but they’re easy to work around. We will need cross-suspend-point borrows and self borrows before people start using it, but I’m glad that we have an implementation at all right now.

When self borrows are required, I’m using the fn(this: Box<Self>|Rc<Self>) pattern, which is exactly the same pattern that’s required for a lot of futures code.

I’m going to spend next week porting an experimental gRPC service over to this. It was previously using a modified compiler that hackily transformed await-like code into an (unsafe) state machine, so I think it should be more or less straightforward.

6 Likes

I meant making it available on nightly without making generators available. Perhaps I'm being overly conservative, but making any API for generators available on nightly obligates us to it somewhat because people will start using it; I want to avoid even making that commitment to any particular generator API. Async/await on the other hand is a much smaller space of possibilities.

Basically, I'd like to see if we could make async/await available on nightly ASAP, so casting off every commitment we possibly can seems best to me.

Yeah I definitely sympathize with this feeling, and I'm sorry it feels this difficult to help contribute here! We pride ourselves on making Rust as easy as possible to contribute to, and this needs to encompass everything, including design!

I myself am at a bit of a loss as to how to best make progress here; I was hoping this discussion could reignite interest and push things along to a conclusion :). In that sense I think the best way to contribute right now is to help provide data for answering open design questions. For example, the answer to the question "to be usable, does this need borrowing across yield points?" seems to be "no", but confirmations of that are always good! For other, more low-level questions about the generator implementation itself I'm hoping @Zoxc or @vadimcn can chime in to help drive discussion here.

Oh yeah so this is actually something else I ran into pretty quickly. If you've got a trait like:

trait MyStuff {
    fn do_async_task(??self??) -> Box<Future<...>>;
}

It's actually quite difficult to use this! Right now there's a bunch of caveats:

  • Ideally you want to tag this #[async] but this is (a) not implemented in the procedural macro right now (it doesn't rewrite trait function declarations) but also (b) it doesn't work because a trait function returning impl Future is not implemented in the compiler today. I'm told that this will eventually work, though!
  • Ok so then the next best thing is #[async(boxed)] to return a boxed trait object instead of impl Future for the meantime. This still isn't actually implemented in the futures-await implementation of #[async] (it doesn't rewrite trait functions) but it's plausible!
  • But now this brings us to the handling of self. Because of the limitations of #[async] today we only have two options, self and self: Box<Self>. The former is unfortunately not object safe (now we can't use virtual dispatch with this trait) and the latter is typically wasteful (every invocation now requires a fresh allocation). Ideally self: Rc<Self> is exactly what we want here! But unfortunately this isn't implemented in the compiler :frowning:

I think this is actually something I'll add to the caveats section of the README, it seems like if you're using traits returning futures you'll run into this very quickly and unfortunately there's no great answer today. In sccache I got lucky because the trait didn't need to be object safe, so I just used self and cloned futures a bunch.
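
To make the self trade-off concrete, here's a rough sketch of the two shapes that do work today; the trait and method names are invented for illustration:

extern crate futures;

use std::io;
use futures::Future;

trait MyStuffBoxed {
    // Object safe, so `Box<MyStuffBoxed>` works, but every call must first
    // move `self` into a fresh heap allocation.
    fn do_async_task(self: Box<Self>) -> Box<Future<Item = (), Error = io::Error>>;
}

trait MyStuffByValue: Sized {
    // No allocation just to own `self`, but `Self: Sized` rules out trait
    // objects, so no virtual dispatch.
    fn do_async_task(self) -> Box<Future<Item = (), Error = io::Error>>;
}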

2 Likes

I just want to leave my opinion here. I’ve seen the fn* syntax floating around to denote that a function is a coroutine/generator. I was wondering if we could use something like gn instead. For example

gn range(start: usize, end: usize) -> usize {
    for i in start..end {
        yield i;
    }
}

That way we’ll still know it’s a generator function, and it will have a nice symmetry with the fn keyword.

3 Likes

fn is to function as gn is to generator.

2 Likes