An alternative way to choose `future.await()`

I have tried to substantiate several arguments for adopting a special form of enabling `future.await()` in a blog post; it would certainly be too long for direct inclusion here. The idea was mentioned a while ago in the second GitHub thread, but I think it deserves some more attention.

Read it here:

In essence, the most convincing argument for it is that it offers a solution to the confusion around field access, addressing the biggest concern with the current favorite pseudo-field syntax and going beyond space-delimited await. The rest of the post goes into detail on how to provide that solution with minimal additions to the surface language. That is followed by some context on how this achieves additional orthogonality and extensibility at a small cost to language syntax and semantics.


I don’t see this post adding any new information.

We’ve already established that .await looks like a field, but it isn’t one, and .await() looks like a method, but it isn’t quite one either.

For example, await may never “return” if the future is dropped, in a way that’s different from panicking or an infinite loop. To you that’s not an important distinction; to me it’s a deal-breaker that makes it definitely not a method, semantically, regardless of how you dress it up in traits and calling conventions. But that’s not new information, just a difference in how we value different aspects of what we already know.

In the end, consider that awaiting is going to be special either way. The field-like syntax can be explained as the dot operator plus a control-flow keyword. It can mean whatever we define it to mean, and users will learn it.


I don’t completely agree. The method for awaiting with minimal context is:

async fn wait_on<F: Future>(arg: F) -> F::Output;

The use of await as a method, in the style of the trait and syntax from that post, is then exactly to provide that primitive, and to allow only this minimal function to actually make the yield call. This is achieved by giving it special permissions, differentiated through extern "abi" notation, so that the power of an awaiting yield is contained in it and not available to every async fn.

I thought this was clear from the paragraph about the current state of the nightly implementation of what .await expands to, and how it looks like a function in nearly all regards. Viewed through this lens, all async methods have the same story for stack unwinding when calling into other async fn; even the special await looks like any other call to an async method from the outside. If it wasn’t clear, I may have to consider amending the post a bit.

Maybe it would be sufficient to mark it as #[lang = "await"] and the ABI is redundant, but imho it serves as additional clarity. It also seems to have a better story for future additions.
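To make the shape of that primitive concrete, here is a rough sketch. The `Await` trait, the `#[lang = "await"]` attribute, and the extern "abi" notation from the proposal are hypothetical and do not exist on stable Rust; all we can mirror here is the type interface, with an ordinary `.await` standing in for the privileged yield, plus a minimal hand-rolled executor to run it:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Stand-in for the proposed privileged primitive: from the outside it is an
// ordinary async fn. In the proposal, only this item would be allowed to
// perform the underlying yield; here `.await` plays that role.
async fn wait_on<F: Future>(arg: F) -> F::Output {
    arg.await
}

// Minimal no-op waker so we can poll futures by hand on stable Rust.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Tiny busy-polling executor, enough to drive an already-ready future.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // From the caller's perspective, awaiting via the primitive looks exactly
    // like calling any other async fn.
    assert_eq!(block_on(wait_on(std::future::ready(3))), 3);
}
```

The point of the sketch is only that the stable surface of the primitive is an ordinary async fn signature; everything special about it would be hidden behind the lang item.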

I know, but: await may never “return” if the future is dropped, in a way that’s different from panicking or an infinite loop. To you that’s not an important distinction; to me it’s a deal-breaker that makes it definitely not a method, semantically, regardless of how you dress it up in traits and calling conventions.

I did realize this. But even on this unwinding path, the call interface of all async functions is then uniform. The rule would simply be that calling any async fn can lead to a new form of unwinding. Note that Await::await_fn is also marked async fn according to the concept. I’d view this as more consistent, since calling a standard fn within an async fn cannot take this unwinding path.


That’s the blegg-rube problem, or the is-Pluto-a-planet problem. We 100% agree on the objective facts, but not on what to call them:

Probably? Let me try to explain what puzzles me about your classification. You seem to say that await should be treated differently because it can cause a kind of unwinding unlike any other operation. Therefore, I read, it should not look like an async fn.

What’s weird about this classification, to me, is that unwinding continues transitively up the call chain of async functions. Thus even calling any async fn can trigger this new kind of unwinding, because those can eventually await. I would therefore propose shaping the system in such a way that those two operations do not fundamentally differ, since they can lead to the same effect. One such way is to present await in the form of an opaque fn.

  1. Unwinding is optional in Rust. panic=abort is a valid implementation.
  2. Panic is for bugs, not for control flow.


  1. Yielding is an inherent feature of Futures. Not optional, can’t be changed.
  2. Dropping of cancelled futures is a normal, intended behavior.

To me this makes awaiting more like a fancy return foo() rather than foo() panicking, so await-is-a-method sounds like adding traits and ABIs to make return() work.


So you are wondering why we don’t expose an await fundamental operation, and where this differs from other fundamental operations that are exposed as such. That takes a bit of consideration.

The first part of my answer is that, as you noted yourself, await is not the fundamental operation; yield is, and we did not want to stabilize yield on its own but only the composition as await. Hiding it inside a lang item seems like a sensible choice until it can be stabilized together with other features, or possibly never.

Secondly, consider that panic!() is not the fundamental operation after all. It calls rt::begin_panic, not the extern fn __rust_begin_panic, and runs a bunch of code managing message formatting and configuration-dependent behaviour (such as checking for panic=abort). Rather, the substantial benefit offered by rt::begin_panic is to present a stable interface in terms of the type system, and it does that well. The same should in my eyes be true for Await::await_fn, where the stable type interface is async fn itself.
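As a small illustration of that layering, the surface a user actually touches is the panic! macro plus std::panic::catch_unwind; formatting and hook handling happen in library code before anything reaches the low-level runtime entry point. A minimal sketch:

```rust
use std::panic;

fn main() {
    // Silence the default hook so the deliberate panic below doesn't print
    // a message to stderr.
    panic::set_hook(Box::new(|_| {}));

    // The message is formatted by library code (the rt::begin_panic layer),
    // not by the extern runtime hook itself; the stable interface we interact
    // with is the macro and catch_unwind.
    let result = panic::catch_unwind(|| {
        panic!("boom: {}", 42);
    });
    assert!(result.is_err());
}
```

The analogy in the post is then that async fn would play the same role for the awaiting yield that this macro-plus-function surface plays for the panic machinery.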

And lastly,

I perceive this as being in favor of an async fn form of await. For this, let me clarify that the call as it appears at the call site is still considered a keyword; I’m not proposing that you be able to write Await::await_fn(future) explicitly yourself. But reasoning about intended behaviour should be as easy as possible, and that is simplified by not evoking a sense of await itself being special, but rather by applying that thinking to the function form that has a syntactic representation. I think it is otherwise much easier to forget to apply the same reasoning to any async call.


await is the fundamental operation within an “async context”. If we ever get “async generators” as a way of writing Streams then you will be able to use both the await operation and yield operation within an “async generator context” and do different things. (That the async transform is currently a thin layer on top of generators is 100% an implementation detail that should not affect the syntax).

await may be not quite a function call, but I’d say it is more similar to one than it is to field access. If you squint, await is a form of continuation-passing style, and continuations are readily represented in syntax just like function calls.

That said, other continuation-like constructs in Rust – break, continue and return – don’t use function call syntax. The lone, ahem, exception being panic! + catch_unwind.

In what way do these fundamentally differ? yield takes some arbitrary argument while await takes a Future; both deliver a value to some other code (the return value to the awaiter vs. the arguments to the called coroutine). But both operations pause our current execution, to be resumed at a later point with some new value from the environment (possibly, depending on whether the stream takes arguments, a simple ()). We are not guaranteed that the next executed instruction will be at some exact particular point (neither in the caller nor the callee) precisely when the current function is declared async fn, and we do have that guarantee when it is simply declared fn (i.e. anticipating stabilizing plain generators as well).

And I strongly suspect that generators and async generators will also be subject to unwinding from any yield point. How about declaring both operators with the same extern "async-yield" then? The exact form that I simply wrote as extern "await-fn" is open to change. In that regard, it may be even more valuable to make this assertion now.

There’s a terminological problem that appears often which this conversation between Nemo and HeroicKatora epitomizes. We use the term “generator” to mean both the surface-syntax language feature which is currently unstable (and will likely change enormously before we stabilize it), and the MIR-level state machine transformation which underlies both that feature and the async/await feature.

I just want to propose that we try to clarify this distinction:

  • generators are an experimentally accepted language feature, subject to much change. They are a variant on functions that can return more than once, using the yield keyword.
  • coroutines (or stackless coroutines as may be necessary) are the underlying language mechanism by which both generators and async functions are transformed into state machines.

The primitive operation for yielding control from a generator is the yield operator. The primitive operation for yielding control from an async fn is the await operator. Async generators, the logical combination of async fn and generators, provide both operators. Async fns do not “compile to generators;” like generators, async fns compile to state machines using the stackless coroutine mechanism.


To make sure we agree on terminology: when an async generator yields, can it actually be said to await as well, because it both returns more than once and also transfers control out of an async fn?

I don’t know what this means. All three concepts built on coroutines “yield control,” but expose different sets of operators which can yield control. Your use of code highlighting especially confuses me: I’m trying to distinguish between the abstract concept of yielding control from the coroutine and the specific yield operator, which is one of two operators which can yield control (the other being await).

I’ll try to apply it more consistently only to those words where the syntactical meaning is intended.

  • When a generator yields, it yields control (and we advance in a state machine), and then we continue at the caller.
  • When an async fn awaits, it yields control repeatedly until a designated other async coroutine has finished, to obtain its result, and the runtime continues anywhere in the meantime.
  • When an async generator yields, it yields control and supplies a result, and the runtime may continue anywhere, which may not be the caller (unlike a sync generator), who is waiting to be resumed.

The last point seems semantically significant, as it seems to be another operation for yielding control from an async fn, in contrast to

It surely doesn’t help that any classification or term needs to take into account the temporal aspect of whether “is” refers to ‘rust nightly’, ‘rust stable’, or the abstract concept.

Your description of both await and yield in an async generator is incorrect.

await yields repeatedly in a loop, polling the underlying future to completion. However, it yields control directly to the caller, who must poll again to keep making progress toward finishing that loop. It’s very important to understand Rust’s poll-based futures model to discuss the semantics of these operations.

yield in an async generator works exactly the same as in a non-async generator. It yields exactly once, to the caller, with the value passed to the yield operator.
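A hand-rolled sketch may make the poll-based model concrete. The future below (a made-up PendingOnce, not a std type) is Pending on its first poll and Ready on its second, standing in for a single suspension point; the loop at the bottom plays the role of the caller, which gets control back on every Pending and must poll again, roughly the shape of the loop that .await expands to:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Made-up example future: Pending on the first poll, Ready(7) on the second,
// mimicking one suspension point inside an async fn.
struct PendingOnce {
    polled: bool,
}

impl Future for PendingOnce {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled {
            Poll::Ready(7)
        } else {
            self.polled = true;
            Poll::Pending
        }
    }
}

// Minimal no-op waker so we can poll by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = PendingOnce { polled: false };
    let mut polls = 0;

    // The caller drives progress: every Pending hands control straight back
    // here, and nothing advances until the caller decides to poll again.
    let value = loop {
        polls += 1;
        match Pin::new(&mut fut).poll(&mut cx) {
            Poll::Ready(v) => break v,
            Poll::Pending => continue,
        }
    };
    assert_eq!((polls, value), (2, 7));
}
```

Note that nothing here runs "in the background": between the two polls, the only code executing is the caller's own loop.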


I got that from the macro itself (more precise would have been: ‘yields control repeatedly’), but what difference does that make to the awaiting async fn, when it is apparently not meant to be able to customize that behaviour? If there were a way to ‘poll 100 times, then give up, don’t bother about the other future anymore and continue with a default return y instead’, sure, that would be semantically important (albeit fickle and unlikely to be very controllable). I would then also have understood the desire to call it truly a control-flow operator. In the currently intended stabilization interface, the polling part seems of interest (as in ‘can be influenced’) only to the runtime system, which will also only know that it happens but can’t further control it (well, it does supply the Waker, if I understood correctly)?

Are you sure we mean the same thing? I meant that the runtime system may choose to poll some other async fn before continuing with the caller to whom the async generator just yielded a value. It was certainly not clear to me that such an effect was guaranteed and not subject to the runtime system’s choices, but I’m not sure if that is what you intended to say.

It can’t, because an async fn/async generator only ever yields control to its direct caller (for both await and yield operations); it’s then up to the caller whether to do something else or continue yielding control up the call stack to the “runtime system” (if the caller is an async fn that is currently awaiting, it will continue yielding control, but other Future implementations could choose to do something else before or instead of yielding control).

If an async generator yields a value to its caller, that is likely a time when the caller would choose to process the value and poll the generator again to see if another value is ready; only once the async generator awaits some internal async operation would the caller likely yield control up the call stack to allow the runtime to poll other futures.
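A sketch of that last point, using made-up types: a wrapper Future is exactly such an intermediate caller, and it can run its own code on every poll before deciding whether to forward Pending further up the call stack:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Made-up wrapper: a caller layer that does some bookkeeping on every poll
// before deciding what to do with the inner future's result.
struct Instrumented<F> {
    inner: F,
    polls: u32,
}

impl<F: Future + Unpin> Future for Instrumented<F> {
    type Output = (F::Output, u32);
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        self.polls += 1; // work done by this intermediate caller on every pass
        let polls = self.polls;
        match Pin::new(&mut self.inner).poll(cx) {
            Poll::Ready(v) => Poll::Ready((v, polls)),
            // Only at this point does control continue up toward the runtime;
            // the wrapper could instead retry, time out, or substitute a value.
            Poll::Pending => Poll::Pending,
        }
    }
}

// Minimal no-op waker so we can poll by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Instrumented { inner: std::future::ready(5), polls: 0 };
    // The wrapper's bookkeeping ran once before the inner future completed.
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready((5, 1)));
}
```

This is what distinguishes a hand-written Future from an async fn caller: the async fn always forwards Pending, while a manual implementation chooses.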


In this thread, there seems to be a dispute between people who believe that await is the only thing you can do in an async block that has a special control-flow property, and people who believe that any other async fn called from within an async block has the same property.

I’m not 100% sure about the state of play in Rust because I haven’t been following the larger design carefully, but in other languages with async+await, I can say pretty confidently that the former perspective is correct. Calling an async function, by itself, always gives you back a future, even if you’re already in an async context. This is necessary, because it lets you choose to await the futures in a different order than you created them, or in no particular order. A concrete example might help; I’m deliberately writing it in Python because (a) I know how it works for sure in Python, and (b) it lets me shove the syntax argument completely to the side.

async def do_three_things_simultaneously():
    f1 = start_operation_1()
    f2 = start_operation_2()
    f3 = start_operation_3()
    return await asyncio.gather(f1, f2, f3)

The only suspension point for this function is at the await, even though start_operation_[123] are themselves async functions. You don’t have to worry about concurrent code getting a chance to run before all three futures are created. In fact, the Python implementation could optimize this into an ordinary function that returned the future produced by asyncio.gather without awaiting it first, and no correct caller could tell the difference.

(Despite this, I still firmly believe that, in Rust, await should be treated as a method and written as .await(), because my perspective is that the primary point of async+await is to conceal the state machine that an async block gets compiled into, which means you should think of await as not having any special control flow property. Rather, you should imagine that you are writing synchronous code for a cooperative multitasking environment, in which .await() is the only system call that can block and/or deliver a cancellation request.)

(Hmm, let’s look at translating the above into Rust with either postfix .await or prefix await. Modulo some type annotations, we would have

// RT1, RT2, and RT3 are the return types of start_operation_[123]
async fn do_three_things_simultaneously_postfix() -> (RT1, RT2, RT3) {
    let (f1, f2, f3) = (start_operation_1(), start_operation_2(), start_operation_3());
    aio::gather((f1, f2, f3)).await
}

async fn do_three_things_simultaneously_prefix() -> (RT1, RT2, RT3) {
    let (f1, f2, f3) = (start_operation_1(), start_operation_2(), start_operation_3());
    await aio::gather((f1, f2, f3))
}

fn do_three_things_simultaneously_in_caller() -> impl Future<Output = (RT1, RT2, RT3)> {
    let (f1, f2, f3) = (start_operation_1(), start_operation_2(), start_operation_3());
    aio::gather((f1, f2, f3))
}

All three are formally equivalent. I think the postfix version makes it more obvious that it is equivalent to the in_caller version; I think the prefix version makes it more obvious that await is a potentially expensive operation; and I think postfix .await() would communicate both things simultaneously.)