Would implicit `await` really be a bad idea?


I’ve previously suggested implicit await, and more recently suggested non-chainable await.

The community seems very strongly in favor of explicit await (I’d love for that to change, but…). However, the granularity of explicitness is what creates boilerplate (i.e. needing it many times in the same expression).

[Warning: the following suggestion will be controversial]

I’m curious if most of the community would compromise and allow “partial-implicit await”?

await response = client.get("https://my_api").send()?.json()?;

This is something akin to value de-structuring, which I’m calling expression de-structuring.

// Value de-structuring
let (a, b, c) = ...;

// De-sugars to:
let _abc = ...;
let a = _abc.0;
let b = _abc.1;
let c = _abc.2;
// Expression de-structuring
await response = client.get("https://my_api").send()?.json()?;

// De-sugars to:
// Using: prefix await with explicit precedence
// Since ? does not expect a Future, de-structure the expression at that point.
let response = await (client.get("https://my_api").send());
let response = await (response?.json());
let response = response?;

The compiler would need to do a bit more work, but nothing very magical: iterate through the expression and split it wherever a sub-expression's type changes between being an impl Future and not.

It also extends well to combinators or multiple futures, if we so choose. E.g.

await with_combinator = client.get("https://my_api").send().and_then(parse_json)?;

// De-sugars to:
// Since .and_then expects a Future, don't de-structure the expression at that point.
let with_combinator = await (client.get("https://my_api").send().and_then(parse_json));
let with_combinator = with_combinator?;
// If needed, given that the compiler knows these expressions a, b are independent,
// each of the expressions would be de-structured independently.
await (a, b) = (
    client.get("https://my_api_a").send()?.json()?,
    client.get("https://my_api_b").send()?.json()?,
);
// De-sugars to:
let a: impl Future = client.get("https://my_api_a").send();
let b: impl Future = client.get("https://my_api_b").send();
let (a, b) = (a.poll(), b.poll());
// return if either a or b is NotReady;

let a: impl Future = a?.json();
let b: impl Future = b?.json();
let (a, b) = (a.poll(), b.poll());
// return if either a or b is NotReady;
let (a, b) = (a?, b?);


That doesn’t seem too bad, as it’s only a compilation error, no? And we already have closures, whose exact type is determined by what goes on in their body. Or am I missing something?

That’s an interesting idea. :slight_smile:

Inspired by id, I wonder if await expression could mean “implicitly await anything directly in the expression (does not recurse into closure bodies)”. That’s a kind of middle ground between having to explicitly annotate every yield point and not doing it at all, and also between postfix and prefix notation (since chains containing many awaits become much easier).

I see that someone even proposed it in the main thread: https://github.com/rust-lang/rust/issues/57640#issuecomment-456209577


@ivandardi: But what would stop anyone from wrapping the whole body of an async function in an await block? :smiley:


Nothing :stuck_out_tongue:

The use case would be when you really didn’t want to run the futures concurrently, as in Async::run_all([future1, future2]). It would only benefit code where the user wants to chain futures. And even then, if they want to use future combinators, an await block would be moot: since the futures are combined, you’d only need to await the combined future. So it’s a specific use case, yes, but it does solve the postfix-syntax question by not having any syntax at all, while keeping the same benefits.


I don’t really support this, but I agree it’s not necessarily incoherent. If you squint, you can see async/await as merely a particular implementation of (green) threading, one where stack requirements are explicitly tracked through all potential call stacks, rather than specifying a fixed stack size (which is usually too big to have a large number of ‘threads’) or dynamically allocating stack frames (which is slow).


I hereby name this the “awaiterator” style. (Sorry, I couldn’t resist)

I think I like it, but there is already some potential confusion between iterators and streams and this could potentially add to that.


Depends on whether you rely on the Sendness locally, or just provide it as an API guarantee. If the latter, then you’ll have to litter your async fns with

async fn foo(bar: Bar) { ... }

fn _assert_foo_send() {
    fn assert_send<T: Send>(t: T) {}
    let bar = create_bar_somehow();
    assert_send(foo(bar));
}

I guess technically you should be adding this anyway if you intend to make it an API guarantee, and maybe we’ll get something like the “Sugar for bounding the return type” from here in the future.

Closures don’t depend on the evaluation order of their body, they only depend on the environment captured by it. Generators (and therefore async fn/blocks) extend this with a subset of their local variables, those that are live across a yield point.


Why can’t it work for Rust? Couldn’t the behavior be: “any impl Future<T> expression is automatically awaited inside of an async function/block”?

Since async functions return impl Future<T>, that strategy would work both for async functions and for regular functions that return impl Future<T>.
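For reference, this is what that looks like in today's stable Rust: an async fn and a plain fn returning impl Future are awaited identically, so an auto-await rule keyed on impl Future would indeed cover both. This is only a sketch; the busy-polling block_on below is a demo-only stand-in for a real executor, and all the function names are made up for illustration.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

async fn from_async_fn() -> i32 {
    1
}

// A regular function returning `impl Future` is awaited the same way.
fn from_plain_fn() -> impl Future<Output = i32> {
    async { 2 }
}

async fn use_both() -> i32 {
    // Today both call sites need an explicit await; under the proposed
    // rule the compiler would insert these awaits automatically.
    from_async_fn().await + from_plain_fn().await
}

// Minimal no-op waker, demo only.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-polls until the future is ready; fine for these always-ready demos.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(use_both()), 3);
}
```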


Or better (maybe worse for compiler writers),

Inside an async code block, if awaiting a Future would make the expression type-check, then await it; only leave it un-awaited otherwise. In other words, if awaiting would fail the type check, try again without awaiting.

I assume this would significantly increase compile cost, though, since the number of cases to check can quickly grow exponentially.


It would also be a bit confusing to users, I guess. Some types would somehow become active just by being left lying on the floor. E.g. if you have fn x() -> Vec<i32> and write x();, the result is just dropped and nothing happens. If you have fn y() -> SomeFuture and write y();, it doesn’t get dropped, it starts doing stuff. Why do some types just die while others act?


That’s not quite correct: something did happen! In particular, the call to x() happened (which involved some computation).

Similarly, when you call y(), stuff also happens, the only difference is this time it’s asynchronous computation.

So really the only difference between x() and y() is whether the computation is synchronous or asynchronous.


No, with x, the computation happens inside x. With y, something also has to happen outside of y: inside y, the future is merely constructed. It needs to be polled after y has returned.

I think this boils down to these questions:

  • Does an async function return a future, or is it a “magical” form of function? I believe the current design is that it returns a future.
  • Is a future a value like anything else (like vectors, like errors, like iterators), or something completely different? Again, to me, it seems the current design is that a future is a value.
  • So, to make the auto-await work, you need something to happen on the caller side of the function call. But then, how is that consistent with other ways of calling stuff?

Does that sound like technical details? Maybe it is, but in Rust I think technical details are not usually papered over.


Those are all good points. However:

  1. Rust already has some implicitness, e.g. IO is not represented in the type (unlike some other languages like Haskell).

  2. The fact that the automatic await only happens inside of async means that it is explicit, maybe just not as explicit as some would like.

So you are right that it comes down to: how “built-in” should Futures be? Should they just be a completely normal part of the language, like function calls and IO?

Or should they be treated as a regular Rust type, and not magical at all?

And would the answer be different if Futures had always been a part of Rust, rather than added later?


I would even argue that awaiting immediately should be avoided wherever possible, and should not be the normal or default operation. Awaiting futures immediately can be a recipe for maximizing latency.

The first futures API that I used was an internal library at Amazon back in 2005. The company had developed many best practices for minimizing latency. One of the rules was to schedule futures as early as possible, and await them as late as possible. This means that typically there is a lot of space between where a future is created and where it is resolved.

For example, we would NOT do this. This would cause the parent task to block on the child task before starting its next chunk of work:

let x = await async_work(); // bad
let y = do_some_stuff();
do_more_stuff(x, y);

And we would NOT do this, which requires the parent task to complete the first chunk of work before scheduling the child task (which will then block the parent).

let y = do_some_stuff();
do_more_stuff(await async_work(), y); // bad

The correct way to write the code is this, which allows the child task and parent task to run concurrently up until the point where the child task’s result is needed:

let x = async_work();
let y = do_some_stuff();
do_more_stuff(await x, y); // good

(These optimizations get more complex when a parent task spawns several child tasks, some of which depend on each other, but the principles remain the same. Typically you end up with a bunch of async calls near the start of a function, and a bunch of awaits near the end.)

I know these practices are not yet universal, and many programmers do tend to await futures immediately. However, as async programming matures, I think that more people will rediscover these rules. We should not adopt any language design that makes the “good” code above harder to read/write than the “bad” code.


This makes perfect sense in a language like JavaScript where Promises run immediately.

But in Rust Futures are delayed until first polled, so it’s much much harder to preemptively run a Future like that.

Instead, it’s better to make the parallelism more explicit by using join! or similar.

Do you have any ideas about how to do preemptive work in Rust (aside from join!)?
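For illustration, here is roughly what a join!-style combinator does under the hood: poll both futures from a single task, so each one makes progress on every poll instead of one finishing before the other starts. This is a hand-rolled sketch, not the real futures::join! implementation; Join2, YieldOnce, and the busy-polling block_on are all demo-only stand-ins.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Drives two boxed futures to completion within a single poll loop.
struct Join2<T, U> {
    a: Pin<Box<dyn Future<Output = T>>>,
    b: Pin<Box<dyn Future<Output = U>>>,
    ra: Option<T>,
    rb: Option<U>,
}

impl<T: Unpin, U: Unpin> Future for Join2<T, U> {
    type Output = (T, U);
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<(T, U)> {
        let this = self.as_mut().get_mut();
        // Poll *both* children each time, so both make progress.
        if this.ra.is_none() {
            if let Poll::Ready(v) = this.a.as_mut().poll(cx) {
                this.ra = Some(v);
            }
        }
        if this.rb.is_none() {
            if let Poll::Ready(v) = this.b.as_mut().poll(cx) {
                this.rb = Some(v);
            }
        }
        if this.ra.is_some() && this.rb.is_some() {
            Poll::Ready((this.ra.take().unwrap(), this.rb.take().unwrap()))
        } else {
            Poll::Pending
        }
    }
}

// Pending on the first poll, Ready on the second: a fake "wait point".
struct YieldOnce(bool);
impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal no-op waker and busy-polling executor, demo only.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

pub fn demo() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let (l1, l2) = (log.clone(), log.clone());
    let a = async move {
        l1.borrow_mut().push("a: first half");
        YieldOnce(false).await;
        l1.borrow_mut().push("a: second half");
    };
    let b = async move {
        l2.borrow_mut().push("b: first half");
        YieldOnce(false).await;
        l2.borrow_mut().push("b: second half");
    };
    let joined = Join2::<(), ()> { a: Box::pin(a), b: Box::pin(b), ra: None, rb: None };
    block_on(joined);
    // The halves interleave instead of `a` finishing before `b` starts.
    log.take()
}

fn main() {
    println!("{:?}", demo());
}
```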


This all applies equally well to Rust futures. Even in the single-threaded case, if the child task is performing IO then the parent task can continue to do work after spawning the async IO task, and delay blocking until the IO result is needed. The executor running the parent task will poll the child task at the point where the parent task awaits it.


No, it doesn’t. This is a rather subtle but major difference between how futures work in Rust and in C#/JS (and probably others, but those are the ones I am familiar with). Rust futures do not run in parallel with synchronous work being performed within the same task, and cannot be expected to make any progress until awaited.

If async_work() spawns off a separate task on the executor and just returns a channel that will be provided with the result of that task, then yes it will run in parallel; but at that point you lose a lot of what makes Rust futures unique, instead of having a highly optimizable state machine you are basically back to the overhead of having a lot of dynamic tasks calling continuations when complete.
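A small std-only demonstration of this laziness (the busy-polling block_on is a demo-only stand-in for a real executor): constructing an async block creates the state machine value, but runs none of its body; the body only executes on the first poll.

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal no-op waker, demo only.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-polls until ready; fine for this immediately-ready demo future.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Returns (body ran before poll?, body ran after poll?, result).
pub fn demo() -> (bool, bool, i32) {
    let started = Cell::new(false);
    // Building the async block creates a state machine; no body code runs.
    let fut = async {
        started.set(true);
        7
    };
    let before = started.get(); // still false: the future hasn't been polled
    let v = block_on(fut);      // the first poll finally executes the body
    let after = started.get();  // now true
    (before, after, v)
}

fn main() {
    let (before, after, v) = demo();
    assert_eq!((before, after, v), (false, true, 7));
    println!("ran before poll: {}, ran after poll: {}", before, after);
}
```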


Rust futures may not run code in parallel with synchronous work in the same task (except in cases as you note where threads are involved). But they can allow the OS, for example, to perform other work concurrently with the parent task.

A concrete example: if the async work function above is tokio::net::TcpStream::connect, then at the point where it is called, it immediately makes a connect syscall.† It doesn’t block on the result of the call, but returns a future that can be polled for the result. However, the libc::connect call happens when the future is created, so the network stack will begin talking to the remote host at that point, and this happens concurrently with whatever code the task runs next.

†Note: tokio::net::TcpStream::connect calls mio::net::TcpStream::connect which calls std::net::TcpStream::connect which calls libc::connect, at least on Unix. (Some intermediate stack frames omitted for conciseness.)


This might be true for some raw Futures (e.g. connect, as you noted), but it’s not true in general, and it’s not true for async fns (which are always delayed).

So as soon as there’s even the slightest bit of abstraction (e.g. an async fn calling connect), things will be delayed.

That’s why I asked if you had any ideas for preemption, since it will be necessary to manually preempt Futures in Rust.

And that manual preemption is necessary with both implicit await and explicit await. In other words, explicit await doesn’t really help much.


Ah, yes. That’s very unfortunate. Sorry, I don’t have any suggestions for fixes, but I agree this needs a solution, in particular if Rust is to be used in an environment like the one I was describing (where handling a single request might require async calls to dozens of micro-services, and latency is a high priority).


My point is that in the current design, with a lot of tooling and crates already around, futures are a regular type. To make them otherwise, a step back to the design table would be needed, and a lot of existing stuff thrown away.