Explicit future construction, implicit await


A call to a normal function that returns a future runs its entire body and produces a value of its return type. Just because its return type is a future representing some incomplete work does not mean that the function call itself is lazy.

(This is in contrast to the main RFC, where you can call a function but not run its whole body, and get back a future even though the function’s declared return type is something else.)
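As a sketch of that distinction (the function and counter names here are mine, not from any proposal): a plain fn whose return type is a future still runs its whole body the moment you call it; only the async block it returns is deferred.

```rust
use std::future::Future;
use std::sync::atomic::{AtomicU32, Ordering};

static STEPS: AtomicU32 = AtomicU32::new(0);

// A normal (non-async) function whose return type happens to be a future.
// Its body runs eagerly at the call site; only the returned async block
// represents incomplete, deferred work.
fn start_work() -> impl Future<Output = u32> {
    STEPS.fetch_add(1, Ordering::SeqCst); // executes immediately when called
    async { 42 } // this part runs only when the future is polled
}

fn main() {
    let _future = start_work();
    // The body already ran, even though nobody has polled the future yet:
    assert_eq!(STEPS.load(Ordering::SeqCst), 1);
}
```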

I find this characterization rather unhelpful. I do not consider Rust “one more Kotlin or Go,” so I’d ask that you not put words in my mouth.

More generally, I realize there is disagreement over whether implicit await is worth its downsides; that’s to be expected. So let’s make our cases without trying to divine some kind of bad intentions in each other, and without demanding that things be inserted or struck from each other’s arguments.

This is spot-on. The only difference from your example is that you can’t call the async functions measure_time or await from the sync function main. To enter the async world you need to explicitly construct a future and run it on some executor:

fn main() {
    // This doesn't do anything. It just creates a future:
    let future = async { measure_time() };

    // ERROR: This *would* do stuff if we were already in an async function:
    // measure_time();

    // ERROR: This *would* do stuff if we were already in an async function:
    // futures::await(future);

    // This submits the future to an executor and waits for it to complete:
    futures::block_on(future);
}


TL;DR I go on a mad sleep-deprived rant musing about async

If it helps comprehension, it might be worth replacing async { ... } with async || { ... } to make the deferred execution explicit, which in turn highlights the main goal of the RFC: clearing up the lazy/eager semantics of when the async fn body actually runs. There could then be an impl<T, F> Future<T> for F where F: AsyncFn() -> T or whatever to make it actually useful.

This is not necessarily a suggested direction for Rust to take; this is more just an exploration of what implicit await could look like in a rust-y world.

Clean slate, here’s the trait taxonomy that I think would clearly lend itself to this RFC’s intent. (But it completely ignores the pinning problem.) @rpjohnst, correct me if I missed anything egregious.

// The Fn traits, but with "rust-async-call" calling conventions
// "rust-async-call" handles CPS or task::Context polling or however async is implemented

#[lang = "async-fn"]
trait AsyncFn<Args>: AsyncFnMut<Args> {
    type Output;
    #[unstable(feature = "async_fn_traits", issue = "0")]
    extern "rust-async-call" fn call(&self, args: Args) -> Self::Output;
}

#[lang = "async-fn-mut"]
trait AsyncFnMut<Args>: AsyncFnOnce<Args> {
    type Output;
    #[unstable(feature = "async_fn_traits", issue = "0")]
    extern "rust-async-call" fn call_mut(&mut self, args: Args) -> Self::Output;
}

#[lang = "async-fn-once"]
trait AsyncFnOnce<Args> {
    type Output;
    #[unstable(feature = "async_fn_traits", issue = "0")]
    extern "rust-async-call" fn call_once(self, args: Args) -> Self::Output;
}

That’s it. This is enough for async fn with implicit await by itself:

async fn run() {
    let left = async || { get_left() };
    let right = async || { get_right() };
    let [left, right] = async::all([left, right]);
    let output = process(left, right);
    println!("{}", output);
}

And even enough for moving from a sync context to an async context:

fn main() {
    let task = async || { run() };
    futures::block_on(task);
}

The operations required here are:

  • Calling an extern "rust-async-call" fn in a sync context is not allowed
  • Calling an extern "rust-call" fn in an async context runs it as normal for a sync context
  • Calling an extern "rust-async-call" fn in an async context runs it to completion
    • This means that a NotReady response from the called fn bubbles up through the caller
  • async || { ... } creates a closure which runs its internal code in an async context
    • This automagically implements the correct AsyncFn trait
      • So it can be called from within an async context
    • It is clear that no execution of code has happened yet
  • The standard library provides a block_on sync fn for driving an AsyncFnOnce()
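For reference, the block_on piece at least is implementable in today’s stable Rust on top of Future::poll. This is my own minimal sketch (parking the current thread until the future’s waker fires), not the standard library’s actual API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Wakes the blocked thread by unparking it.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion on the current thread, sleeping between polls.
fn block_on<F: Future>(future: F) -> F::Output {
    let mut future: Pin<Box<F>> = Box::pin(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    let answer = block_on(async { 21 * 2 });
    assert_eq!(answer, 42);
}
```

A production executor would do more (task queues, work stealing), but this is all that “driving an AsyncFnOnce” fundamentally requires.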

The initialization pattern still works:

// only arg1's lifetime is captured
fn foo<'a>(arg1: &'a str, arg2: &str) -> impl AsyncFn() -> usize + 'a {
    // do some initialization using arg2
    move async || {
        // asynchronous portion of the function
    }
}

async fn main() {
    let a1 = &*"a1";
    let f;
    {
        let a2 = &*"a2";
        f = foo(a1, a2);
    }
    f(); // only a1's lifetime is captured, so this outlives a2's scope
    foo("bar", "baz")();
}

I don’t know enough about how async/await/futures are implemented to go the next step into Future or Async or whatever you want to call the platform on which async fn are made to work. I think the important part of this formulation is actually that it is independent from whatever machinery powers an extern "rust-async-call". In an actual implementation it would be built on top of CPS or Future::poll.

If a task really needs to be executed sequentially, with a guarantee that none of the functions it calls are async and will cause a suspension, then I do think it should be a sync function and not an async one.

Being completely honest here, I do not really understand much, if any of how async/futures are implemented. I feel like I have a handle on using futures, but only just. I’ve probably overlooked multiple things in the draft above, and maybe breezed over important details. It doesn’t help that I’m finishing this up around 3:30 in the morning.

The biggest pitfall in the main RFC is that let x = (async || { ... })(); doesn’t run ..., it creates a delayed execution blob and gives that to x. It does not help that in many (most?) other languages with async/await, calling an async fn not only eagerly runs it to the first await but, more importantly, also gets it scheduled. In JavaScript, a Future<T> means that the computation to get your T is already going on. In Rust, a Future<Output=T> is some deferred computation, the same way a closure is, which needs to be polled/run/awaited to get at its output.
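To make the closure analogy concrete (this example is mine): both a closure and an async block are inert values, and dropping a never-polled future means its body simply never runs.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static RUNS: AtomicU32 = AtomicU32::new(0);

fn work() {
    RUNS.fetch_add(1, Ordering::SeqCst);
}

fn main() {
    let closure = || work();       // deferred: nothing has run yet
    let future = async { work() }; // equally deferred, like the closure
    assert_eq!(RUNS.load(Ordering::SeqCst), 0);

    drop(future); // never polled => its body never executes
    assert_eq!(RUNS.load(Ordering::SeqCst), 0);

    closure(); // a closure runs only when called...
    assert_eq!(RUNS.load(Ordering::SeqCst), 1);
    // ...and a future runs only when polled/awaited.
}
```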

I don’t think the “lazy”/“eager” futures confusion is about whether an async function runs to the first await (though it can be). I really think that the main confusion is whether an async function is started (and thus the executor could continue it when it has a free moment) before you’ve explicitly decided to wait on it having a result.

The main benefit of this proposal is that this isn’t a pain point. Deferred processing is explicit, and a function call is a function call. If you’re in an async context, you can call into some asynchronous subroutine. And an asynchronous routine can tell the executor that it’s waiting on some other routine or outside resource, and to do some other work in the meantime.

Overall, though I think this is an interesting design space, I don’t think it’s right for Rust anymore. I don’t know exactly how to express it, but the core team’s RFCs seem to fit better. The way of processing async discussed here feels higher level than the zero-cost abstractions Rust is built on.


Exactly. Futures in Dart (even in Dart 1) did always get started (-> “eager”). They just went from “eager + async start” to “eager + sync start until first await” (like in JavaScript; you could call it “double eager”). The confusion° with Rust’s futures comes from them being truly lazy.

° can be solved by marking futures with must_use. Thus, forgotten awaits trigger a warning and there is less°° confusion, because it can no longer happen that the code silently isn’t executed. @rpjohnst’s proposal says that implicit await helps with the confusion because “Whenever you see some_function(…), it will run to completion regardless of whether some_function is sync or async.” We can achieve the same by not forgetting the await.

°° @rpjohnst’s implicit await proposal marks more clearly, through async {} blocks, where code isn’t executed immediately. In other words, the laziness is marked more clearly. But this lesson cannot be learned from Dart, because Dart doesn’t have and never had true laziness, and there are also no async {} blocks in Dart! Nevertheless, marking the laziness has some appeal. (It requires a lot of typing, though, and the lack of marking hasn’t been a problem for me in JavaScript.)

According to my own testing the change in Dart 2 didn't happen (yet). async functions in Dart 2 still don't execute up to the first suspension point synchronously (as of Dart version 2.0.0-dev.49.0). Expand this section if you want to see how I tested this. If I did anything wrong, tell me via PM (I don't want to get off-topic)
import 'dart:async';

foo() async {
  print('1'); // Should run synchronously in Dart 2 but does not?
  await new Future.delayed(const Duration(milliseconds: 1));
  print('3'); // Runs asynchronously
}

main() async {
  var future = foo();
  print('2');
  await future;

  // Expected if it is implemented: 1, 2, 3
  // What we get: 2, 1, 3
  // => Not implemented
}

What /r/munificient is describing in the Reddit post is the same as how it works in JavaScript. The equivalent of the code above in JavaScript:

async function foo () {
  console.log('1') // Runs synchronously
  await pause(1)
  console.log('3') // Runs asynchronously
}

;(async () => {
  var promise = foo()
  console.log('2')
  await promise

  // What we get: 1, 2, 3
})()

function pause (duration) {
  return new Promise(resolve => setTimeout(resolve, duration))
}
Maybe they’ll switch to “double eager”, maybe not. But, they didn’t yet.


IMO this double function call looks strange. It’s necessary, but it still reads oddly.

@rpjohnst’s proposal also requires a different notation when the initialization pattern is used.

It’s great that you bring up the initialization pattern right away, though. @Nemo157 said about the initialization pattern: “the main reason I have for wanting separate construction and execution phases is for dealing with data conversion and temporary borrows”. It’s easy to see why this makes it important. Consequently, it’s important that its use is intuitive.

I like the intention of wanting to emphasize the similarity to closures which also don’t run immediately. But, this syntax is probably not so practical. Nest this in an actual closure and it looks rather overwhelming:

let make_task = || async || { run() };
let task = make_task();

let make_task2 = |x| async || { run2(x) };
let task2 = make_task2(42);

It’s IMO not worth the gain. Also, the || always needs to be empty because it’s just a notation and the system expects no parameters.

Acknowledged. It’s nice though, that you’re bringing fresh ideas to the table!

The core team’s RFC needs to add fewer new parts to the language to achieve a similar goal. The beauty lies IMO in its simplicity.

I can tell you right now that @rpjohnst will take issue with his proposal not being associated with zero cost :slight_smile:


I was actually really excited about this idea. Why do we need async blocks at all in this proposal? Suppose we just have async || { /* some body */ } which generates a struct implementing FnOnce() + Future. The implementation of () can simply be futures::block_on(self).

I believe that would obviate the need for async blocks, it would make the intent more obvious, and it would allow my_async_closure() to behave like calls on other closures!


That idea came up in the main RFC thread and was decided against (@aturon’s “partial application” framing). It breaks down when you look at async closures that take arguments, because it starts “mixing layers.” Async closures can’t implement Future in general, because they don’t have their arguments yet.

It breaks down even further under this proposal, where async blocks are the only way to obtain a future for async code. Special-casing zero-argument async closures could work (as it could under the main RFC), but it’s not great- especially if its FnOnce impl uses block_on instead of just constructing the future, which would mean some_async_closure() blocks the event loop instead of being awaited.

It is interesting that async || { .. } looks more like something whose body is deferred. And @CAD97’s point of decoupling the async calling convention from the Future trait is also interesting- it would allow for things like the CPS transform I mentioned as a future expansion. But that can be done without removing async blocks.


block_on() is also not the only executor you could choose. I think there is no obvious default choice that works for all use cases. For example, spawn_with_handle() is also great; it runs the future on another thread from a pool.


I like the proposal.

One thing that caught my attention, though, is how async { ... } seems to be the main semantic difference from Kotlin. In Kotlin, when you want to create a fresh coroutine context, you would launch { ... }. The main difference is that launch's argument would start executing immediately, as opposed to on demand like in this RFC. Kotlin’s way seems better precisely because of the problem that has been mentioned many times already: it’s easy to forget that you need to start it with block_on(future) or some other way. Ironically, at least one person has already been confused by that even in this thread. So I would prefer to keep the semantics faithful to Kotlin in this instance by making async { ... } start execution immediately. If it is indeed required to make it possible to get a reference to the future before starting it, it might be better to introduce some other way of doing it, e.g. a macro like future!() or even a specialized keyword like future { ... }.

Perhaps it’s even better to get rid of async {} form entirely and, following Kotlin’s lead, replace it with two primary entry points: launch {} for asynchronous computations and do_blocking {} for synchronous.


Rust has already pretty much decided that its futures are lazy in the sense that they don’t do anything unless polled/awaited. There are benefits to that: simplicity of cancellation, agnosticism about the executor, and the ability to run without allocating, among others you can read more about over at the main RFC for futures.

In a higher level language, I don’t disagree that immediate scheduling makes for a better environment (for at least some presentations). But it doesn’t fit Rust’s futures model. That’s why my exploration focuses on the deferred behavior and making that obvious.


To be a bit more specific, from the point a Rust future is first polled, it is allowed to contain references to itself. (This is because the future is essentially a stack frame, and stack frames can contain pointers to other values on the stack frame.)

However, a self-referential value cannot be moved, because those pointers would still point to its old location. This means a future must be moved to its final location in memory before it begins execution.
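The hazard can be shown with an ordinary self-referential struct (my own illustration, using a raw pointer in place of the borrow a future would hold across an await): after a move, the stored pointer still refers to the old location.

```rust
// Stand-in for a suspended future's stack frame that borrows its own data.
struct SelfRef {
    value: u32,
    ptr: *const u32, // intended to point at `value` in this same struct
}

fn main() {
    let mut frame = SelfRef {
        value: 7,
        ptr: std::ptr::null(),
    };
    frame.ptr = &frame.value; // self-reference established
    assert_eq!(unsafe { *frame.ptr }, 7);

    // Moving the struct (here, onto the heap) is a plain byte copy.
    let moved = Box::new(frame);
    // The pointer still targets the *old* stack location, not the new field:
    assert_ne!(moved.ptr, &moved.value as *const u32);
}
```

This is exactly why a future must reach its final address before it is first polled, and why Pin exists to enforce that.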

The problem, then, is that a launch { .. } construct would have to be the thing responsible for selecting the future’s final location. If every individual future were always heap-allocated, this would be fine- launch could do the allocation and all would be well. In fact, this is how C++ coroutines work.

But in Rust, we don’t want every individual future to be a separate allocation- while top-level futures will usually be heap-allocated, sub-futures should be stored directly in their parent futures. And in #![no_std] applications, top-level futures will be stored somewhere other than the heap.

So while I do like the way Kotlin handles things with launch and async, I don’t see how that could be made to work in Rust without losing out on the ability to a) borrow local variables in async fns, or b) combine a whole tree of futures into a single allocation.


Perhaps async blocks should be implicitly marked as #[must_use].


@rpjohnst’s proposal introduces async {} blocks as a “syntax for constructing a future” (like in the main RFC). Consider voting for must_use for futures in my comment in the RFC.


Shouldn’t both the definition and the usage be explicit?

Yeah, sure, but

  1. await is not a normal synchronous function call.
  2. in async code, most functions are expected to be async, so marking the exception (quasi-blocking) is a Good Idea™.

This, this, this. Especially not hard-wired into the core language. Granted, there are people who might find it easier/nicer than explicit futures. That in itself is not a very strong argument to compensate for the burden of a (major) new language feature.


Some thoughts on possible solutions.

First of all, let’s separate the two cases of spawning a fresh executor context and achieving concurrency inside an existing context. In Kotlin, the former is called launch, the latter async, but async in this RFC means something else (when used as a modifier in function declarations), so I will use spawn in place of Kotlin’s async. According to you, it is okay to allocate launch's memory separately. So we only need to figure out how to manage spawn's memory.

While I don’t have a well-formed idea on how to solve it, here are some questions:

  1. Why can’t we implement self-pointers in a special way that would update them whenever self is moved? That would obviously introduce some overhead at runtime, but would not require external allocation. Of course, the language and/or standard library would have to be extended in some way to support it. So I realize it’s not simple and I’m not saying it’s necessarily a good mechanism for spawn, but has the idea been seriously discussed?
  2. Can we allocate the memory non-locally in order to support the majority of use cases? E.g., we could allocate it in the stack frame corresponding to the outermost lexical scope, i.e. the function scope. This is an idea, I’m not sure it’s really doable. Certainly becomes more difficult if spawn is called inside a loop or from a recursive closure. So this is just a starting idea for conversation.
  3. Can we require the user to provide us with some way to allocate memory for every call to spawn? E.g.:
struct JobsPool { /* vec-based ... */ }

trait JobAllocator<T> {
    fn alloc(&mut self, future: Future<T>);
}

impl<T> JobAllocator<T> for JobsPool {
    /* alloc() creates a job and pushes it to the vec ... */
}

async fn foo() {
    let jobs = &mut JobsPool::new();
    for i in 0..10 {
        // this form of `spawn` takes a mutable reference to `impl JobAllocator` as an argument
        spawn(jobs) { /* ... */ }
    }
}


Yes, offset pointers for movable self-references have been discussed before (though I don’t have any links right now). They’re basically confirmed to be a non-starter because Rust relies on the fact that moving a structure is just a memcpy – there is no such thing as a move constructor in Rust.

Because once a future is started it must be pinned, the only way to make combinators zero cost is to allow you to move the child futures in, which means they cannot have been started yet. Thus we can’t start futures until we’re ready to drive them to completion.
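A sketch of why that ordering falls out of the types (the names here are mine): a join-style combinator takes its children by value, so they are moved into the parent before any of them could have been pinned and polled.

```rust
use std::future::Future;

// The children are stored inline, so the combined future is one value
// (and, at the top level, at most one allocation).
struct Join<A, B> {
    a: A,
    b: B,
}

fn join<A: Future, B: Future>(a: A, b: B) -> Join<A, B> {
    // Taking `a` and `b` by value moves them; this is only legal because
    // neither has been pinned (i.e. started) yet.
    Join { a, b }
}

fn main() {
    let a = async { 1u32 };
    let b = async { 2u32 };
    let (size_a, size_b) = (std::mem::size_of_val(&a), std::mem::size_of_val(&b));
    let joined = join(a, b);
    // Both child futures live directly inside the parent value:
    assert!(std::mem::size_of_val(&joined) >= size_a + size_b);
}
```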


And before anyone asks “why don’t we add move constructors so offsets can work”, regardless of all the problems move constructors introduce, that still wouldn’t give us a general solution to the generators problem. See the “Offset-based Solutions” section of https://boats.gitlab.io/blog/post/2018-01-30-async-ii-narrowing-the-scope/ (in fact, just read that entire series of posts while you’re there)


An interesting discussion of this issue is happening in the structured concurrency discourse: https://trio.discourse.group/t/structured-concurrency-in-rust/73/10


I’ve just seen this proposal for the first time (through the structured concurrency forum link).

I actually think the proposal has buried a very important aspect, which wasn’t really discussed: the difference between async functions with run-to-completion semantics and the currently proposed futures. This aspect is somewhat orthogonal to whether awaits should be implicit or explicit.

I have elaborated a bit on the structured concurrency forum on what the difference between those things is, and what kind of zero-cost abstractions it enables.

What seems to be missing in this proposal is a manually implementable Future type that also provides a run-to-completion guarantee, in order to exploit the system manually. You would potentially require two Future types:

  • One that has also run-to-completion semantics, is always pinned and will never get dropped before Poll::Ready.
  • An always cancellable one, which might work fine without Pin.


I don’t see how you could guarantee (in a typesafe sense) that a future will be run to completion, since the executor not only must not drop it, but also must continue polling it for it to make progress. Such a guarantee can’t be encoded in any kind of adapter type, as far as I know, only directly on the executor’s spawn API (which would just assert through documentation that it will continue polling the future it receives until the future is ready or the program halts). Most executors guarantee this already, of course, so I assume you mean some sort of WillComplete adapter, which I don’t think is possible.

Also, it’s important to note, in regard to the longer post you linked to, that futures can only be cancelled at points where they yield a NotReady, allowing the implementer of each future full control over the atomicity of its operations. And while it’s not guaranteed that a hard-cancelled future won’t be leaked, all well-behaved combinators and executors will drop futures they stop executing, allowing some amount of cleanup on cancellation in correctly implemented code.


FWIW (and IINM), I don’t think “hanging onto it without polling it” is any more a safety issue than the possibility that the OS scheduler might completely starve any given thread. It doesn’t cause UB or even inconsistent-state issues, just… other kinds of problems.