On why await shouldn't be a method

It doesn't now, but the consensus of the thread specifically about that seems to be that it should, so I am assuming that change will be made.

I understood you to be writing deliberately contrived code in that post. Yes, in principle field access can run arbitrary code and block, but that’s not what Deref is for.

  • .await does introduce a branch into the surface control flow. It must do this because the async fn returns a Future, and the Future's poll method must return to the Executor. It cannot synchronously block. Thus, when a .await happens in the code, there is an implicit return (just like ?), and an implicit loop.

Saying this indicates to me that either you do not understand the distinction I am trying to make between the “surface control flow” and the “state machine,” or you reject it. Let me be excruciatingly precise about what I mean so we can rule out the first possibility.

Here’s a simple async function. Just to be stubborn, I’m writing it with my preferred syntax.

async fn a_then_b(Context& ctx) -> Result<Data, IoError> {
    let a_output = do_a(ctx).await()?
    let b_output = do_b(ctx, a_output).await()?
    postprocess(b_output)
}

The surface control flow of this function is the same as the control flow of a hypothetical synchronous version:

fn sync_a_then_b(ctx: &Context) -> Result<Data, IoError> {
    let a_output = sync_do_a(ctx)?;
    let b_output = sync_do_b(ctx, a_output)?;
    postprocess(b_output)
}

The only early returns in the surface control flow are due to the ? operators. Early returns due to await exist only in the state machine, which is something not entirely unlike

enum AThenBState {
    BeforeA(DoA),
    BeforeB(DoB),
}

struct AThenB<'a> {
    state: AThenBState,
    ctx: &'a Context,
}

fn a_then_b<'a>(ctx: &'a Context) -> AThenB<'a> {
    AThenB { state: AThenBState::BeforeA(do_a(ctx)), ctx }
}

impl<'a> Future for AThenB<'a> {
    type Output = Result<Data, IoError>;

    // (simplified: the real poll also takes Pin<&mut Self> and a Context)
    fn poll(&mut self) -> Poll<Self::Output> {
        match self.state {
            AThenBState::BeforeA(ref mut future_a) => {
                if let Poll::Ready(a_output) = future_a.poll() {
                    match a_output {
                        Err(e) => return Poll::Ready(Err(e)),
                        Ok(a_output) => {
                            self.state = AThenBState::BeforeB(do_b(self.ctx, a_output));
                            // (a real desugaring would loop and poll the
                            // new future immediately rather than return)
                            return Poll::Pending;
                        }
                    }
                } else {
                    return Poll::Pending;
                }
            },
            AThenBState::BeforeB(ref mut future_b) => {
                if let Poll::Ready(b_output) = future_b.poll() {
                    match b_output {
                        Err(e) => return Poll::Ready(Err(e)),
                        Ok(b_output) => {
                            return Poll::Ready(Ok(postprocess(b_output)))
                        }
                    }
                } else {
                    return Poll::Pending;
                }
            }
        }
    }
}

Skimming the docs for futures-rs gives me the impression that it would be even more complicated than that in reality, but this should be sufficient for illustration. Details of how you write an impl Future by hand are not the point. The point is first that await’s early returns exist in the state machine and not in the surface control flow, and second that the state machine is a big ball of hair that you don't want to have to think about most of the time. In fact, it's so hairy that thinking about it is liable to confuse you into writing bugs.
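For reference, the real trait in today's std is shaped like the sketch below; the Pin and Waker plumbing is exactly the extra hair alluded to above. The `Ready` type here is a made-up minimal example, not anything from futures-rs, included only to show the actual poll signature:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// The real `poll` takes `Pin<&mut Self>` (because the compiled state
// machine may hold references into itself) and a `Context` carrying the
// `Waker` the executor uses to reschedule the task. The smallest
// possible impl: a future that is ready immediately.
struct Ready<T>(Option<T>);

impl<T: Unpin> Future for Ready<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<T> {
        // `Option::take` moves the value out through the mutable borrow.
        Poll::Ready(self.get_mut().0.take().expect("polled after completion"))
    }
}
```

Even polling this trivial future by hand requires manufacturing a Waker first, which is itself part of the hair.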

It creates a branch in the state machine, but this is invisible in the surface control flow. How is that any different from saying that ? creates a return which is invisible to the surface control flow? In both cases they're creating implicit returns.

The difference is what you have to be aware of, in order to understand what the function does. The ? desugar turns sync_a_then_b into

fn sync_a_then_b_desugar(ctx: &Context) -> Result<Data, IoError> {
    match sync_do_a(ctx) {
        Err(e) => Err(e),
        Ok(a_output) => match sync_do_b(ctx, a_output) {
            Err(e) => Err(e),
            Ok(b_output) => Ok(postprocess(b_output))
        }
    }
}

and you have to know that in order to understand the function. Therefore, the early returns created by ? are part of the surface control flow.

By contrast, you do not have to be aware of the state machine in order to understand what await does, so its early returns are not part of the surface control flow.

To be clear, the “blocking” that an .await does is very different from the normal synchronous blocking (e.g. synchronous I/O).

It’s different both from a mental standpoint, and also very different from an implementation standpoint.

Yes, it is very different in implementation, but I do not agree that it is very different in terms of the mental model that an end programmer needs to have. In fact, I think an end programmer—by which I mean anyone who isn’t involved with writing the executor itself—should use a mental model in which async functions are running synchronously, in a cooperative multitasking environment in which await is the only blocking system call.
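To make that mental model concrete, here is a sketch of a toy single-task executor written against plain std. The names block_on and noop_waker are my own for this illustration (a real executor parks the thread until woken rather than busy-polling), but the shape is the point: at the executor boundary, awaiting really is an ordinary blocking call.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing Waker: good enough for a toy that just polls in a loop.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

// From the caller's point of view this is an ordinary blocking call: it
// does not return until the future completes. Every `.await` inside
// `fut` ultimately bottoms out in this synchronous loop.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            // A real executor parks the thread here; we just busy-wait.
            Poll::Pending => std::thread::yield_now(),
        }
    }
}
```

With this, `block_on(async { do_a().await })` behaves, from the caller's seat, exactly like a synchronous call into a cooperative scheduler.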

I think this because my experience with writing async code myself and with debugging async code written by other people (specifically in Python and JavaScript) says that this is the most useful mental model for bug finding purposes. For concreteness, look at the bug described by @theduke (more details) over in the other thread. The key insight they needed to have, in order to resolve the bug, was that an await in the wrong place caused their coroutine to block waiting for the wrong (combination of) events. This insight would have been easier to come to if they had been constantly aware that await is a suspension point. It would have been harder to come to if they had been constantly aware that await causes an early return within a complicated state machine that is created by async.

Rust likes to make the programmer aware of low-level details

I’m going to point at withoutboats’ third requirement for zero-cost abstractions here:

Improve users’ experience: The point of abstraction is to provide a new tool, assembled from lower level components, which enable users to more easily write the programs they want to write. A zero cost abstraction, like all abstractions, must actually offer a better experience than the alternative.

Not having to be aware of the state machine is what makes async/await a valuable abstraction over raw futures. But if we go too far, and make it too easy to forget that await blocks the surface control flow, then it stops being a better experience again.

Have you or some other team member written out an explanation of why you do see it as unnecessary noise? Niko wrote a couple very good summary posts in the “A final proposal for await syntax” thread: […]

Thanks, I saw these go by but had forgotten what they said due to the length and speed of the discussion.

You will not be surprised to hear that I very much agree with the observation that

since I just spent a bunch of time arguing that that is the correct intuition for people to have, that it facilitates debugging in a way that no other mental model achieves; and that I think Niko is wrong when they say, a little further down,

When writing Async I/O code, I imagine one has to do a lot of awaiting, and most of the time you don’t want to think about it very much.

My experience has been that you do have to do a lot of awaiting and you do want to think about it every single time, because “the scheduler may choose to run another thread here” translates directly to “so you better be holding the correct set of locks at this point, and you better be maintaining all the invariants observable by others, and you better have chosen the correct thing to wait for.”
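That “another task may run here” hazard can be demonstrated with a toy cooperative scheduler. Everything below (YieldOnce, run_two, demo, the noop waker) is invented for illustration, built only on std; the payoff is that a value read before an await point can be stale after it:

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::{pin, Pin};
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing Waker; fine for a toy that polls in a loop.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

// The smallest possible suspension point: Pending once, then Ready.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 { Poll::Ready(()) } else { self.0 = true; Poll::Pending }
    }
}

// Poll two tasks round-robin until both finish: a tiny cooperative scheduler.
fn run_two(a: impl Future<Output = ()>, b: impl Future<Output = ()>) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut a = pin!(a);
    let mut b = pin!(b);
    let (mut done_a, mut done_b) = (false, false);
    while !done_a || !done_b {
        if !done_a { done_a = a.as_mut().poll(&mut cx).is_ready(); }
        if !done_b { done_b = b.as_mut().poll(&mut cx).is_ready(); }
    }
}

// Returns what the reader task saw before and after its await point.
fn demo() -> (i32, i32) {
    let counter = Rc::new(Cell::new(0));
    let observed = Rc::new(Cell::new((0, 0)));
    let (c1, c2, obs) = (counter.clone(), counter.clone(), observed.clone());
    let reader = async move {
        let before = c1.get();       // read shared state...
        YieldOnce(false).await;      // ...suspend; the writer runs here
        obs.set((before, c1.get())); // the pre-await read is now stale
    };
    let writer = async move { c2.set(1); };
    run_two(reader, writer);
    observed.get()
}
```

demo() returns (0, 1): the reader saw 0 before awaiting and 1 after, which is exactly the invariant-breaking interleaving that every await in real code invites.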
