Coroutines and RFC 243 (error handling with ?)


I’ve been reading about the upcoming coroutines feature in C++17, and I stumbled upon the following PDF presentation, which explains the implementation details of a fairly recent version of the proposal.

Page 52 of that presentation makes an interesting observation - the same mechanism (the await keyword) can be used in a more general setting (quoting from that page):

await: unwraps a value from a container M

return: puts a value back into a container M

Future: container of T, unwrapping strips temporal aspect

optional: container of T, unwrapping strips “not there aspect”

expected: container of T, unwrapping strips “or an error aspect”

std::future: unwrapping strips temporal and may have error aspects

The “expected” type reminded me of the Result type and made me think about RFC 243. The (long) debate over that RFC was whether the proposed syntax could be replaced by a more general “do” notation, as in Haskell; the problem with do-notation is that it interacts poorly with procedural code (control flow, early returns).

Could coroutine syntax be useful in this context for Rust? Could a coroutine replace the do-notation or the proposed catch block?


The general “design pattern” behind all those (and many more, e.g. std::unique_ptr<T>, T*, std::vector<T>) is called Monad.

The problem it solves (at least for the types you have “discovered”) is that of reusing code that works on T on types that wrap a T (e.g. future<T>, optional<T>, vector<T> are types that wrap T).

If you implement the Monad interface for any of those wrapper types, you can reuse all the code that works on T on those wrapper types for free.
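A minimal Rust sketch of this idea, using `and_then` (Rust’s spelling of monadic bind) on `Option`; the helper names `half` and `parse_then_half` are mine, for illustration:

```rust
// `and_then` is "bind": unwrap the value, apply the function, and let
// the function re-wrap the result, short-circuiting on None.

fn half(x: i32) -> Option<i32> {
    if x % 2 == 0 { Some(x / 2) } else { None }
}

fn parse_then_half(s: &str) -> Option<i32> {
    // Code written for i32 (`half`) is chained through the wrapper
    // without any manual unwrapping.
    s.parse::<i32>().ok().and_then(half)
}

fn main() {
    assert_eq!(parse_then_half("8"), Some(4));
    assert_eq!(parse_then_half("7"), None); // half fails
    assert_eq!(parse_then_half("x"), None); // parse fails
}
```

The same chaining style works unchanged for `Result::and_then`, which is exactly the reuse the monad interface promises.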

For this reason I think that Monad is the most useful design pattern ever. It basically tells you how to write reusable and composable code. Surprisingly to me, a lot of people do not find this insight useful at all when writing imperative code.

Anybody following C++ development knows that a lot of monads are being proposed. Each of them has its own different syntax/API/interface. Knowing how to use one doesn’t really transfer to using another: e.g. if you know how to use optional&lt;T&gt; or future&lt;T&gt;, you basically cannot transfer any of that knowledge to expected&lt;T, Exception&gt;, even though they are extremely similar, because their APIs use completely different function names for the monad operations.

This is why I am against the ? RFC. In current Rust it cannot be done much better, but that RFC essentially solves this problem for one particular instance of a monad (Result), and potentially makes it harder to solve the problem properly in the future. What if I want to use ? on Option or Rc or Arc? I am out of luck.

Hand-waving a lot, a plausible path to solving this problem is: 1) design/implement HKTs, 2) add a Monad trait and implement it for all monads, 3) add syntax sugar for the Monad trait.
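For illustration only, here is a hypothetical sketch of what step 2 might look like. This is not valid Rust today (it assumes some HKT-like syntax), and all the names are invented:

```rust
// NOT valid Rust: hypothetical HKT-style syntax, names invented.
// `Self<B>` stands for "the same container, but holding a B".
trait Monad<A> {
    fn unit(a: A) -> Self;               // wrap:  A -> M<A>
    fn bind<B, F>(self, f: F) -> Self<B> // chain: M<A> -> (A -> M<B>) -> M<B>
    where
        F: FnOnce(A) -> Self<B>;
}
```

Step 3 would then let do-notation-like sugar (or `?`) desugar to `bind` for any implementor, not just Result.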


Higher kinded types are regularly mentioned as a must-have feature for the future in multiple cases: better abstractions/refactoring, monadic types, collections/iterators, concurrency primitives, etc. Generally they come up in response to a thread about a more specific topic: “HKTs are necessary for what you want, so let’s wait until Rust has them.” The problem is… unless I’m mistaken, I have not found any design/proposal for this (arguably complex but long-awaited) feature. Isn’t it becoming a really high priority on the Rust agenda?


Perhaps I need to clarify a little:

I did know about the Monad abstraction before “discovering” that page. And I totally agree with what you said about wanting HKTs and do-notation in Rust instead of the suggested ? operator.

However, the problem with the Haskell-style do-notation is that it doesn’t mesh well with imperative code.

The novelty of async/await and coroutines, AFAIU, is that they provide a different kind of do-notation, one that works better for imperative code.

The main difference is that in Haskell the do-notation is syntax sugar for wrapping a value (sort of), whereas the await operator provides unwrapping.

Let’s look at a concrete example:

fn foo() -> Result<T, E> {
    let x = try!(f1());
    let y = try!(f2());
    Ok(goo(x, y))
}

becomes a co-routine such as:

fn foo() -> Result<T, E> {
    let x = await f1();
    let y = await f2();
    return goo(x, y);
}
This looks like the same semantics: the await introduces a suspension point. This means that if the result of, say, f1() isn’t “ready”, control returns to the caller in our case (like a more principled throw statement), similar to how try!() generates an early return in the error case. In contrast, the Haskell do-notation wraps the rest of the computation in a closure. So await goes up the call stack, whereas do-notation goes down the call stack.
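Roughly, `try!` expands to a match with an early return. A sketch (ignoring the `From::from` error conversion the real macro also performs; `f1`, `f2` and `goo` are stand-ins):

```rust
// The point is the early `return` in the Err arm: on error, control
// immediately goes back up the call stack, just like the await example.
fn f1() -> Result<i32, String> { Ok(2) }
fn f2() -> Result<i32, String> { Err("boom".to_string()) }
fn goo(x: i32, y: i32) -> i32 { x + y }

fn foo() -> Result<i32, String> {
    let x = match f1() {
        Ok(v) => v,
        Err(e) => return Err(e), // early return to the caller
    };
    let y = match f2() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(goo(x, y))
}

fn main() {
    assert_eq!(foo(), Err("boom".to_string()));
}
```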

An important thing to note about the proposed C++ design is that the compiler lowers the coroutine to a protocol defined by a known “trait” (concept), and library writers can implement this trait for their types in order to plug into the mechanism. Honestly, this is a very Rustic approach; think of lang items and the Iterator trait, for instance.
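Rust already does this kind of lowering: a `for` loop is compiler sugar over the `Iterator` trait, roughly like this hand-desugared version (the function name is mine):

```rust
// What `for x in v { sum += x; }` lowers to, approximately: calls on
// the Iterator trait, just as the C++ design lowers coroutines to a concept.
fn sum_desugared(v: Vec<i32>) -> i32 {
    let mut it = v.into_iter(); // `for` starts by taking an iterator
    let mut sum = 0;
    while let Some(x) = it.next() {
        sum += x;
    }
    sum
}

fn main() {
    assert_eq!(sum_desugared(vec![1, 2, 3]), 6);
}
```

Any type that implements `Iterator` plugs into `for` loops; the coroutine proposal gives library types the same kind of hook for `await`.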


One thing worth mentioning is that the C++ implementations work by generating a state machine, whose state must be stored somewhere (e.g. on the heap). Where it is stored can be configured by implementing a global operator new for a type that models the Resumable concept. Those against the co_await/co_yield/co_return proposal argue that it is not a zero-cost abstraction, and that the right way of doing this would be to introduce stackless coroutines at the language level and then implement stackful coroutines as a library, so that how and where they are allocated feels “less hacky”.


Real coroutines would have their own stack. The async/await-style coroutines use the heap to simulate a stack. But I’m learning that this can be a lot more efficient when there are many outstanding execution points (i.e. many waiting coroutines), because a real coroutine stack keeps pages in memory that were touched by temporary stack allocations. The state held across an await can be much more limited and optimised, e.g. to just those variables which are ‘live’ at that point in the code.
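A hand-rolled sketch of the kind of state machine such a lowering might produce; the names (`Foo`, `resume`) are invented, and the key point is that only the variables live across the suspension point are stored, not a whole stack:

```rust
// A two-step coroutine written by hand as an enum-based state machine.
enum Foo {
    Start,
    AwaitingF2 { x: i32 }, // only `x` is live across the suspend point
    Done,
}

impl Foo {
    // Each call to `resume` advances the machine one step; `None`
    // means "suspended, waiting", `Some` carries the final result.
    fn resume(&mut self, input: i32) -> Option<i32> {
        match *self {
            Foo::Start => {
                *self = Foo::AwaitingF2 { x: input };
                None // suspend: wait for the second value
            }
            Foo::AwaitingF2 { x } => {
                *self = Foo::Done;
                Some(x + input) // combine the saved and new values
            }
            Foo::Done => None,
        }
    }
}

fn main() {
    let mut co = Foo::Start;
    assert_eq!(co.resume(2), None);    // first resume: save x, suspend
    assert_eq!(co.resume(3), Some(5)); // second resume: finish
}
```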

Looking at Pony, which is an actor language, one huge advantage is that it has only one machine stack per core. So there are no pages hanging around from deep call-chains on old coroutine stacks, because the stack space is forever being reused. Real coroutines are not possible in this runtime for language-safety reasons (provably, it cannot be made to work). But it seems that async/await could work (although they don’t want to add it), as it is just a more convenient way of writing a state machine (for some subset of all state machines), and that’s all an actor language is: a huge collection of state machines.

I wonder whether the safety considerations that exclude real (stack-based) coroutines from Pony translate to Rust as well? And how would the borrowing rules affect what it is possible to write as translated async/await-style code?


The claim about it being a zero-cost abstraction is more nuanced than that, AFAIU. First off, the proposal is for stackless coroutines, so the amount of allocation is much smaller (just the current frame’s local state); and for async I/O, for example, there is some OS context/buffer which is allocated anyway and can be co-opted for this mechanism.

I don’t understand the part about “language level”: the proposal is for language-supported stackless coroutines.


Real coroutines would have their own stack.

That’s confusing interface with implementation. Coroutines are a generalization of the abstract concept of a routine (a function): they add two additional actions, “suspend” and “resume”. Stackful coroutines are one implementation of this interface, via fibers.


Real coroutines run normal, unmodified code. They don’t need code annotations for pause points, i.e. they don’t need to be marked with async/await (C#) or ‘async’ or ‘future’ or whatever else is used in other languages. So the interface is different too, not just the implementation. Lua has real coroutines, although switching between coroutines is slightly limited (it is yield/resume, not arbitrary jumps between stacks). However, I have been persuaded that the async/await style, despite appearing clunkier, is the best way forward. It is explicit, for a start, and it also makes it easier to reason about the resulting restrictions on the surrounding code (in terms of borrow checking, for example).


I don’t know where you got that definition from, since coroutines were invented at a time when programmers wrote assembly language with jumps and other low-level techniques that do not apply to any of the examples cited (C#, Lua, etc.).

see -

According to Donald Knuth, the term coroutine was coined by Melvin Conway in 1958, after he applied it to construction of an assembly program.[1] The first published explanation of the coroutine appeared later, in 1963.[2]

But anyway, I don’t want to continue this debate over definitions. We already agree that stackless coroutines are better suited to Rust. Let’s return to the topic at hand: whether this technique can be employed for error handling, and whether it obviates the need for a “?” operator.


Yes, I learned coroutine coding tricks from writing Z80 assembler as a child, and what I refer to as a “real coroutine” (with its own stack) is what Knuth is talking about. Lua does what you’d do in assembler, i.e. switch between the tops of two stacks. It doesn’t do any translation to state machines or anything like that.

You’re missing a fundamental point if you don’t understand the difference. If I have function A, which calls B, which calls C, which yields: with “real coroutines”, A, B and C are all normal functions. But in the async/await model, A, B and C all have to be async functions, and the call points from A to B and from B to C have to be labelled with ‘await’. All three functions have to be translated to state machines.

The ‘awaits’ spread through your code like a virus, i.e. you need to design for them from the start, knowing the consequences. For an actor-model language like Pony, you’d be coming from the other direction: from everything asynchronous, to async/await giving an illusion of synchronous execution. (Although it seems they don’t want to add async/await.)

The open question which I am not qualified to answer is how this would interact with the borrow checker. Well? Or badly?

EDIT: I should probably have been more diplomatic. Others have been kind to me here when I’ve come up with wild ideas, or not understood something fully. Please go ahead and explain how you see this working in place of the ‘?’ operator. I haven’t followed those discussions, so can’t comment on it directly.


I can’t resist posting this beautiful Haskell snippet :slight_smile:

-> % ghci
GHCi, version 7.10.3:  :? for help
Prelude> length (92, 62)
1

It’s not exactly about monads, but it shows how HKT based abstractions can work in practice.

(The source of the snippet is
