Async/Await - The challenges besides syntax - Interfaces

Inspired by all the recent discussions about async/await and the experience reports survey, I decided to write up some of my impressions and experiences about the current challenges with async/await (apart from syntax).

Most of the content might already be known to people who have been closely involved with the async/await development effort. However, I decided it might still be worthwhile to write things up for people who are interested in async/await but haven’t yet tried it out in person.

This is a first writeup about the state of traits/interfaces, which is probably a topic that most people will run into very quickly when exploring async/await.


Great writeup! Thanks for putting that together.

Along the same lines, it is not very easy at present to actually parallelize code using await. Consider the following synchronous function:

```rust
fn compute_foo() -> Foo {
  let a = a();
  let b = b();
  if b.can_foo() {
    return Foo::new(a, c(b));
  }
  // ... (remainder of the function elided)
}
```

Let’s imagine that a() blocks for 20ms and b() and c() each take 10ms. The way the function is written, it would take 40ms to run.

Using async/await we could convert a(), b(), c() to be async. Then in theory we should be able to run this function in 20ms.

But to do that, where do you put the awaits? If you naively write a().await, b().await, and c(b).await, it won’t improve the situation at all and will still take 40ms.

So one might think: OK, let’s just delay calling await until the value is needed. This is how you would solve this problem in most other languages. However, because Rust futures are lazy (no work starts until the future is polled), this will also cause the function to take 40ms. Worse, there will be nothing to indicate to the user that they are doing anything wrong; their code will just silently be slow.

Looking in the futures crate, there are a bunch of functions. The one that jumps out as being helpful is join. It’s possible to insert a join(a, b).await right before the if. This would allow a() and b() to run in parallel, but it’s bad for a few reasons. First, it’s not remotely obvious what this is for; I could see someone deleting it, thinking they were optimizing the function. Second, it doesn’t actually express the data dependency of the function; instead it artificially requires a to complete before the if. Finally, it doesn’t actually solve the problem, because it only lowers the function’s runtime to 30ms, when it ought to be able to run in 20ms.


My understanding is that in every language with async/await, whenever there’s a clear opportunity for concurrency and you can tell there’s no risk of it creating race conditions, you simply don’t use the async/await sugar and fall back on whatever it’s sugar for, which in Rust’s case means Future combinators. I think your specific example would only need an and_then and a join (after the if, not before) to be optimal, and you could write it as Future::join to make the asyncness more obvious (although this really is need-to-know stuff in every language I know of with async/await sugar).


If I am interpreting that correctly, it would not be optimal. Can you write it out?

It is different in Rust in that a() is not going to start running just because it was invoked. So while in C#, Python, or JavaScript there is a very clear way to parallelize things (just delay the await), that doesn’t work at all in Rust, and there isn’t a clear alternative that does in this case.


In C# the tasks can, but don’t have to, run in the background as soon as they are created, depending on the settings of the task scheduler. Awaiting them passes the continuation to the local equivalent of and_then. Awaiting multiple tasks is done via a helper method on Task.

Rust could emulate this behavior with something like Tokio’s spawn function, which would use a pool of background threads.


Hm, yeah, “after the if” wasn’t the right way to put it. I guess the simplest way would be to “just delay the await” as you said; create all the futures then join them all at once so the final step can be purely sync:

```rust
fn compute_foo() -> impl Future<Item = Foo, Error = ???> {
  let futA = a();
  // Note: futB is consumed by and_then below, so to also pass it to
  // join3 it would need to be .shared() (or the value re-threaded).
  let futB = b();
  let futC = futB.and_then(|valB| c(valB));
  join3(futA, futB, futC).and_then(|(valA, valB, valC)| {
    if valB.can_foo() {
      return Ok(Foo::new(valA, valC));
    }
    // ... (else branch elided)
  })
}
```

You can write

```rust
async fn compute_foo() -> Foo {
  let a_fut = a();
  let c_fut = async {
      let b = b().await;
      if b.can_foo() {
          c(b).await
      } else {
          // this branch was missing before
          ...
      }
  };

  let (a, c) = join!(a_fut, c_fut);
  Foo::new(a, c)
}
```

This makes the data dependencies and the parallelism fairly obvious.

However, I agree that the fact that futures don’t start running unless polled takes a bit to get used to. I had written things like the following before, under the assumption that the operations were running in parallel (which is the case in other languages, but not in Rust, where an explicit join! or select! is required):

```rust
let a_fut = a();
let b_fut = b();
// Assume both ops are running here. Wait for both
let a_res = a_fut.await;
let b_res = b_fut.await;
```

But I consider this mainly a “getting used to it” thing.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.