Flattening nested futures

Continuing the discussion from A final proposal for await syntax:

I wonder what the thinking is around futures that return futures. How will they be flattened? So far, I believe you’ll have to do computation().await.await if computation() returns a future that will resolve to a future itself.

One really nice feature when working with promises in JavaScript or “deferreds” in the Twisted library for Python, is that you can freely return a nested promise from your callback. This allows you to do things like:

function get_user_data(username) {
    if (local_cache.has(username))
        return local_cache.get(username);  // returns a User object
    return db_user_lookup(username);       // returns a Promise<User>
}

People can then use this function on another promise:

var username = prompt_for_username()     // Promise<string>
var user = username.then(get_user_data)  // Promise<User> either way

In short, you cannot tell the difference between

var p1 = Promise.resolve(1);  // Promise containing 1
var p2 = p1.then(value => Promise.resolve(value + 1));
p2.then(value => console.log(value));

and

var p1 = Promise.resolve(1);
var p2 = p1.then(value => value + 1);
p2.then(value => console.log(value));

Given this feature, await (await something) is basically unnecessary. Has it been discussed whether we could do something similar for Rust? The ability to freely introduce nested futures into the code is very powerful, since it aids refactoring.

In more concrete Rust terms, I guess what I’m asking is whether an async fn foo() -> i32 could also return a Future<Item = i32>. Could there be a blanket implementation so that Future<Item = T> is implemented for all T?
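For reference, here is what the lack of flattening looks like in current Rust: an async fn whose body returns another future yields a future of a future, and the caller must await once per layer. The following is a minimal sketch; the tiny block_on executor and the inner/outer function names are made up purely for illustration (real code would use a runtime like tokio or async-std):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy no-op waker, just enough to drive a future to completion by spinning.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Toy single-future executor: polls in a loop until the future is ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

async fn inner() -> i32 { 42 }

// An async fn that resolves to *another* future: its Output is itself a Future.
async fn outer() -> impl Future<Output = i32> {
    inner()
}

fn main() {
    // No implicit flattening: one .await per layer of nesting.
    let v = block_on(async { outer().await.await });
    assert_eq!(v, 42);
    println!("{v}");
}
```

The double .await here is exactly the computation().await.await case from the opening post; the type system sees two distinct futures, one wrapped in the other.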


In JS, this aspect of thenables (I believe that’s what they’re called?) has totally confused me more than once, and I’m not sure it would be a good idea for Rust. I also have no clue how you would implement this feature in a strongly typed language: make .await somehow magical?


Trying to write get_user_data that way in Rust has issues even if a blanket T: Future<Output = T> impl were possible:

fn get_user_data(username: &str) -> impl Future<Output = User> + '_ {
    match local_cache.get(username) {
        Some(user) => user.clone(),       // : User
        None => db_user_lookup(username), // : impl Future<Output = User> + '_
    }
}

This attempts to return two different types depending on runtime state, so you must use an adaptor (like Either) to merge them into a single type:

fn get_user_data(username: &str) -> impl Future<Output = User> + '_ {
    match local_cache.get(username) {
        Some(user) => Either::A(user.clone()),       // : Either<User, impl Future<Output = User> + '_>
        None => Either::B(db_user_lookup(username)), // : Either<User, impl Future<Output = User> + '_>
    }
}

At that point, adding an “immediately ready” future wrapper around the cached path isn’t much of a readability cost:

fn get_user_data(username: &str) -> impl Future<Output = User> + '_ {
    match local_cache.get(username) {
        Some(user) => Either::A(ready(user.clone())), // : Either<Ready<User>, impl Future<Output = User> + '_>
        None => Either::B(db_user_lookup(username)),  // : Either<Ready<User>, impl Future<Output = User> + '_>
    }
}

And of course, just writing it as an async fn in the first place makes this all trivial:

async fn get_user_data(username: &str) -> User {
    match local_cache.get(username) {
        Some(user) => user.clone(),              // : User
        None => db_user_lookup(username).await,  // : User
    }
}

This, I believe, is impossible in a statically typed language like Rust without an unintuitively magical await operator. Nesting a future changes the types, and if you choose to return an impl Future<Output: Future> for some reason (instead of just .awaiting the inner future internally), then that’s an API choice that should have some significance.
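For the curious, the Either adaptor used above can itself be sketched as a Future impl. This is a simplified sketch under the assumption that both futures are Unpin (which sidesteps pin projection); the real Either in the futures crate handles pinning properly:

```rust
use std::future::{ready, Future, Ready};
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Two different concrete future types with the same Output, merged into one.
enum Either<A, B> {
    A(A),
    B(B),
}

impl<A, B, T> Future for Either<A, B>
where
    A: Future<Output = T> + Unpin,
    B: Future<Output = T> + Unpin,
{
    type Output = T;

    // Delegate polling to whichever variant this value holds.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        match self.get_mut() {
            Either::A(a) => Pin::new(a).poll(cx),
            Either::B(b) => Pin::new(b).poll(cx),
        }
    }
}

// Toy no-op waker so we can poll by hand, without an executor crate.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // The cache-hit path: an immediately-ready future in the A variant.
    // A real db_user_lookup future would sit in the B variant.
    let fut: Either<Ready<i32>, Ready<i32>> = Either::A(ready(5));
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match pin!(fut).poll(&mut cx) {
        Poll::Ready(v) => {
            assert_eq!(v, 5);
            println!("{v}");
        }
        Poll::Pending => unreachable!("ready() resolves on first poll"),
    }
}
```

The key point: both branches of the match now produce the same concrete type, Either<A, B>, which is what the single impl Future return type requires.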


Might as well be technically correct about all this.

The Promises/A+ spec is the current standard for how promises (the closest analogue to futures) are supposed to work in JavaScript. Within this spec, a “thenable” is any object that defines a “then” method, even if it fails to meet the full spec criteria for being a “promise”. They make this distinction because:

This treatment of thenables allows promise implementations to interoperate, as long as they expose a Promises/A+-compliant then method. It also allows Promises/A+ implementations to “assimilate” nonconformant implementations with reasonable then methods.

Also, a big part of the reason why JS promises do automagic flattening is simply that because everything is dynamically typed, it’s much easier to reason about your code that way. For example, Promise.resolve(x) will produce a promise no matter what x is; you don’t have to worry about checking whether x is a promise or a pre-A+ thenable or just an ordinary number or string.

None of this logic directly applies to Rust, since everything in Rust is statically typed by default, and there’s no swarm of pre-existing future implementations that we want to interoperate with the standard futures without making any code changes (futures 0.1 can be and has been changed to aid migration).

For Rust, I do think it would make more sense to NOT do any automagic flattening. Since everything needs a concrete type, and part of a Future’s type is what it resolves to, we couldn’t make it truly automagical in Rust anyway.


Many things already work this way, within limits of Rust’s type system, thanks to the IntoFuture trait.

However, returning two different types from a function is a different problem. This comes up in most uses of -> impl Trait, and requires workarounds like Either. If it’s going to be solved, it should be a solution that isn’t special-cased for Futures.
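To illustrate the IntoFuture point: .await first calls IntoFuture::into_future on its operand, so a type can opt into being awaitable without being a Future itself, which is the statically-typed cousin of the “thenable” idea. A minimal sketch, where the Query type and its doubling logic are hypothetical, and the poll_once helper stands in for what .await does:

```rust
use std::future::{ready, Future, IntoFuture, Ready};
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical request type: not a Future itself, but awaitable via IntoFuture.
struct Query {
    id: u32,
}

impl IntoFuture for Query {
    type Output = u32;
    type IntoFuture = Ready<u32>;

    // `some_query.await` first calls this, then awaits the returned future.
    fn into_future(self) -> Ready<u32> {
        ready(self.id * 2)
    }
}

// Poll a Ready future once to pull out its value (toy stand-in for .await).
fn poll_once(fut: Ready<u32>) -> u32 {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match pin!(fut).poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("Ready never pends"),
    }
}

fn main() {
    // Roughly what `Query { id: 21 }.await` desugars to:
    let v = poll_once(Query { id: 21 }.into_future());
    assert_eq!(v, 42);
    println!("{v}");
}
```

Note that this is a one-shot conversion chosen by the type's author, not recursive automatic flattening: the types still record exactly one layer.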


Thanks for exploring this, it was very illustrative.

Eagerly applying .await seems like a problem, though?

I’ve always tried to postpone the application of await until as late as possible — otherwise you’re just blocking the application unnecessarily.

Here I’m assuming that you care about the time it takes to run the current code to completion — if you instead just want to keep the cores busy and you’re processing lots of requests in parallel, then blocking on many tiny awaits should be fine.

Much of the discussion so far has revolved around how .await makes it easy to block until results are ready so we can do error handling and so on. There has been little or no attention paid to staying in the “async world”.

An await in an async function doesn’t actually occur until the function result is awaited (or polled in a non-async context)… right? Thus the await shouldn’t matter. Please correct me if wrong - I very well might be.

That’s not my understanding, but I’m also not 100% sure about this :smiley:

If the .await doesn’t actually trigger anything, then it feels like the compiler could insert it automatically every time. Since the suggestion is to make await explicit, I’m pretty sure they have an effect.

To perhaps make things clearer, I’m saying that this function:

let google = fetch("https://google.com/").await;
let yahoo = fetch("https://yahoo.com/").await;
let bing = fetch("https://bing.com/").await;

will be slower than this function:

let (google, yahoo, bing) = join3(fetch("https://google.com/"),
                                  fetch("https://yahoo.com/"),
                                  fetch("https://bing.com/")).await;

I therefore believe we’ll see a fair amount of the latter, and less of the former. If so, I don’t like “hiding” the .await at the end. I would rather prefer:

let (google, yahoo, bing) = await join3(fetch("https://google.com/"),
                                        fetch("https://yahoo.com/"),
                                        fetch("https://bing.com/"));

so that the await point is clearly marked at the beginning of the statement.
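Incidentally, the reason the join3 version is faster is that a joined future polls every unfinished sub-future each time it is itself polled, so all three fetches make progress concurrently. A hand-rolled two-future version sketches the idea; this assumes Unpin futures and uses a toy spinning executor, whereas the real join combinators in the futures crate handle pinning and arbitrary arity:

```rust
use std::future::{ready, Future};
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Joins two futures into one that resolves to a tuple of both outputs.
struct Join2<A: Future, B: Future> {
    a: A,
    b: B,
    a_out: Option<A::Output>,
    b_out: Option<B::Output>,
}

impl<A, B> Future for Join2<A, B>
where
    A: Future + Unpin,
    B: Future + Unpin,
    A::Output: Unpin,
    B::Output: Unpin,
{
    type Output = (A::Output, B::Output);

    // Every poll drives *both* unfinished sub-futures, so they progress
    // concurrently instead of one waiting for the other to finish.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        if this.a_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.a).poll(cx) {
                this.a_out = Some(v);
            }
        }
        if this.b_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.b).poll(cx) {
                this.b_out = Some(v);
            }
        }
        match (this.a_out.take(), this.b_out.take()) {
            (Some(a), Some(b)) => Poll::Ready((a, b)),
            (a, b) => {
                this.a_out = a;
                this.b_out = b;
                Poll::Pending
            }
        }
    }
}

// Toy no-op waker and spinning executor, only for illustration.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // Immediately-ready stand-ins for the fetches.
    let joined = Join2 { a: ready(1), b: ready(2), a_out: None, b_out: None };
    let (a, b) = block_on(joined);
    assert_eq!((a, b), (1, 2));
    println!("{a} {b}");
}
```

By contrast, three sequential .awaits only ever poll one fetch at a time, so the total latency is the sum rather than the maximum of the three.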


I see what you mean now. In the context of the earlier example, it didn’t really make sense, as there was only one await to be done.

But when there are three awaits, what you’re saying does make sense.

So I guess in summary - if your async code has to be sequential (e.g. the next future requires data from the previous one), then postfix await is nice. When the async code can be parallelized, then postfix await can be an easy performance footgun.

Sure, there’s a difference between running multiple async operations in sequence versus in parallel, but that’s unrelated to flattening or your initial example. Applying .await inside get_user_data doesn’t stop the caller of get_user_data from running that operation in parallel with other operations; get_user_data just returns a future representing the composition of the synchronous cache lookup followed by a database lookup if necessary, and calling it doesn’t do any real work. It also doesn’t have any opportunity for internal parallelism, since there’s only one async operation. (If the cache lookup were actually async, then it could race the cache lookup and the database lookup so that a cache miss is slightly faster while a cache hit is slightly more expensive, but that would be a very different function.)

(Also, in my personal experience with async/await in C#/JS/Python, the times when you can actually parallelise are relatively few; at least for web servers/web UIs, the majority of async functions are straight sequential code.)


I don’t think postfix or prefix is relevant here. let x = await foo(); let y = await bar(); is just as bad (or good) as let x = foo().await; let y = bar().await;

The real footgun is that in Rust it’s bad to write let xTask = foo(); let yTask = bar(); let x = xTask.await; let y = yTask.await; (because that doesn’t poll bar until foo is done), when I’m used to things like that working fine in the C# model. (But it works that way for good reason, and that reasoning is independent of syntax. And thankfully join_all is nicer than the equivalent in C#.)


In Rust this is only the case if you spawn the future (on a multithreaded executor if CPU parallelism is needed) since otherwise it won’t do anything until awaited.
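This laziness can be observed directly: constructing a future from an async fn runs none of its body; the body only executes when the future is polled. A small sketch, using a hand-rolled no-op waker just to drive one poll manually (the work function and its flag are made up for the demonstration):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

static RAN: AtomicBool = AtomicBool::new(false);

// No .await inside, so the whole body runs on the first poll -- but not before.
async fn work() -> i32 {
    RAN.store(true, Ordering::SeqCst);
    7
}

// Toy no-op waker so we can poll by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let fut = work();
    // Constructing the future ran none of the body:
    assert!(!RAN.load(Ordering::SeqCst));

    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match pin!(fut).poll(&mut cx) {
        Poll::Ready(v) => {
            assert!(RAN.load(Ordering::SeqCst)); // body ran during the poll
            assert_eq!(v, 7);
            println!("{v}");
        }
        Poll::Pending => unreachable!("no await points, so ready on first poll"),
    }
}
```

This is the "cold" future model that distinguishes Rust from C#'s "hot" tasks, which start running as soon as the method is called.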

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.