Is it necessary to refer to C++'s unified executor proposal (P2300) to improve async in Rust?

The P2300 proposal looks like a killer feature for future C++: it provides a unified view of async computation whether or not coroutines are opted in.

Similarly, both Rust and C++ chose lazy futures to express async operations. However, since the design principles of coroutines in Rust and C++ differ, there are many differences between the two async models.
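
To make "lazy" concrete, here is a minimal Rust sketch (assuming the futures crate for block_on; work is a placeholder): constructing the future does nothing until an executor polls it.

async fn work() {
    println!("running");
}

fn main() {
    // Nothing has run yet: `fut` is just an inert state machine.
    let fut = work();
    // Only when an executor polls the future does the body execute.
    futures::executor::block_on(fut);
}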

In the P2300 proposal, C++ gives a unified view of the sender/receiver and coroutine-based async models. And since coroutine scheduling is handled by await_suspend in C++, it is still quite easy to switch execution context inside coroutines. Without coroutines:

// `io_sched`, `work_sched`, `snd_read`, and `buf` are assumed to be defined elsewhere
auto some_work() {
    auto snd =
        // start by reading data on the I/O thread
        ex::on(io_sched, std::move(snd_read))
        // transfer the processing to the worker thread pool
        | ex::transfer(work_sched)
        // process the incoming data (on worker threads)
        | ex::then([buf](int read_len) { process_read_data(buf, read_len); });
    return snd;
}
// execute the whole flow asynchronously
auto snd = some_work();
ex::start_detached(std::move(snd));

With coroutines:

Task<> some_work() {
    // `io_sched`, `work_sched`, and `buf` are assumed to be defined elsewhere
    // start by reading data on the I/O thread
    co_await io_sched;
    auto read_len = co_await read_data(&buf);
    // do the processing on the worker thread pool
    co_await work_sched;
    co_await process_read_data(buf, read_len);
}

// a coroutine is also a sender in P2300, so it can be consumed by sender/receiver algorithms
auto task = some_work();
ex::start_detached(std::move(task));

In Rust, async computation is driven by coroutines with async/await, and a top-level executor is usually needed to push the async execution forward. So in tokio and other community async runtimes, there is typically a global executor registered to run the coroutines to completion. According to this video, it's hard to do this kind of scheduling (context switching) with await in Rust.
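
For illustration, here is a minimal sketch of that "global executor" pattern, assuming tokio; read_data is a placeholder for real async I/O:

async fn read_data() -> usize {
    0 // real code would await async I/O here and return the number of bytes read
}

#[tokio::main]
async fn main() {
    // tokio::spawn hands the coroutine to tokio's global executor, which
    // polls it to completion on its thread pool.
    let read_len = tokio::spawn(read_data()).await.unwrap();
    println!("read {read_len} bytes");
}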

Here, I have no idea whether the current Rust async design can provide the kind of flexibility future C++ will offer, or whether we already have a better solution or design for this. I think the ability to switch context easily (maybe from CPU to GPU or something else) is necessary to cover HPC with async programming, instead of just I/O-intensive tasks.

So my question is: is it necessary to take the evolution of async computation in future C++ into account when improving async Rust further?

Thanks.


Rust's async supports this:

async fn some_work() {
    // `buf`, `read_data`, and `process_read_data` are assumed to be in scope
    read_data(&mut buf).await;
    // spawn_blocking takes a synchronous closure (not an async block) and runs
    // it on the blocking thread pool
    spawn_blocking(move || process_read_data(buf)).await;
}

or if you want to be super precise about executors:

async fn some_work(handle: tokio::runtime::Handle) {
    let inner = handle.clone();
    handle.spawn(async move {
        read_data(&mut buf).await;
        // spawn_blocking takes a synchronous closure, not an async block
        inner.spawn_blocking(move || process_read_data(buf)).await;
    }).await.unwrap();
}
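
For context, a hypothetical call site for the handle-based version (assuming an existing tokio Runtime named rt) could look like:

let rt = tokio::runtime::Runtime::new().unwrap();
// drive some_work to completion on the runtime, passing it a handle to spawn onto
rt.block_on(some_work(rt.handle().clone()));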

Futures that haven't been started yet can be freely moved to another executor, and any Future can .await another Future running on any other executor. You can use this to distribute work across various executors. This already works for e.g. combining tokio's I/O with GTK's event loop, or moving work from Actix's server-per-core executor to a global thread pool.
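
As a rough sketch of that cross-executor awaiting (assuming the tokio and futures crates; heavy_computation is a placeholder):

use tokio::runtime::Runtime;

fn heavy_computation() -> u64 {
    42 // placeholder for CPU-heavy work
}

fn main() {
    // A multi-threaded tokio runtime acting as the worker pool.
    let rt = Runtime::new().unwrap();
    let handle = rt.handle().clone();

    // Drive the outer future on a completely different executor
    // (the simple single-threaded one from the futures crate).
    futures::executor::block_on(async move {
        // The JoinHandle is itself a Future, so this executor can await
        // work that actually runs on tokio's blocking thread pool.
        let result = handle.spawn_blocking(heavy_computation).await.unwrap();
        println!("result = {result}");
    });
}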

