Hey all, this post looks at async overloading in Swift, how we could potentially translate it to Rust, and makes the case that investigating its feasibility should be a priority for the Async Foundations WG. Figured folks here might be interested in it (:
I think overloading is a terrible design pattern even over types, let alone along the axis of effects. If anything, Rust should move away from the direction of overloading, not towards adding more dimensions of overloading. (Overloadable operators are already not very well-discoverable, but at least they depend unambiguously on the receiver type.)
In addition, this bit sounds particularly nasty:
This is about as bad as Go's hard error on unused variables. It should (have) be(en) a warning. Not only for reasons of prototyping/refactoring, but also for the rare but possible case when blocking is really the only correct option.
I don't see a problem to be solved here. If you force the user to type out the async anyway, there's no value in adding a whole new language feature just so s/he can avoid changing io:: to asyncio::. It doesn't alleviate any burden on the author of the API, either, since both versions of the function must still be implemented.
And anyway, there's a much neater design pattern allowing functions to be called either synchronously or asynchronously, and which works today: the return type can implement Future as well as any other, synchronously-usable trait (or even lazy-load synchronous results through Deref or something similar). So it can be awaited or not, as desired.
I'm not sure what you mean by this. If you don't .await a future, it doesn't run. some_async_func() doesn't run synchronously; it doesn't run at all. And you can't implement a custom future that runs either synchronously or asynchronously; they're two completely different execution models.
You'd probably end up writing something like this?

some_async_fn().await; // async context
some_async_fn().get(); // sync context
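In current Rust, the closest spelling of that hypothetical .get() is probably an explicit executor call. A minimal sketch, assuming the future doesn't depend on a particular runtime's reactor:

// Drives the future to completion on the current thread.
futures::executor::block_on(some_async_fn()); // sync context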
I am concerned about context-sensitivity. For example:
let x = vec![1, 2, 3].iter().map(|n| overloaded_function(n));
futures::future::try_join_all(x).await?;
This would be calling overloaded_function() in a "sync" closure, but the code expects to have an iterator of Futures for an async try_join_all.

The simple context-based rule doesn't work here: the map closure would produce a sync result, and then the user would get a type error in try_join_all.
The context rule is also a backwards-compatibility hazard:
async fn foo(path: &Path) {
    let _ = std::fs::remove_file(path);
}
This code works today and removes a file. With a remove_file overload and context-sensitive application of it, it will change behavior to doing nothing, without warning.
If you mean something like block_on, then yes, that works. However, it doesn't always work well. The idea behind async overloading is for when you need two distinct versions of your method. Say async_read depends on a reactor thread running, but read is blocking. You can't simply block on async_read for this to work; you'd need to spawn that reactor thread and then block on the future within that context. This is basically how things work now outside of std: to use an async library, you need to pull in tokio/smol/etc., even if you aren't using it asynchronously. Overloading doesn't solve the sync/async divide, but it does make for a better end-user experience.
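To make that concrete, here's a rough sketch of what "just block on it" costs today, using tokio::fs::read as a stand-in for a runtime-dependent async_read (assumes tokio's rt-multi-thread and fs features; the wrapper name is made up):

use std::path::Path;

fn read_sync(path: &Path) -> std::io::Result<Vec<u8>> {
    // The async API needs a Tokio runtime context to run at all, so a
    // "blocking" wrapper has to stand up the runtime (reactor + worker
    // threads) before it can block on the future.
    let rt = tokio::runtime::Runtime::new()?;
    rt.block_on(tokio::fs::read(path))
}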
I think the point is, given fn talk_to_server() -> TalkToServerFuture, you could provide both talk_to_server().await (async) and talk_to_server().idle() (sync), and both could use the proper overloaded algorithm, the same as if the language overloaded it.
The only time you'd have an issue is if you started the async operation and then tried to switch to a sync wait.
In fact, if you offer two versions with the same signature, it's really not all that much work to implement something close-ish today:
fn work_sync(a: A, b: B, c: C);
async fn work_async(a: A, b: B, c: C);
// becomes
struct Work { a: A, b: B, c: C }
impl Work {
    fn go_sync(self);
    async fn go_async(self);
}
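And caller-side it reads reasonably well. A quick usage sketch, assuming the Work above is given real method bodies (the caller names are placeholders):

fn caller_sync(a: A, b: B, c: C) {
    // blocking path: build the request object, then run it synchronously
    Work { a, b, c }.go_sync();
}

async fn caller_async(a: A, b: B, c: C) {
    // async path: same construction, awaited instead
    Work { a, b, c }.go_async().await;
}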
In a future with stable Fn traits, I can even see some "clever" library author doing both impl FnOnce and impl Future for a return type (hopefully not making it horribly unsound if you start the future), and having it so that you call work(a, b, c)() for sync and work(a, b, c).await for async. Please DON'T, but that's my free cursed idea for the day.
Ah, I see. So something like this:
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

fn foo() -> FooFuture<impl Future<Output = ()>> {
    FooFuture {
        f: Box::pin(async move {
            // the async version
        }),
    }
}

struct FooFuture<F> {
    // Box::pin so poll below doesn't need unsafe pin projection
    f: Pin<Box<F>>,
}

impl<F> FooFuture<F> {
    fn sync(self) {
        // the sync version
    }
}

impl<F: Future<Output = ()>> Future for FooFuture<F> {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        self.f.as_mut().poll(cx)
    }
}
At this point, it's much more involved and not really any better than foo | foo_async.
I agree that clever tricks to allow both foo.sync() and foo.await in just stable Rust are much more trouble than they're worth for the reduction in namespace.

And even if it is possible for a library designer with more cleverness than sense, it doesn't solve the core issue discussed by the OP: that of basically doubling the std API surface.
And even assuming that the building blocks have the same shape other than async, there's the additional wrinkle that some meaningful portion of functionality is just plumbing together the building-block functionality, and doesn't meaningfully change between sync and async. rayon has done a good job proving that concept for straight-line/parallel iteration, at least. Especially if all the names are the same, it'd not feel great to have to write the same function twice just to forward the "effect genericism" to the caller.
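As a tiny illustration of that plumbing duplication (all the names here are made up), the two wrappers end up identical apart from the async/.await noise:

// Hypothetical building blocks provided in both flavors.
fn read_config_sync() -> String { String::new() }
async fn read_config_async() -> String { String::new() }

// The "plumbing" layer changes nothing but the effect, yet has to exist
// twice just to forward that choice to the caller.
fn config_len_sync() -> usize {
    read_config_sync().len()
}

async fn config_len_async() -> usize {
    read_config_async().await.len()
}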
It's an interesting concept, but the reason Swift gets away with it is that func removeFile() and func removeFile() async don't have the same name. One is called removeFile(), the other is called await removeFile(), and there's no risk of confusing the two.
I think it's definitely worth exploring what plan we want to have for putting more async functionality into std (silly non-proposal: use (async std)::path::to::item) before committing to stabilizing any more, but I'm fairly sure that effect overloading is the wrong way to go about things.
Philosophical question: if you already have a compatible async runtime available (and ignoring reëntry problems), is it ever (meaningfully) better to use a sync implementation of some functionality rather than async functionality, if you're just immediately blocking on it? Assuming an equivalent quality of implementation.
It seems that if the waiting is purely serial, the only difference would be in the overhead of sync versus async OS interfacing, and in the case of being able to wait concurrently, the sync call would effectively just spin up a miniature async runtime anyway.
I also wonder what would be different if we had gone for the "explicit async, implicit await" version, where calling async code (where you immediately await) looks exactly like calling sync code. While still fond of it as a design, and still agreeing that explicit await is a better design for Rust, I do wonder if that would give more space for implicitly choosing between sync and immediately-awaited-async based on async context.
That does suggest a possible model for async overloading that avoids the biggest issue of confusion, though (while addressing sync-to-async compatibility but not async-to-sync):
Three calling options:

- do_work(): sync operation, warns in async context
- async do_work(): async operation, returns the future
- do_work().await: async operation, awaits the future, requires the await to be immediate
It breaks the "always can extract a binding" rule, but it follows the guideline that makes "explicit async, implicit await" work, of marking the deferral on the call. The prefix doesn't get in the way with async call() the way it does with .await, because you're not going to (typically) be chaining a method call onto the returned future; you're going to be storing it into a binding or passing it as a value argument to some function. And we could always just use do_work().async instead of the prefix operator, since the immediate .await has to "retroactively" change do_work anyway.
I'm definitely not proposing this, but I don't hate it either, and it's something the async WG could think about along with the OP. I'd be more on board if it also worked for async-to-sync evolution, but one direction is always going to be left out, since to be compatible, a bare call has to be whichever version stood alone before there were both options.
I think "equivalent quality of implementation" may not be an assumption we can ever make. Even the simplest I/O reactors like smol are inherently difficult to write and require separate polling implementations for most OSes (and thus are less portable). If having as simple and reliable as possible of code is a primary goal, and handling 10,000's of simultaneous connections is not a requirement, then I think std
's sync IO is a wise choice.
I'm very well aware.
I'm not pulling this out of thin air, if that's what you mean. I've seen this done in crates (which I unfortunately can't find right now): the essence is, again, that the return type is a polymorphic object that implements Future so you can do func().await, but it also implements other traits and methods, which in turn allows you to use it in blocking code too, e.g. func().do_something_sync().
But then so is this proposed feature. It just doesn't… add much value at all.
Sorry, I was just trying to understand what you meant. The only crate I've seen that (used to) do this is rusoto, with FutureExt::sync, and that was just sugar around block_on, which still required tokio to be running.
I think it's definitely worth exploring what plan we want to have for putting more async functionality into std (silly non-proposal: use (async std)::path::to::item) before committing to stabilizing any more, but I'm fairly sure that effect overloading is the wrong way to go about things.
It's worth noting that C# simply has async versions of methods/interfaces: Stream.Read and Stream.ReadAsync, Stream.CopyTo and Stream.CopyToAsync, IEnumerable and IAsyncEnumerable, etc.
I would instead prefer async polymorphism, for example:
?async fn foo() -> i32 {
    bar().await + 42
}
foo is ?async, so it can be called either asynchronously or blocking. Therefore bar must be ?async too. When the function is called in a blocking fashion, .await expressions within the function have no effect.
This is more limiting than overloading, because both versions of the function have to use the same source code. To make this feature more useful, there needs to be a way to detect whether the function is being called as an async or a blocking function:
pub ?async fn foo() -> i32 {
    if is_async!() {
        foo_async_impl().await
    } else {
        foo_sync_impl()
    }
}
The compiler should monomorphise this so that two functions are generated. is_async!() should expand to a compiler intrinsic that is const, so the branch can be optimized away.
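Roughly, the generated pair could look like this in today's Rust (the desugared names and stub bodies are just my illustration, not part of the proposal):

// Stubs standing in for the real sync/async implementations.
fn foo_sync_impl() -> i32 { 42 }
async fn foo_async_impl() -> i32 { 42 }

// The blocking monomorphization: is_async!() is const-folded to false.
pub fn foo_blocking() -> i32 {
    foo_sync_impl()
}

// The async monomorphization: is_async!() is const-folded to true.
pub async fn foo_async() -> i32 {
    foo_async_impl().await
}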
This has the same problems regarding backwards compatibility as async overloading, so the standard library can't make existing functions ?async. It would, however, be useful for crates like reqwest, which have an async API as well as a mostly identical blocking API.
When it's ambiguous which version should be called, a prefix could help:
fn synchronous() {
    // underscore needed because async is a keyword
    let _: impl Future = async_#foo();
}

async fn asynchronous() {
    let _: i32 = sync_#foo();
}
pub ?async fn foo() -> i32 {
    if is_async!() {
        foo_async_impl().await
    } else {
        foo_sync_impl()
    }
}
That looks really awful. Depending on a boolean parameter, branching on it, then writing two different implementations is a common anti-pattern. Usually, it is suggested that the function body be split into two distinct functions instead of a gigantic top-level if-else.
Except for the fact that here the preferred "refactoring" is already the status quo: one implements separate blocking and non-blocking functions. So why start supporting a software engineering practice at the language level when it is generally recognized to be bad?
This has several advantages to two separate functions:
- It guarantees that the async and sync versions have the same parameters and return type
- It makes a nicer API
- It prevents someone from accidentally calling a blocking function in an async function (it's still possible to do, but not by accident, since it requires adding a prefix or a closure).
It's not that gigantic; each branch is a single line of code. Besides, this is not supposed to be common practice. It's needed in low-level operations, but higher-level code can probably use the same function body for sync and async versions without branching on is_async!().
Here's another idea for a syntax that doesn't need is_async!():
?async fn foo() -> i32 {
    bar().await + 42
}

#[sync_equivalent = bar_sync]
async fn bar() -> i32 { ... }

fn bar_sync() -> i32 { ... }