You might, perhaps, but those ZSTs are rarely exposed and never actually written down in valid programs, so I doubt it would be a widespread problem.
What is the difference between a type and an item?
Also, since this annotation would only be permitted in trait definitions, the return type truly is somewhat generic (or at least polymorphic).
They're essentially orthogonal concepts at different layers of the language. I think the shortest direct answer is that type definitions are merely one kind of item. See Items - The Rust Reference
Beyond syntax, regarding the original question of whether the current default Send-ness for `async fn` should change from `?Send` (maybe-send) to `Send`, thus requiring new explicit syntax to override back to `?Send`, IMO:
- There are least-surprise and learnability advantages to all of trait async methods, async functions, and returned `impl Trait` having the same default. It would be a late change to make `async fn` default-`Send`, and a breaking change to make `impl T` default-`Send`.
- I can understand the general concern, when defining a trait, of unintentionally missing bounds like `Send`, but that concern applies to many more cases than just async trait methods. The best we can do is highlight considering `Send`, `Sync`, etc. early in the design of traits. Meanwhile, single-threaded applications don't want default-to-`Send` async trait methods any more than they want default-to-`Send` async functions.
- Ultimately `?Send` (maybe-send) is the better default, for all cases, because:
  - When needed, an explicit override such as `async<F: Send>` or `where async: Send` is easier to understand and teach than an override to `?Send`. That a `Send` override will be more frequently desired and used will just make it more familiar as well.
  - It is what is currently implemented for `async fn` and `impl Future`.
  - It is less committal: if I understood the above comments correctly, future compilers may be able to infer `Send` in many cases, making explicit bounds unnecessary for `async fn` to just work in multi-threaded applications.
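The current behavior the second sub-bullet refers to can be checked on stable Rust today: without any declared bound, the Send-ness of an `async fn`'s returned future is inferred through the opaque `impl Future` type via auto traits. A minimal sketch (`make_string` and `assert_send` are illustrative names, not part of any API):

```rust
// Illustrative helper: fails to compile if `T` is not `Send`.
fn assert_send<T: Send>(_: &T) {}

// A plain async fn whose captured state is entirely `Send`.
async fn make_string() -> String {
    String::from("hi")
}

fn main() {
    let fut = make_string();
    // Nothing is declared, yet the future is still `Send`:
    // auto-trait inference leaks through the opaque return type.
    assert_send(&fut);
    println!("make_string future is Send");
}
```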
I'm not sure what you're referring to here -- I don't believe this to be true.
I was referring to the below comment. If the default was changed to `Send`, then such a "clever" compiler inference wouldn't be meaningful. If my interpretation is overly broad, or incorrect, strike that bullet from the above argument.
I see. I’m not very confident on clever defaulting rules of this kind, I have to say. They will never be perfect and I think they’ll be rather hard to explain to people.
It’s also not really “future compilers may be able […]”, it’s a choice that can be made at the time async-fn-in-traits are implemented (which is admittedly in the future from today), but would be a breaking change to add later.
To what degree would a clever `Send` compiler inference be a "breaking change to add later"? Firstly, I am suggesting that if it's added at all, it should also be added for async functions (non-trait). If the default remains `?Send` in all cases, as I'm arguing, then adding compiler inference rules to make the return type of some percentage of async functions and methods `Send` should not be strictly breaking, should it? `Send` is a subset of `?Send`, right?
In any case, I don't think that is the core of my argument. If the default remains `?Send`, then such inference is a possible but largely independent future ergonomics improvement to consider.
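The subset claim above can be illustrated on stable Rust today: every type accepted by a `T: Send` bound is also accepted by an unbounded (maybe-`Send`) parameter, but not vice versa, so loosening a `Send` requirement never rejects existing callers (`takes_anything` and `takes_send` are illustrative names):

```rust
use std::rc::Rc;

// No bound at all: the generic equivalent of "maybe-Send".
fn takes_anything<T>(_: T) {}

// Explicit bound: accepts strictly fewer types.
fn takes_send<T: Send>(_: T) {}

fn main() {
    takes_anything(Rc::new(1)); // Rc<i32> is !Send, still fine here
    takes_anything(1u32);       // Send types are fine too
    takes_send(1u32);           // fine
    // takes_send(Rc::new(1));  // would not compile: Rc<i32> is !Send
    println!("ok");
}
```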
Taking the example from the OP:

```rust
trait Process {
    async fn process(&self);
}
```
the straightforward expansion of this is

```rust
trait Process {
    type ProcessFuture<'a>: Future<Output = ()> + 'a;
    fn process(&self) -> Self::ProcessFuture<'_>;
}
```
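For reference, a version of this straightforward expansion that compiles on stable Rust today; generic associated types additionally require a `where Self: 'a` bound, and `Worker` is an illustrative implementer, not anything from the thread:

```rust
use std::future::{ready, Future, Ready};

trait Process {
    // Stable GATs require the extra `where Self: 'a` bound.
    type ProcessFuture<'a>: Future<Output = ()> + 'a
    where
        Self: 'a;

    fn process(&self) -> Self::ProcessFuture<'_>;
}

struct Worker;

impl Process for Worker {
    // Note: nothing in the trait forces this future to be `Send`.
    type ProcessFuture<'a> = Ready<()> where Self: 'a;

    fn process(&self) -> Self::ProcessFuture<'_> {
        ready(())
    }
}

fn main() {
    let _fut = Worker.process();
    println!("ok");
}
```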
the “clever compiler” “do the right thing” expansion would be

```rust
trait Process {
    type ProcessFuture<'a>: Future<Output = ()> + (if Self: Sync { Send }) + 'a;
    fn process(&self) -> Self::ProcessFuture<'_>;
}
```
(with some random syntax for conditional associated type trait bounds since they’re not a thing).
If you had previously implemented this trait for a `Sync` type based on the original expansion, and returned a `!Send` future, then you could fail to meet the new trait bound that is added by the compiler.
OK, thanks for explaining, my mistake. I had assumed that the clever compiler inference would apply, not to the trait itself, but to any trait implementation (as well as non-trait async functions). But that probably doesn’t help, even for consuming code that is generic over `Process`, at least while `ProcessFuture` is unnamed.
My argument largely reduces to:
It would be better to have to write `async<F: Send>` or `where async: Send` frequently, than to have to write (and explain) `async<F: ?Send>` rarely (assuming that single-threaded apps are less frequent).
Better to expend some extra typing in the interest of consistency, clarity and learnability.
… and explicitness, which is often touted as a Rust feature.
The original question is strictly about traits; however, in the future it seems to me it may be interesting to extend this to function pointers (currently `fn(String) -> String`) and function traits (the family of `Fn(String) -> String`).
In a similar vein to traits, those represent a family of functions, and therefore `Send` vs non-`Send` cannot be deduced from the implementation.
In this sense, I am wary of any logic that would be specific to traits (such as deduction based on `Self`), as it doesn’t seem to translate well to non-trait situations.
I would argue that both are explicit.
That said, @withoutboats, @cramertj and I had a meeting this morning where we dove into the implications of each choice in more detail. I think @withoutboats is going to write a more detailed summary, but the general gist was (at least from my POV):
- we are feeling good about the current point in the design -- i.e., async fn desugaring to `impl Future` -- because the set of points where end-users will have to care about this seems pretty small:
  - specifically, it's only when you must spawn from a generic fn or (perhaps more likely) create a boxed future (since `dyn` types must specify `Send` explicitly)
  - the former is unlikely, the latter is a bit worrisome but "probably" ok
- but we want a more convenient way to specify the "where the output of my futures is send" bound
  - and something like `outputof(self.foo): Send` doesn't seem like it =)
- we also think it's important to improve the diagnostic output
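The boxed-future case mentioned above can be made concrete; the aliases below mirror the common `BoxFuture`/`LocalBoxFuture` pattern from the ecosystem (written out here with std types only), where `Send` must be named explicitly in the `dyn` type because auto traits are not inferred through trait objects:

```rust
use std::future::Future;
use std::pin::Pin;

// With `dyn Future`, `+ Send` must be written out explicitly
// (or deliberately omitted for a future pinned to one thread).
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;
type LocalBoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + 'a>>;

fn boxed() -> BoxFuture<'static, u32> {
    Box::pin(std::future::ready(42))
}

fn main() {
    let _fut: BoxFuture<'static, u32> = boxed();
    println!("ok");
}
```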
One concern is that shifting to have `async fn` desugar to `impl Future + Send` does carry some downsides:

- It encourages people to write generic adapters that require all the inputs to be `Send`, even if that wouldn't really be necessary.
- It's mildly inconsistent with how other parts of the language work, which tend to infer "send-ness" through auto-traits and use explicit bounds where that's not possible.
I’m not sure how often I want non-`Send` futures, but I have noticed that `async fn` produces futures that are not `Send` when it shouldn’t. It seems to leak the `Send`-ness of all expressions inside of it, even if those never leak. More specifically, the problem comes from references to things which are not `Sync`.
e.g.

```rust
async fn outer_future() {
    let not_sync = NotSync;
    takes_not_sync(&not_sync).await
}

async fn takes_not_sync(not_sync: &NotSync) {}
```
`takes_not_sync` of course should not be `Send`, but there’s no reason `outer_future` shouldn’t be.

Edit: Given that this is even a problem I’ve noticed, I guess the answer is I never want futures that aren’t `Send`.
If you `await` a non-`Send` future, you yourself are not `Send`, because you store that future’s state within your own. While there are a few cases where we infer `!Send` state to be alive across await points when it isn’t, this isn’t one of them.
Can you elaborate on why that’s the case? It seems to me like the use of `!Send` is entirely contained in a way which should be safe, and this statement means that no value which is `!Sync` can ever be used by reference in an async function, which seems like a severely limiting constraint for API design.
If `outer_future` were `Send`, you could poll it on one thread until `takes_not_sync` starts to get polled, then send it to another thread and continue polling it there, leading to potential memory unsafety.
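The unsafety hinges on the standard-library rule that `&T` is `Send` only if `T` is `Sync`; a minimal illustration (`assert_send` is an illustrative helper, not a std item):

```rust
use std::cell::Cell;

// Illustrative helper: fails to compile if `T` is not `Send`.
fn assert_send<T: Send>(_: &T) {}

fn main() {
    let x: u32 = 1;
    assert_send(&&x); // &u32: Send holds, because u32: Sync

    let c = Cell::new(1u32); // Cell<u32> is Send but not Sync
    let r = &c;
    // assert_send(&r); // would not compile: &Cell<u32> is !Send,
    //                  // since Cell<u32> is !Sync
    let _ = r.get();
    println!("ok");
}
```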
The conservative solution seems to be requiring either `Send` or `?Send` to be specified, and making a bare `async fn` an error in traits.
Also note that any non-`Send` future can be turned into a `Send` future given a local executor, by asking that executor to run it (at the cost of some heap allocations), so non-`Send` futures don’t cause an ecosystem split or anything catastrophic like that (although multi-threaded executors need an API to spawn non-`Send` futures that, obviously, runs them only on the current thread).
The statement is specifically about `await`ing a future, not merely referencing it. Since `await` returns control to the executor/scheduler, all intermediate state, including that of the future being `await`ed, must be safely capturable; this means that all limitations of the async function being called are exposed.