(Note though that, given JS code's run-to-completion semantics, the situations are not entirely comparable.)
This is an interesting idea for readability, though I think it would get frustrating to type.
It is not, unfortunately, and it is quite a complicated code base and system, but I'll try to present a simplified but relatively faithful reproduction.
It looked something like this:
async fn execute(ctx: Context, input: InputValue) -> Result<Value, Error> {
let input_a_and_b = ctx.get(input.x).build()?;
// Step A.
let future_a = async_action1(input_a_and_b.clone());
let input_b = ctx
.get(input.y)
.build()
.await?;
let (flag, value_b) = match input_b {
Enum::A(a) => {
let modified_a = a
.map(|x| x.modify())
.unwrap_or(compute_other_value());
let (flag, input_3) = async_action2(&ctx, modified_a).await?;
// Step B.
let output_b = async_action3(input_a_and_b, input_3, async_action2(..).await?)
.await?
.method()
.await;
(flag, output_b)
},
Enum::B => {
(false, Ok(None))
},
};
let future_c = async_action4(flag);
// Step C.
let output = futures_preview::join!(future_a, future_c);
match output {
// Nothing relevant for this example.
// ...
}
}
The context for this code is that it fetches some data, schedules processing in a few systems, and waits for some results. It's complicated by the fact that some part of the scheduling happens in bridged C++ code.
Step A constructs a future which, during execution, at some point increments a counter. Note that due to Rust future semantics, future_a does not start executing until Step C.
Step B first fetches some flag which is needed for future_c, and then calls async_action3. The last await at Step B is the bug.
Step C joins on the two futures.
The bug is a subtle ordering issue which resulted in the execute method never finishing. async_action3 runs some foreign code that waits until a counter is incremented, which happens during execution of future_a (note they share some common input). It then schedules some processing and returns an acknowledgement with an id.
Correct code would, instead of awaiting at Step B, have either done join!(a, b, c), or first done join!(a, c) and then awaited the future from Step B.
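For concreteness, here's a minimal sketch of what the corrected ordering could look like, reusing the simplified names from the snippet above (the exact argument lists are placeholders, not the real signatures):

// Build the Step B future, but do not await it on its own.
let future_a = async_action1(input_a_and_b.clone());
let future_b = async_action3(input_a_and_b, input_3, value_2);
let future_c = async_action4(flag);
// Variant 1: join all three, so future_a gets polled and can increment
// the counter that async_action3's foreign code is waiting on.
let (output_a, output_b, output_c) = futures_preview::join!(future_a, future_b, future_c);
// Variant 2: join A and C first, then await B, which can now complete.
// let (output_a, output_c) = futures_preview::join!(future_a, future_c);
// let output_b = future_b.await;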
This also wouldn't have happened in other async/await languages, since their futures start executing immediately; but this is unrelated and just something you have to learn when adopting Rust.
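(As a self-contained toy illustration of Rust's lazy semantics, not from the real code base: merely constructing a future runs nothing; the body only executes once something polls it. This sketch assumes the futures crate's block_on executor.)

use futures::executor::block_on; // any executor works; block_on is just the simplest

async fn increment_counter() -> u32 {
    println!("incrementing"); // only runs once the future is polled
    1
}

fn main() {
    let fut = increment_counter(); // nothing printed yet: the future is inert
    println!("future constructed");
    let n = block_on(fut); // "incrementing" prints only here
    println!("got {}", n);
}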
This took me ... a long time to find. (The real code also has quite a bit more going on.)
Now there are all kinds of ways this could have been prevented, like abstracting the interdependent code into a separate function.
It could also be argued that this bug could easily happen in any case, that the syntax isn't really a differentiator here, and that .await vs @await, |> await, or prefix await wouldn't have made a difference. I could understand that argument.
I guess for me it's partly the fact that appending .await is just so easy to do in normal code flow that it doesn't provide a mental breakpoint.
With prefix await, you either have to know that you want to await when starting to write an expression, or you have to recognize the need for it, backtrack to the start, and insert the await. Personally this definitely causes me to re-evaluate the code and reconsider the await.
Some other construct like postfix pipelining could also provide a similar benefit if it's more associated with control flow. @await is really not much harder to type than .await. It is a lot noisier though when reading it again! It is also clearly a distinct concept from field access.
(I probably read over the function above dozens of times before figuring out the issue.)
This ties into your other question.
I’d like to drill a bit into why it is important for it to stand out in git diffs.
Suspending can have a big impact on how code executes, especially so in Rust because futures need to be actively polled to make progress, which makes ordering your suspension points correctly much more important than in, say, JS or Python.
The only way to draw your attention to a .await is syntax highlighting. It's really quite easy to skip over it in longer call chains, or in nested expressions (like the inline .await in my Step B).
To me, relying on syntax highlighting to recognize control flow is definitely a detriment. Plenty of contexts have no syntax highlighting, like some code review systems, git diff, or chat clients. It's also a language design question: should control flow be easily recognizable?
An interesting comparison here is prefix await, because in theory the same could apply there. I think this is improved by two factors:
- await at the beginning of an expression is usually quite noticeable, since it's the first thing you read and it immediately frames the expression in an async context
- In practice, prefix await is rarely used in nested expressions (in my experience). If I had to speculate, I'd say this is because
  - nested awaits get awkward very quickly due to the parentheses required
  - they can obfuscate what's going on and so are naturally avoided.
As mentioned, this is partly improved by .await? likely being a very common pattern. I'd venture that .await? could be quite a bit more common than .await. And, to me, .await? is much more attention-drawing due to the extra sigil, and probably because you always watch out for ? anyway when reading Rust.
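A toy illustration of the two shapes side by side (hypothetical helper functions, not from any real code):

// Hypothetical async helpers, for illustration only.
async fn fetch() -> Result<String, std::io::Error> { Ok(String::from("data")) }
async fn process(s: String) -> String { s }

async fn run() -> Result<String, std::io::Error> {
    // A plain .await in the middle of the flow is easy to read past...
    let raw = fetch().await;
    // ...while .await? both unwraps the Result and carries the ? sigil
    // that Rust readers already scan for.
    let out = process(raw?).await;
    Ok(out)
}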
In conclusion, I appreciate how difficult this decision is, and I'm very happy to not be on the lang team.
Somewhere I saw someone say learnability is not a deciding factor because there hasn’t been enough research into programming language learnability.
While that may be the case, I’d point out that there are general design principles for learnability, one of which is having a consistent set of rules (see the book “The Design of Everyday Things”). This allows the learner to only learn the rules rather than memorize every single case.
So I’m a bit torn right now. On the one hand, “.” has been established to mean “operates on that instance”, which is a valid meaning for ‘await’. And since the developer has to know the name of await in order to use it, you could argue it’s not increasing the cognitive complexity. And I think it does ‘flow’ better to have it postfix since you’re focusing on the return type after the expression. And await modifies the usage of the result rather than the input, so it provides for a clean left to right flow.
What is confusing for readability is the lack of any other notation on await - there’s no signaling that it’s special short of syntax highlighting. The rule that .name is a field now has a caveat that it may actually go and invoke some code that takes an arbitrary length of time. But in this case the exception to the rule would be ‘await’.
This also reminds me of an embedded programmer complaining about C++ vs C because of cases where C++ can invisibly invoke functions (e.g. destructors or operator overloading), making it harder to read. This ideology of explicitness was actually why I thought Rust didn’t have properties.
I suppose the deciding question I’d ask is if await truly is a one-time thing and the only special “instance keyword” or “member keyword” or if there are going to be other postfix ‘keywords’, or ways to add postfix keywords (eg procedural macros). In this case it would make more sense to have a special operator since this would make the rule of postfix keyword style being the important thing, with await being one of many possible instantiations.
And I know it’s easy to say “Yeah, sure, await will be the only postfix keyword we ever need to support”, but a lot of times that’s how technical debt begins.
Thank you for sharing your experience!
I'd just like to mention that in practice prefix await would almost always be part of a nested expression, because you would have to do (await foo())?
Taking a code snippet from your example:
let input_b = (await ctx
.get(input.y)
.build())?;
This is a huge difference from other languages like JS or C#, which don't require the parens (because they use implicit exceptions).
In addition, I see multiple areas in your code example which use .await within expressions (including method chaining). This is how it would look with prefix await:
let output_b = await
    (await async_action3(input_a_and_b, input_3, (await async_action2(..))?))?
.method();
This does not appear to be an improvement to me (quite the opposite).
I find it incredibly hard to read, and to figure out what is being awaited (I have to mentally jump back and forth from the beginning to the end, and also take into account the parens).
In contrast, your code example is extremely clear and easily readable: it is obvious which expression is being awaited, and there is no jumping back and forth. The control flow is linear, so it can be read from left-to-right (or top-to-bottom).
This is already the case due to Deref. Await doesn't actually change this.
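For instance (a small self-contained example, not related to the code above), field access through a smart pointer already runs arbitrary user code via Deref::deref:

use std::ops::Deref;

struct Inner { name: &'static str }
struct Wrapper { inner: Inner }

impl Deref for Wrapper {
    type Target = Inner;
    fn deref(&self) -> &Inner {
        println!("deref called"); // user code runs behind what looks like plain field access
        &self.inner
    }
}

fn main() {
    let w = Wrapper { inner: Inner { name: "hi" } };
    // Auto-deref kicks in because Wrapper has no field `name`.
    println!("{}", w.name);
}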
I already said many months ago on Reddit that postfix await is a much better fit for Rust, due to the ? issue, and I still hold that opinion.
I only used some nesting and chaining because postfix syntax makes this much easier.
My arguments for allowing prefix await were always framed with the mindset that a postfix variant would be actively pursued immediately, but introduced in a consistent way that covers more use cases.
I’m very much pro postfix, but in a more generalized form.
(ps: with prefix, you just tend to structure your code differently, and introduce more locals to avoid nesting.)
With JS or C#, I would tend to agree. But even very simple non-nested examples with Rust lead to excessive local variables. Let's look at this example again:
let input_b = (await ctx
.get(input.y)
.build())?;
This is not a nested expression, and it isn't a chained method either. It's the simplest usage of await: awaiting a single top-level expression.
The only way to avoid the ugly and confusing parens is to introduce two variables every time you await:
let input_b = await ctx
.get(input.y)
.build();
let input_b = input_b?;
That seems quite excessive to me. Since prefix await isn't even viable for the simplest examples, I don't think Rust should have it at all.
(I do agree with you that a more general postfix pipeline syntax would be nice though)
As a follow-up to @Centril’s comment, I have a PR out to tackle these suggestions that can be automatically applied by rustfix.
Thanks so much for working on this! You are the best.
A postfix await keyword with an alternative sigil (other than the . dot) seems to have much more ongoing interest and support than is represented in @withoutboats' blog post or @nikomatsakis' summary of these proceedings.
From the prior Dropbox paper on postfix syntax:
Unusual other syntaxes: Use a more unusual syntax which no other construction in Rust uses, such as future!await.
If that was indeed the best of the "unusual syntax" considered, then it's not surprising it was subsequently dropped. Repurposing ! in this way would seem to have all sorts of parsing problems, for human and compiler alike. What I hadn't seen until recently was that this syntax debate has been ongoing in various corners for well over a year, including:
RFC 2442: Simple postfix macros, comment by @withoutboats:
Method syntax today is a type dispatched resolution system and macros are not today type dispatched. This has always been the reason we have not supported method syntax macros. This RFC proposes to resolve this issue by giving up the connection between method syntax and type dispatched resolution. I think that is a significant loss, and I am not convinced that the usefulness of postfix macros justifies that change to the language.
[...]
Finally, I don't think await!() is a strong motivation for this RFC. The async/await RFC has await!() as a compiler built-in to avoid deciding on the final syntax, but await expressions are not macros and can have whatever syntax we like.
How is it that method call syntax is sacred, but field access syntax is not so sacred?
Postfix is great, and yes, we can keep the familiar await keyword. Just by changing the leading sigil away from the . dot, we can avoid confusion with field access and gain clarity that this is a very special feature.
future¡await
Suggested Rust-specific nickname for U+00A1 ¡: postbang.
Rust source code is UTF-8, and going beyond ASCII has the unique advantage of a sigil character with zero prior or current use in the language. RFC 2457: Allow non-ASCII identifiers has been accepted, though it is not yet implemented. Since ¡ is punctuation, it also isn't available as a regular identifier character under this RFC. Postbang ¡ is as easy to type for many world users as ~ or ^. It's clearly marked on even the US-International keyboard layout.
Since ¡await would initially only appear in async fn's and blocks, ¡ postbang might only ever be used for this purpose. However, it could also later be expanded as a generic named-postfix operator sigil (implemented by macro or compiler built-in), or even for universal pipelining.
For example, a postfix ¡try could have been done this way, if it was deemed essential to keep the "try" word in place. (But don't get me wrong, I like how postfix ? turned out.)
✓ avoids any possible confusion with field access or function call
✓ has no current language use, including with RFC 2457
A more substantial Fuchsia-inspired example:
future@await
In the context of an async fn, where the await points just need to be annotated for the compiler's state machine surgery, @await also rings true for me.
✓ avoids any possible confusion with field access or function call
✓ is a simple shift-accessible character on many keyboards, including US-International
✓ current use is limited to the pattern binding operator ident @ pat
A more substantial Fuchsia-inspired example (by @magnet):
Please do not forget that not everyone uses a US-International keyboard; e.g. on my laptop with the default setup (it has Russian and English keyboard layouts), the standard hotkeys for ¡ do not work. Initially I thought that ¡ was a joke, but since people seriously do consider it, I must warn everyone that adopting it quite probably will be a huge disaster for many Rust users from non-English and non-Spanish countries. So pretty-pretty please, let's stick to ASCII.
I use compose-!-! (US keyboard, Gnome) for postbang myself. Do none of the Wikipedia (first link you quoted) suggestions work for you? Otherwise, would you accept the @await alternative as being a strict improvement over the current .await?
Unicode code point entry works for me (which, to put it mildly, is really unergonomic to use while programming), but none of the others do. And even if they did work, the mere need to explain those hotkeys should disqualify the ¡ sigil. As for @await, I guess being the author of the universal pipelining post should be self-explanatory.
¡So did I! But then I started liking it.
What I was trying to ask is: would you accept a not-initially "universal" @await syntax, as a strict improvement over the current .await not-field-access syntax? Or is it universal-only for you?
Save one character and make a terrible pun with @wait, and you can count me in! Further, the extension to a future single-sigil @ once Stroustrup’s rule has taken effect is obvious!
(Not meant as a serious suggestion.)
@wait has a good (IMO, keep Rust weird) cute/weirdness factor, but I’ve been assuming that using the “await” 2018 reserved keyword had more sanctity than field access syntax, at least.
Aside: @wait is definitely cute.
I don't read that comment as a worry about the proposed syntax (rather I interpret the opposite, as treating the syntax as reasonable) but about the semantics of how to map an ident to the definition to apply. Given that keywords cannot be user-defined and thus there's no resolution to do, I don't think .await violates the sanctity of type-oriented dispatch after a dot.
While I follow your logic, I respectfully disagree that .await would be anything but an exception to the normal syntax for field access. Therefore any of ¡await, @await, @wait, #await, »await, etc. have an advantage.
Just to further explore this point, it does seem that .await being so easy to use sets an unfortunate default for writing asynchronous code, namely serializing execution within your code (obviously, calling code can benefit from asynchrony by setting up multiple asynchronous tasks to await in parallel). If you just use async/await, you aren't leveraging asynchrony in your own code, you're only propagating it.
I don't think the solution is to make propagating asynchrony more difficult, though. Maybe there's some way leveraging asynchrony could be made just as easy to reach for, or at least nearly?
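To make that contrast concrete, here's a toy sketch (hypothetical helpers; join! from the futures crate, spelled futures_preview elsewhere in this thread):

// Hypothetical, independent async operations, for illustration only.
async fn load_user() -> u32 { 1 }
async fn load_posts() -> u32 { 2 }

async fn serialized() -> u32 {
    // Reaching for .await twice serializes the work:
    // load_posts doesn't even start until load_user has finished.
    let user = load_user().await;
    let posts = load_posts().await;
    user + posts
}

async fn concurrent() -> u32 {
    // Joining the futures lets both make progress together.
    let (user, posts) = futures::join!(load_user(), load_posts());
    user + posts
}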
Strawman: deferred await
One idea would be to allow you to defer awaiting a future so you can set up several asynchronous tasks that can be awaited in parallel. Strawman:
let x = async { qux(async_foo().defer?.baz(), async_bar().defer) };
Basically, this translates to
let x = async {
let a1 = async_foo().map(|v| Ok(v?.baz()));
let a2 = async_bar();
let (b1, b2) = futures::future::join(a1, a2).await;
qux(b1?, b2)
};
(Throw in a try { } block if needed to provide a target for ? if async { } itself isn't one.)
Of course, defer is used with a somewhat different meaning in other languages; I just haven't come up with a better name. I also deliberately limited this to one expression, because it seems like things would start to get weird if you, say, used a variable bound to the result of a deferred computation several statements later without any sort of annotation indicating that defer has poisoned that later statement. If you have to wrap the result of the expression in a future, then using it would force you to either use defer again or use await.