Learnability considerations of ".await" syntax

ALT+> is Windows-keyboard specific. The Mac version would typically be OPT+>. ALT on a Windows keyboard is located in roughly the same place, relative to the space-bar, as OPT on a Mac keyboard. So on a Mac keyboard one mentally substitutes OPT (option) for ALT (alternate).

The usual way to get characters such as these that are not directly represented on keyboard keys is to chord keystrokes, such as SHIFT+ALT+? to get ¿, or ALT+< to get . That’s the usual way that [ , ] , { , } , | , # , $ characters are typed on a keyboard that otherwise doesn’t have them.

Programmers chord commands all the time (e.g., ALT+F10). Even American English words sometimes require chording (e.g., naïve – notice the diaeresis over the i, which specifies that the ai is not to be pronounced as a diphthong). It seems perfectly reasonable to me that native-English-speaking programmers should occasionally need to chord a character, just as virtually everyone else in the world is required to do.

Relative to the issue of a new sigil, I’ve already commented on the lack of necessity of introducing such a sigil before stabilizing .await. However, the issue will not go away; .match and pipelining are obvious future candidate uses for such a distinguishing new sigil, as I’ve pointed out in some of those prior posts.

1 Like

I don't see how anyone would ever teach field access that way, any more than they'd teach "function calls are foo(x), except for return(x) or when it's a tuple struct".

But to take a step back here, if someone looks at .await

  • How likely is it that they'll think it's a field?
  • How long will the misunderstanding last?
  • How much harm could it do in the meantime?

As has been said many times, even the most basic syntax highlighting will make it incredibly unlikely that the confusion would happen in the first place. I can even highlight using the wrong language (C# here) and it's still clear:

async fn foo() -> i32 {
    bar().await.qux
}

Even without highlighting, how long can it persist? As soon as you look up the bar function to figure out its return type, you'll see that it doesn't have an await field. So you then know it's something else, you go look that up, and you find it immediately.

But how did someone new even get to that point? Wouldn't the new reader see the async fn and immediately wonder what that means? Certainly the explanation they find of that construct will include an explanation of await.

Suppose none of that happens, and maybe I just remember Task<>.Result from .NET and decide not to worry about things and assume that it's some weird field that gets the value out. I can probably stumble along quite well thinking that -- better than using it in .NET in fact, because its .Result is synchronous. And if I try to use it outside of an async context, I get a compiler error.
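
For instance, a minimal sketch of that last point (bar here is a made-up async fn); the second function is rejected by the compiler:

async fn bar() -> i32 { 42 }

fn not_async() -> i32 {
    // error: `await` is only allowed inside `async` functions and blocks
    bar().await
}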

So overall, I just don't see a realistic problem here. Especially in conjunction with how differently async/await works in Rust compared to other languages (with things not running unless polled, etc.), which means that nobody will be able to use the feature effectively without reading an introduction to it specifically.

10 Likes

(Just a minor note: American English spells this "naive", and "naïve", though not technically incorrect, is unidiomatic or even archaic, and at least my Android spellcheck actually marks "naïve" as wrong and autocorrects to "naive".

I know of no usage of chorded symbols in American English on an American QWERTY keyboard. (Which doesn't even have AltGr, just two Alt keys.) To the point that standard symbol chords are seen as "power user" features at best and "easter eggs" at worst.)

EDIT: I wrote "not correct" instead of "not incorrect"

2 Likes

I consider the unavailability of auto-deref a worse threat to the readability of .await than familiarity issues. A lot of effort has been spent, while polishing the language, to ensure that access to methods and fields of sub-structures is as ergonomic as possible, and current code thus should never, and will never, look like this:

(&**box_ref).method()

With the current concept of .await this may be a bigger problem than is obvious from small examples or from code written by experienced programmers. &mut impl (Future + Unpin) also implements Future, so it is perfectly legal to await a future that is only accessible through another struct passed in by reference, for example a reference to a box as above. Inferred knowledge would therefore suggest that a field-like await follows the usual pattern and allows auto-deref to happen, automatically mapping down to a mutable reference to the inner value. However, this intuition fails completely for anything but the special cases explicitly implemented in the standard library, which make it work via dedicated Future impls rather than via their DerefMut implementations. I think this will cause confusion for newcomers in two ways:

  • Some cases seem to work as would be obvious (e.g. Box). This makes it less likely that one looks them up to realize that they have been special cased.
  • When some case then doesn't, how would one discover this restriction? Not by looking up fields. And it seems backwards and unintuitive that one should have to use syntax that is considered outdated everywhere else in the language for being too verbose, and which is the opposite of discoverable for newbies, since it involves knowledge of both ops::DerefMut and finding the correct impl of Future for references.
  • That is, precisely when something already more complex than usual needs to happen in the code, the special-casing of .await comes along and requires even more complex additions. I strongly suspect this will throw many off-path and make them resort to unnecessary boxing etc. just to get back within the familiar bounds where everything works as if by auto-deref. (A sketch of this contrast follows below.)
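
For illustration, here is a minimal sketch of that contrast; MyCell is a made-up wrapper with a plain DerefMut impl, and std::future::ready stands in for an arbitrary Unpin future:

use std::future::ready;
use std::ops::{Deref, DerefMut};

// A made-up smart-pointer-like wrapper.
struct MyCell<T>(T);

impl<T> Deref for MyCell<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.0 }
}

impl<T> DerefMut for MyCell<T> {
    fn deref_mut(&mut self) -> &mut T { &mut self.0 }
}

async fn demo() -> i32 {
    // Box is special-cased: Box<F> implements Future directly, so this just works.
    let a = Box::new(ready(1)).await;

    // The custom wrapper gets no such treatment: `cell.await` does not compile,
    // even though a method call on `cell` would happily auto-deref to the future.
    let mut cell = MyCell(ready(2));
    let b = (&mut *cell).await; // a manual re-borrow through DerefMut is required

    a + b
}
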
8 Likes

Actually impl Future for &mut F where F: Future exists in the standard library already, same with Box<F> and Pin<P> where P: DerefMut, so auto ref wouldn’t have to happen to await a future. Now custom DerefMut impls are going to be a problem, but I don’t think that they will be common in user-code, which is where this feature will be used.

1 Like

I think you're misunderstanding my point. That is one of the special cases that exist to enable some code to await a future without auto-deref. However, that does not enable all cases where auto-deref would result in a &mut impl (Future + Unpin) that could be awaited. See Are blanket Future impls necessary? · Issue #60645 · rust-lang/rust · GitHub. Only with auto-deref would the impl Future for &mut F where F: Future be enough to cover all cases where field syntax has an equivalent outcome.

E.g. a &mut MutexGuard currently requires (&mut **guard).await, afaik.
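
A minimal sketch of that situation, assuming a std::sync::Mutex holding an Unpin future (with an owned guard a single re-borrow suffices; through a &mut MutexGuard it becomes the double deref above):

use std::future::Future;
use std::sync::Mutex;

async fn await_locked<F>(m: &Mutex<F>) -> F::Output
where
    F: Future + Unpin,
{
    let mut guard = m.lock().unwrap();
    // `guard.await` does not compile: MutexGuard is not a Future,
    // and `.await` performs no auto-deref.
    (&mut *guard).await // works because &mut F implements Future when F: Unpin
}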

2 Likes

Well in that case you could imagine a function on Future called by_ref, which we can use to do the auto-deref and then await.

trait Future {
   // other stuff

   fn by_ref(&mut self) -> &mut Self where Self: Unpin { self }
}

// .. in user code ..

let x = Mutex::new(future);

let mut x = x.lock().unwrap(); // assuming std's Mutex, whose lock returns a LockResult

let x = x.by_ref().await; // the method call auto-derefs to the future, then the &mut is awaited

Which isn’t too bad; this is similar to how Iterator has a by_ref.

1 Like

Maybe, but that would require modification of the Future trait, which is already stable. While we still have the option to avoid having to create workarounds, why not define the syntax and semantics in a way that makes them entirely unnecessary?

For a completely different purpose. There are some methods on Iterator that take self by value; the purpose of by_ref is to avoid that. See below, actually - maybe a neat solution.

2 Likes

I'm pretty sure that adding default methods to an existing trait is not a breaking change. So we could do it, and it wouldn't even be a large change.

Because, as stated in the GitHub issue you posted, the auto-deref logic is complex, and they want to minimize its usage in the compiler.

Yes, but we also don't have MutexGuard<dyn Iterator>: Iterator, so it's not completely off base. In that situation you can't use the Iterator combinators unless you use by_ref or deref it on your own.
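
A minimal sketch of the analogous Iterator situation (a concrete iterator behind std's Mutex; the names are illustrative):

use std::sync::Mutex;

fn sum_locked<I: Iterator<Item = i32>>(m: &Mutex<I>) -> i32 {
    let mut guard = m.lock().unwrap();
    // `guard.sum()` does not compile: MutexGuard is not an Iterator, and `sum`
    // takes `self` by value, which would mean moving out of the guard.
    // `by_ref` works because the method call auto-derefs to the inner iterator,
    // and `&mut I` implements Iterator.
    guard.by_ref().sum()
}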

2 Likes

Right. Then Future::by_ref makes more sense than I initially thought, definitely so for the option of field-based .await syntax. The naming conflict with Iterator::by_ref (Iterator being in the prelude) seems a bit unfortunate, but nothing that bikeshedding can't fix.

2 Likes

My motivation for the ¡await suggestion was:

  • sympathy for postfix but with concern for the cost
  • wanting to signify something macro-ish and special, as opposed to conflating it with field access,
  • with preference over @ (seemingly too meta/declarative in an expression position) or # (comments elsewhere)

The humor was intended given the enormity of the syntax debate threads, and with the full expectation that this late suggestion would not be taken nearly as seriously as a few of you have granted it. I appreciate that!

Given the constraints, it’s really amazing that postfix ? was ever accepted and worked out so well to replace prefix try!.
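
For reference, a minimal sketch of that replacement (the function name and use of the filesystem are just illustrative): prefix try! nested outward, while postfix ? chains left to right.

use std::fs;
use std::io;

fn read_len(path: &str) -> io::Result<usize> {
    // Old, edition-2015 style: Ok(try!(fs::read_to_string(path)).len())
    // Postfix style:
    Ok(fs::read_to_string(path)?.len())
}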

2 Likes

That's a pretty unfair summary of my post. I haven't accused you of anything. I pointed out the situation around the subject and mentioned that your post does not give a concise argument beyond "has anyone considered?". It indeed has no actionable points to address, except waving at Rust's learning curve (which has much less to do with this example). The notion your post gives is indeed fuzzy, opening a debate that is very hard to hold reasonably.

You have not addressed any of my points, except taking an example from natural language, which is famously full of non-strict rules with exceptions, especially English.

Also, our target group aren't children.

This is a serious dismissal of experience. Can you please clarify how you think it is appropriate? I'm fine with talking about a lot of those points, but I'd ask you to change your approach.

I gave you a list of points to address and I appreciate that you also have experience, but please stop turning this into a personal fight.

panic! is clearly a macro and defined as such, expanding to panic and panic_fmt and allowing you to conveniently use the formatting machinery of the compiler.

panic and panic_fmt are also clearly functions, using a language intrinsic to trigger panics.

3 Likes

This Reddit comment raises the same issues; see @Centril's reply there.

I certainly agree with this principle. I doubt you'll find many that don't.

Unfortunately, there are many other principles pulling in different directions here (as discussed in other threads and posts, so I won't rehash them here).

The hardest decisions to make are those where there's a conflict in strongly held values, and not all of those can be satisfied simultaneously.

1 Like

@skade your opening response sets a very specific tone, making my very rare contribution immediately feel foolish. Secondly, you clearly didn't read my post fully, because I never proposed @ as the preferred case, so I'm not going to "answer" your points. I even liked your post, like most others, because the points make sense. All this while you "credential and argument bombed" me, mostly on points that I wasn't trying to make.

As you can see above, I didn't even disagree with you that my post was fuzzy - I'm sorry for that; as a novice contributor it's always intimidating to step forward. I'm asking questions as a background part of the community. Feedback was explicitly solicited by @withoutboats - this is what we're all trying to give to the project. If feedback just ruffles feathers, then don't ask for feedback.

This is a bait-and-switch move that makes it seem like I'm trying to dismiss your experience, which I was not. The point stands: certain natural-language rules trip up even grown-ups, but especially children, and we don't have to fall into the same trap if we can avoid it. And just btw, the very first language I exposed my 10-year-old to was Rust; I don't see why we can't teach kids Rust - but that's a conversation for another day.

There's nothing to address; I don't owe you any answers. I was simply asking, originally, whether or not the Language Team also considered the impact on learnability, after we spent so many cycles on ergonomics and learnability.

You have successfully made this personal. I don't know if this is await fatigue or a general approach to feedback, but the topic can be closed anyway, because I'm convinced that very careful consideration was given, thus answering my original question.

7 Likes

I personally like the idea of implicit awaits in an async function. If a statement must be marked explicitly as either awaiting or falling through, then I vote for a defer keyword.

Most of the time, in my experience, code in an async function is simply littered with awaits of other async functions. With the opposite approach, which requires the asynchronous (fall-through) statement to be explicit, it is instantly clear which calls just fall through, without having to check whether the called function's signature is declared async or not. The blocking behavior is also consistent with the synchronous version, without use of the await keyword.

See Explicit future construction, implicit await and Would implicit `await` really be a bad idea?. The decision the language team and the community agreed on is that while implicit await works well as a system, it doesn’t fit with Rust’s value of local semantic clarity. We want calling async fn -> T and fn -> impl Future<T> to do the same thing, and we want them to do the same thing in a sync and an async context.

An implicit await system doesn’t work for Rust because it breaks local semantic clarity of code: it makes the meaning different depending on context.
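
A minimal sketch of that equivalence (the names are illustrative):

use std::future::Future;

async fn f() -> i32 { 1 }

fn g() -> impl Future<Output = i32> {
    async { 2 }
}

async fn caller() -> i32 {
    // Both calls merely produce a future; nothing runs until it is polled,
    // and both are awaited the same way in an async context.
    f().await + g().await
}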

2 Likes

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.