The barrier here is very low, as in "You know await thing from JavaScript? The syntax in Rust is thing.await". Done. People are very capable of doing this transfer.
Details on whys and hows come later.
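For concreteness, a minimal sketch of that transfer; the function and type names below are made up, and actually running it would still need an executor:

struct User { id: u32 }

// Hypothetical async fn standing in for some I/O-bound work.
async fn fetch_user(id: u32) -> User {
    User { id }
}

async fn handler() {
    // JavaScript: const user = await fetchUser(1);
    // Rust: the keyword goes after the expression, joined by a dot.
    let user = fetch_user(1).await;
    println!("loaded user {}", user.id);
}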
"You know await thing from JavaScript? The syntax in Rust is await thing or thing.await" will immediately lead to the question "but why?", throwing you down the whole rabbit hole of trying to follow that decision.
Not that this is any different from deprecated syntax, where the teaching story is "use ?" and, if people see try!, the follow-up is "that's deprecated syntax for ?; don't bother with it in modern codebases".
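A minimal sketch of that teaching story; read_config and its path handling are made up, and try! is left in a comment since it's deprecated:

use std::{fs, io};

// Hypothetical helper, just for illustration.
fn read_config(path: &str) -> Result<String, io::Error> {
    // Old, deprecated macro style:
    // let text = try!(fs::read_to_string(path));

    // Modern style: `?` does the same early return on Err.
    let text = fs::read_to_string(path)?;
    Ok(text)
}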
This is already planned by our 1-man UX team @ekuber,
I don't think this is a good idea. I want to cement .await as the syntax for a good while before considering supporting anything else. I want the debates about style to end (and not restart them by supporting await fut so early) so that we can get productive and focus community attention on building great libraries.
The rustfixable errors as planned by @ekuber will suffice.
A lot of conversation has happened since my previous comment, so I would like to add a clarification / follow-up. I saw some people quote what I said as an argument in favor of the dot-postfix. I prefer not having the dot-postfix. I like postfix, just not with the dot syntax. I think the postfix macro would work, or having the space-postfix. I really have one primary concern with the dot-postfix. If someone can address that concern, my reservations will vanish.
I am concerned about a developer who joins a company using Rust. This developer has to spin up on Rust quickly. They know the basics, but they don’t know everything. They get put on a PR to start contributing, or they need to read through the codebase so that they can understand it and start contributing. They see x.await. Because it looks like a field access, they may assume that it is. This difference between expectations and reality can be costly.
In my experience, this is how most people learn to code. They do it iteratively. They do it through exposure. They don’t sit and read through the entire manual and then start. They will pick it up by reading through other people’s code at work and working on their own side projects. If we have gotchas and snowflake behaviors, we will drastically slow the time it takes for someone to feel proficient and comfortable contributing. And if it takes longer for someone to spin up on the language, then companies won’t adopt the language because they can’t afford that extra time.
That type of thing really turned me off about Scala. In general, I like a lot of it, but the language has so many (what feel like) snowflake syntax behaviors that some code bases are illegible even to an experienced Scala developer. Sure, you could say that this syntax is not a snowflake behavior because it’s a keyword, so <dot><keyword> is fundamentally different and logically consistent. But that’s really just being pedantic and begging the question. That may be true if you already know about it. That’s not the important part. If you already know it, then you already know it and you won’t be confused by it. If you don’t know it, then reading through someone’s code will either lead to you making an erroneous conclusion on the behavior of the code or confuse you further and hinder your ability to contribute. If you don’t already know that Rust distinguishes between those two behaviorally, then it looks like snowflake behavior.
What exacerbates this problem is that it feels like Rust goes through a lot of effort to be consistent elsewhere and avoid the appearance of snowflake/confusing behavior. When first coming to Rust from Scala, I was annoyed that you could only implement a trait for a type either in the crate that defined the trait or in the crate that defined the type. I thought that was annoying because it differed from Scala, where you can implement something wherever you want. Then I consider the amount of time that I have spent in Scala trying to track down which implicit type class happened to be in scope from all of the import x._ calls. I have probably wasted hundreds of hours on that problem. It’s technically “consistent,” but it doesn’t appear or feel consistent because the behavior is so obtuse and non-obvious to someone reading the code who does not already know the code. Scala was really good at allowing for code that was clear and obvious to the person who wrote it, but not to anyone else or even to the author in two months’ time. I’m concerned that this will nudge Rust in the same direction.
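For readers who haven't run into that coherence rule yet, here is a rough sketch of what I mean; the Describe trait and Blob type are made up:

// Allowed: my trait, someone else's type.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for Vec<u8> {
    fn describe(&self) -> String {
        format!("{} bytes", self.len())
    }
}

// Allowed: someone else's trait, my type.
struct Blob(Vec<u8>);

impl std::fmt::Display for Blob {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "blob of {} bytes", self.0.len())
    }
}

// Not allowed: someone else's trait for someone else's type,
// e.g. `impl std::fmt::Display for Vec<u8>` outside of std.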
But Scala is this way because people wanted to save on a few characters here and there to make it more “elegant.”
IDE syntax highlighting is not the answer, because the language exists outside of any particular IDE, and I don't think it's safe to make assumptions about the IDEs that people use any more than it's safe to make assumptions about the resolution of their monitor, the modernity of their browser, etc. In the Scala example above, it was assumed that there wouldn't be a problem because you can just rely on type hints from your IDE. It is dangerous to assume the context of someone else's work environment, and I think it is best not to tie the language to any particular work environment; punting the problem to the UI is contrary to every usability philosophy I have ever heard.
And sure, you can say that it won’t be a problem because it’s a keyword so it could never be a field access, but that again assumes that we’re talking about the person writing the code. If the person is writing .await then they already know .await, and they’re not the audience we should care about. We need to care about the audience that encounters await and doesn’t already know it. Will it be obvious to them without having read the instruction manual? Saying that someone won’t cut their fingers off on a table saw because the instruction manual says “don’t cut your fingers off with this blade as it is sharp” feels very naive. Similarly, Tesla says that you shouldn’t take your eyes off the road when using autopilot, but how many people do because it’s available and easy? Yes, the saw has warnings. But somehow people severely injure themselves every single day.
Returning to Scala, it introduced flexibility much like this at the request of expert users who wanted their stuff to be more elegant, less verbose, etc. It introduced so much power that it’s incredibly easy to write code that only you and your friends can read. You can bastardize it completely if you so choose. The common argument I’ve seen used is, “Well that’s their problem. My team just won’t write code that way.” But this just kicks the problem down the road. It accumulates tech debt. Any code written by people who were implicitly told “the language will let you do this, but it’s not my problem if you do” will eventually end up maintained by someone else who wouldn’t have written it that way. Obviously, you can’t completely avoid that problem because users will be users, but do you want to make it easy on them to write messy code?
From a theoretical perspective, you can claim something like a postfix macro is “less logically consistent” because this behavior can’t be implemented as a macro and so it’s lying to the user, whereas <dot><keyword> is a language feature distinct from <dot><identifier> and thus perfectly consistent. From a strictly mathematical perspective, you’re not wrong. But being technically correct does not make it usable. What matters is what you implicitly signal to the user. Case in point, the Google G logo looks like a circle, but it is not a circle. It’s mathematically imperfect because mathematical perfection looks bad. The more important thing is that <dot><keyword> is easily mistaken for <dot><identifier> and that will cause issues. Any solution will cause issues, yes, but I believe that this will cause far more misunderstandings than other alternatives. For someone reading the code, you need to make sure your logically pure syntax does not lead to practical mistakes.
I don't think that you can learn Rust quickly, so I'm a bit wary of this argument. I also think that the vast majority of people will learn Rust from the Book, and we can prominently show the await syntax there.
Also if you are reading code to learn from it you will likely have syntax highlighting, so you will see that await is different from a normal field.
Moreover, await can't be a field because it is a keyword. This means that if someone tries to use it outside of an async context, they will be given an error telling them what await is; the same goes if they don't use it on a future.
Thinking of await as a field access, while wrong, will still lead them to write ok code (read: not technically incorrect, but may be inefficient). So I don't see why it is that bad.
Overall, I think good error messages will help sort this out.
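For example, here is roughly the safety net I mean; the exact compiler wording may differ, and the erroneous lines are commented out so the sketch compiles:

async fn compute() -> u32 {
    42
}

fn not_async() {
    // ERROR: `.await` is only allowed inside `async` functions and blocks,
    // so treating it as a plain field access fails to compile here.
    // let x = compute().await;
}

async fn also_async() {
    // ERROR: only futures can be awaited, so this is rejected too:
    // let y = 1u32.await;

    let x = compute().await; // OK: a future, awaited in an async context
    let _ = x;
}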
As an observer of the discussions who abstained from posting in order to avoid adding more noise (and because I've only used await a couple of times in other languages, I don't feel like an expert in the trade-offs involved in any capacity), I wanted to thank the language team for achieving consensus on a final syntax, allowing the async/await feature to move forward.
I appreciate that the chosen syntax is motivated by clearly expressed rationale, and that alternative syntaxes have been explicitly considered, with the reasons for not choosing them addressed in the team's communications. I also appreciate having visibility into the schedule leading to the final decision and stabilization of the feature.
That being said, the exact syntax is but one aspect of the problem. What I really look forward to is what having an await syntax will enable, leveraging zero-overhead futures (powered by concepts such as Pinned types) in a hopefully vibrant ecosystem of asynchronous libraries. I also look forward to other coming features, such as const generics (I can’t wait for that one!!) or specialization.
So, I'm sorry for adding yet another message to this already very long thread, but I guess I wanted to chime in with something positive here.
I'm not sure I can picture a specific way in which this type of confusion would potentially cause the developer to introduce a bug, or even cause a significant delay in writing new working code based on the old code. Do you have a more concrete explanation of how you think this misunderstanding would be "costly"?
In particular, I think that in order for there to be a concrete problem caused by this misunderstanding of this syntax, the developer in question would need to somehow (accidentally) avoid encountering any error messages indicating that .await isn't simply a field access. This seems kind of unlikely to me, if the developer is actually misusing the keyword as a field access in a way that introduces a bug.
I have not heard the word "snowflake" applied to language design before. What do you mean by this? If you just mean that the syntax is somehow fragile, what kind of "breakage" are you envisioning?
I absolutely agree with this. (We've also mentioned screen readers in this thread, which of course can't have syntax highlighting.)
I don't have any specific examples handy. I would consider any time that an application behaves differently than expected to be a cost, though: for example, a syntactic element that seems to imply no side effects actually having side effects. Scala allows def foo = {} and def foo() = {}, which can be invoked as x.foo and x.foo() respectively. To its credit, the recommendation is that the former only be used for things that have no side effects and are pure, effectively field accesses (even if it does some calculation from actual fields under the hood, it behaves like a field access). The parentheses are used to imply side effects, as that has become a common convention. Both are still functions, though. However, even though that is considered best practice, it's not actually enforced, and that can lead to sloppy code.
object Test {
  var x: String = "testing"

  // Looks like a pure accessor, but it mutates `x`.
  def foo: String = { x = "no way"; x }

  def main(args: Array[String]): Unit = {
    foo          // reads like a field access, yet has a side effect
    println(x)
  }
}
This will print "no way."
Sorry, I was using "snowflake" the way it is used in server administration. "Snowflake servers" refers to how, no matter how hard you try to keep all your servers in sync on kernel versions, library versions, and the daemons running on them, they inevitably end up with some minor difference, making them snowflakes because no two are alike. Deployment technologies like Salt, Docker, Kubernetes, etc. are intended to skirt the problem by containerizing everything or declaratively specifying what is present on the servers.
Unless we introduce a fun new screen reader with tones of voice so that keywords are shouted, literals are mumbled, and identifiers are spoken like a depressed turtle
What you are saying is true in my experience, but I still find the notion that people don't read all the manuals and documentation before starting to code in a new language or framework completely counter-productive. I've never understood why anyone does it that way. It just seems like such a lost opportunity.
After all the 'thinking out loud' that has been done on the thread I started yesterday here, I have come away convinced that changing field access deserves its own RFC, and that sneaking this big a change into the language design as part of the async/await RFC is something that a lot of people will only accept if due process is followed, even if we end up right back at @withoutboats's last proposal.
In lieu of this, a different sigil from . should be chosen; I think "line noise" is a weak argument for giving this RFC such a broad scope.
If careful consideration can be demonstrated I think a lot of this can be put to bed quite easily.
Can you elaborate on how an RFC would help, here? It'd just be a document to read -- like the paper and blog post -- another GitHub thread -- we've already had one -- with an FCP for people to comment -- like they're doing now -- and then the lang team would make a decision 10 days later -- as is about to happen on the 23rd.
Most of the discussion of syntax alternatives here already happened in the RFC, just without a final decision being made.
People can tick quite differently. Some need practical problems at hand to be able to learn. Without those problems they can have quite a hard time learning, and finding meaning and motivation in the learning. Telling them to just sit down and learn most likely won't work that well, and they might suffer quite a bit trying it. Just because something works very well for you doesn't mean it has to be the case for everyone.
I'm not sure what more can be done to demonstrate careful consideration. An RFC at this point seems like purely a formality, and to me does nothing to signal careful consideration.
There are many threads about the syntax, and the lang team has been reading them. That's pretty impressive considering the sheer volume involved and the low signal-to-noise ratio; it has mostly been rehashing of the same arguments. The way this has been conducted screams "carefully considered" to me: multiple blog posts, long decision timeframes, and (what I see as) transparency.
People don't read the entire language manual up front before using a language for the same reason that we teach kids Newtonian physics first instead of just handing them a quantum mechanics textbook, or the same reason that we don't teach programming by first handing people the spec sheet for an x86 processor. People have to bootstrap their knowledge by building small mental models, building on them iteratively, and developing an intuition around each successive piece of the puzzle. That's why we have hello-world projects. We need a learning feedback loop. There's a fair amount of psychological research on the matter, but it more or less comes down to a loop of developing an internal model of the world, making a prediction about how the world will respond to some input, and then seeing how accurate the prediction was and adjusting the mental model accordingly.
That's why anything that knowingly adds a temporary breakage in the mental model developed up to that point represents a jarring user experience. I think that <dot><keyword> having side effects, as in .await, while technically consistent within the domain of the language, is very likely to break the mental model that a programmer has developed over the course of learning the language.
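To make the concern concrete, a small hypothetical example (the names are made up):

// Hypothetical async fn standing in for some I/O.
async fn load_config() -> String {
    String::from("config")
}

async fn startup() {
    // Reads like a field access on the value returned by load_config(),
    // but it actually suspends `startup` until the future completes,
    // and the executor may run other tasks in the meantime.
    let cfg = load_config().await;
    println!("{}", cfg);
}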
The Design of Everyday Things deals with this usability problem with regard to tools and machines. A programming language is really just a tool to build things, so I'm trying to make sure we consider the usability of the tool and make sure the tool is intuitive for people who are going to use it. If we can avoid them having to read an error message (even if they are able to read it once and never make the same mistake again), then we should do so.
Semantic arguments about how much documentation people should be expected to read are off-topic for this discussion about await, and dismissing an entire field of study is off-topic and inappropriate for the entire forum. Please try to bring it back on track.
Hi, newcomer and non-expert here, but I would like to share my 2 cents. I have come to prefer the dot operator as the sigil for postfix await syntax. What comes after the dot, I'm not too sure, since I don't know enough yet, but "dot await keyword" seems like a reasonable choice. What follows are my personal musings, but I'm hoping they will help "connect the dots" around some of the topics I saw on the dot operator.
At first, I liked the idea of "universal pipelining". But as I read more on it, I noticed several mentions of how the dot operator already serves, or can serve, this purpose in Rust (1, 2, and most interestingly 3). This helped me gain a more nuanced understanding of the dot operator.
In fact, when I take a more holistic view, I realize that the dot operator is already doing some magical things, not just plain old field access. I like how @jcsoo described it, and @scottmcm's notion of "namespaces". My interpretation is that . lets you do:
struct field access, when given a named field identifier;
universal function call, or typically "method call", but with the nuance that the method is not a member of the struct but rather of the type, and so there's magic to perform UFCS (see the sketch after this list).
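To make those two cases concrete, here is a small sketch with made-up types:

struct Point {
    x: f64,
    y: f64,
}

impl Point {
    fn norm(&self) -> f64 {
        (self.x * self.x + self.y * self.y).sqrt()
    }
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };

    // 1. Field access: `x` is a named field stored in the struct.
    let px = p.x;

    // 2. "Method call": `norm` is not stored in the value; the dot resolves
    //    it on the type, equivalent to the fully qualified call below.
    let a = p.norm();
    let b = Point::norm(&p);

    assert_eq!(a, b);
    assert_eq!(px, 3.0);
}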
But it occurred to me that there's a third thing, and that is tuple indexing. I haven't seen tuple indexing mentioned in the various threads yet, so I think this is "new information". Essentially, when I read of people mentioning "conflict with field access", I subconsciously think of struct field access. But when I put tuple indexing into the picture, things start to click, and dot-await no longer seems weird but rather just fits.
Consider foo.12 for a minute. 12 could not be a member of foo, since digits alone are not allowed as identifiers. So rather than field access, this is doing some magic under the hood to index into the tuple. When I was first learning tuples in Rust, my first reaction to this was "What on earth?!". But I learned it, used it, and it quickly became "This is so cool!".
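Just as a reminder of how ordinary that magic has become, a trivial made-up example:

fn main() {
    let pair = ("answer", 42);

    // `.0` and `.1` are not field identifiers, yet the dot handles them.
    let label = pair.0; // &str
    let value = pair.1; // i32

    println!("{} = {}", label, value);
}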
For me, in the case of foo.await, I see that await is a keyword and so not a field identifier, which prompts me to think of "magic" and relate it to tuple indexing. Obviously, futures are not tuples, and awaiting is a totally different beast than indexing. I don't know much about the implementation, the macros, executors, polling, etc., but if we're just talking about the syntax, then my take is this: dot-await will be weird the way tuple indexing is weird, but it might just become "normal" the way tuple indexing is normal nowadays. If we go with the notion of namespaces, then tuple indexing introduced a new namespace of indices, while dot-await will introduce a new namespace of keywords, and these are unified under the dot operator.
As an aside, something else that I find amusing is that the other operator that deals with namespaces is ::, which is just a bunch of dots.
I'm sure some things in my thinking might be flawed or shallow, but that's the conclusion I had drawn for myself, and others may draw a different one. I would like to thank the lang team for working through this, and thank the community for sharing! I have learned a ton reading through all these discussions.
Some people have the misconception that the dot operator is pure; while purity is encouraged, it is not guaranteed. As @yufengwng pointed out, the dot operator does some magic, and this time the magic comes from Deref.
While the examples below are contrived and an extreme anti-pattern, they do prove the point that we can't really trust the dot unless we know something more about the types: the first hides an endless blocking loop behind what looks like an ordinary method call, and the second shows a plain field access firing side effects.
use std::ops::Deref;

struct Foo;
struct Bar;

impl Foo {
    fn do_work(&self) { unreachable!() }
}

impl Deref for Bar {
    type Target = Foo;
    fn deref(&self) -> &Foo {
        // Deref can run arbitrary code; this one never even returns.
        loop { do_some_blocking_io(); }
    }
}

// Stand-in for some blocking call.
fn do_some_blocking_io() {}

fn main() {
    // Looks like an innocent method call, but auto-deref calls
    // `Bar::deref` first, which loops forever.
    Bar.do_work();
    unreachable!();
}
use std::ops::Deref;

struct Foo {
    value: i32,
}

struct Bar {
    foo: Foo,
}

impl Deref for Bar {
    type Target = Foo;
    fn deref(&self) -> &Self::Target {
        println!("Firing the nukes!");
        &self.foo
    }
}

fn main() {
    let bar = Bar { foo: Foo { value: 10 } };

    // Not a method call, but it still has side effects: `Bar` has no
    // `value` field, so the field access auto-derefs through `Bar::deref`.
    bar.value;
}