A lot of conversation has happened since my previous comment, so I would like to add a clarification and follow-up. I have seen some people quote what I said as an argument in favor of the dot-postfix, so to be clear: I prefer not having the dot-postfix. I like postfix in general, just not with the dot syntax; I think a postfix macro or a space-delimited postfix keyword would work. I really have one primary concern with the dot-postfix, and if someone can address that concern, my reservations will vanish.
I am concerned about a developer who joins a company using Rust. This developer has to spin up on Rust quickly. They know the basics, but they don’t know everything. They get put on a PR, or they need to read through the codebase so that they can understand it and start contributing. They see x.await. Because it looks like a field access, they may assume that it is. That gap between expectation and reality can be costly.
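To make that concrete, here is a minimal sketch of the two readings colliding. The names Response, fetch, and handler are made up for illustration, and running it assumes the futures crate is available for a simple executor; the point is only that the two postfix expressions have exactly the same shape, yet one suspends the task and the other just reads a field.

```rust
use futures::executor::block_on; // assumes the `futures` crate as a dependency

struct Response {
    status: u16, // an ordinary field
}

async fn fetch() -> Response {
    // Stand-in for real async work (a network call, a timer, etc.).
    Response { status: 200 }
}

async fn handler() -> u16 {
    let pending = fetch();        // a future; nothing has run yet
    let response = pending.await; // looks like a field access, but suspends until the future completes
    response.status               // an actual field access
}

fn main() {
    println!("{}", block_on(handler()));
}
```

Nothing on the line containing pending.await tells the reader that it behaves differently from response.status; that is exactly the gap a newcomer has to already know about.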
In my experience, this is how most people learn to code. They do it iteratively. They do it through exposure. They don’t sit and read through the entire manual before they start. They pick it up by reading through other people’s code at work and by working on their own side projects. If we have gotchas and snowflake behaviors, we will drastically lengthen the time it takes for someone to feel proficient and comfortable contributing. And if it takes longer for someone to spin up on the language, companies won’t adopt it, because they can’t afford that extra time.
That type of thing really turned me off about Scala. In general, I like a lot of it, but the language has so many (what feel like) snowflake syntax behaviors that some codebases are illegible even to an experienced Scala developer. Sure, you could say that this syntax is not a snowflake behavior because it’s a keyword, so <dot><keyword> is fundamentally different and logically consistent. But that’s really just being pedantic and begging the question. It may be true if you already know about it, but that’s not the important part. If you already know it, then you already know it and you won’t be confused by it. If you don’t know it, then reading through someone’s code will either lead you to an erroneous conclusion about the behavior of the code or confuse you further and hinder your ability to contribute. If you don’t already know that Rust distinguishes between those two behaviorally, it looks like snowflake behavior.
What exacerbates this problem is that it feels like Rust goes through a lot of effort to be consistent elsewhere and to avoid the appearance of snowflake/confusing behavior. When I first came to Rust from Scala, I was annoyed that you could only implement a trait for a type in the crate that defined the trait or in the crate that defined the type (sketched below). I thought that was annoying because it differed from Scala, where you can implement something wherever you want. But then I consider the amount of time I have spent in Scala trying to track down which implicit type class happened to be in scope from all of the import x._ calls. I have probably wasted hundreds of hours on that problem. Scala’s approach is technically “consistent,” but it doesn’t appear or feel consistent, because the behavior is so obtuse and non-obvious to someone reading the code who does not already know it. Scala was really good at allowing code that was clear and obvious to the person who wrote it, but not to anyone else, or even to the author two months later. I’m concerned that this syntax will nudge Rust in the same direction.
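For anyone who has not run into the Rust restriction I mentioned (commonly called the orphan rule), here is a rough sketch of what it allows and forbids; Wrapper is a made-up type used purely for illustration.

```rust
use std::fmt;

// A type defined locally in this crate.
struct Wrapper(Vec<u8>);

// Allowed: a foreign trait (std's Display) implemented for a local type.
impl fmt::Display for Wrapper {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} bytes", self.0.len())
    }
}

// Not allowed: a foreign trait implemented for a foreign type. Neither
// Display nor Vec<u8> is defined in this crate, so the compiler rejects it:
//
// impl fmt::Display for Vec<u8> { /* ... */ }

fn main() {
    println!("{}", Wrapper(vec![1, 2, 3]));
}
```

The practical upside, compared to chasing Scala implicits, is that an impl can only live in a small, predictable set of crates, so a reader never has to hunt through wildcard imports to find out where a behavior came from.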
If you only know the basics of Scala, you wouldn’t even know where to begin to decipher this: https://github.com/scalaz/scalaz/blob/series/7.3.x/core/src/main/scala/scalaz/Kleisli.scala
But Scala is this way because people wanted to save on a few characters here and there to make it more “elegant.”
IDE syntax highlighting is not the answer, because the language exists outside of any particular IDE, and I don’t think it’s safe to make assumptions about the IDEs people use any more than it’s safe to make assumptions about the resolution of their monitor, the modernity of their browser, etc. In the Scala example above, it was assumed that there wouldn’t be a problem because you can just rely on type hints from your IDE. It is dangerous to assume the context of someone else’s work environment, and I think it is best not to tie the language to any particular work environment or to punt the problem to the UI. That is contrary to every usability philosophy I have ever heard.
And sure, you can say that it won’t be a problem because it’s a keyword, so it could never be a field access, but that again assumes we’re talking about the person writing the code. If someone is writing .await, then they already know .await, and they’re not the audience we should care about. We need to care about the audience that encounters .await and doesn’t already know it. Will it be obvious to them without having read the instruction manual? Saying that someone won’t cut their fingers off on a table saw because the instruction manual says “don’t cut your fingers off with this blade as it is sharp” feels very naive. Similarly, Tesla says that you shouldn’t take your eyes off the road when using Autopilot, but how many people do anyway, because it’s available and easy? Yes, the saw has warnings, but somehow people severely injure themselves every single day.
Returning to Scala: it introduced flexibility much like this at the request of expert users who wanted their code to be more elegant, less verbose, and so on. It introduced so much power that it’s incredibly easy to write code that only you and your friends can read. You can bastardize it completely if you so choose. The common argument I’ve seen is, “Well, that’s their problem. My team just won’t write code that way.” But this just kicks the problem down the road and accumulates tech debt. Any code written by people who were implicitly told “the language will let you do this, but it’s not my problem if you do” will eventually end up maintained by someone else who wouldn’t have written it that way. Obviously, you can’t completely avoid that problem, because users will be users, but do you want to make it easy for them to write messy code?
From a theoretical perspective, you can claim that something like a postfix macro is “less logically consistent” because this behavior can’t actually be implemented as a macro, so the syntax is lying to the user, whereas <dot><keyword> is a language feature distinct from <dot><identifier> and thus perfectly consistent. From a strictly mathematical perspective, you’re not wrong. But being technically correct does not make it usable. What matters is what you implicitly signal to the user. Case in point: the Google G logo looks like a circle, but it is not a circle. It is mathematically imperfect because mathematical perfection looks bad. The more important thing is that <dot><keyword> is easily mistaken for <dot><identifier>, and that will cause issues. Any solution will cause issues, yes, but I believe this one will cause far more misunderstandings than the alternatives. For someone reading the code, you need to make sure your logically pure syntax does not lead to practical mistakes.