Raising the bar for introducing new syntax


Thanks! Even just some small feedback to the community about which discussions would be more productive would help, I believe.

It could allow for much more focused back-and-forth, I think.


^^’ Yes. “They read all the comments” sounded better than “at least one or more language team members involved with the particular feature read(s) the comments.” In particular, I am happy with how the async/await story is handled. A lot of feedback is given and a lot of constructive discussion is going on in these RFCs.


[Moderator note: Could any further discussion of named parameters please go into a separate thread? The digression is making this thread harder to follow.]


Aaron is my boss and we have 1:1s every week. The number 1 thing we spend time talking about is community feedback & how we need to adjust our goals, strategies, and designs for the projects I’m working on in accordance with that feedback. Participants in internals and the RFC process have a huge influence over the direction of the project.

But also keep in mind that writing posts takes time and effort, and the number of concurrent discussions (and the length of each discussion) has only grown over time. It’s frequently the case that I incorporate feedback from users into the next iteration of a proposal without having time to respond specifically to many (or sometimes even any) particular user comments on that issue.


I think @mgeisler is referring to things like this being introduced to the language when it has no benefit to most of the people using Rust on a regular basis: https://github.com/rust-lang/rust/pull/47947


Wouldn’t it be nice to be able to specify code transforms that would let you upgrade your existing usage to the new usage? Like structural search-and-replace in ReSharper. Then the RLS could say: this code needs upgrading; press Alt-Enter and I will fix it up for you.


I agree with the general sentiment of the OP, although I don’t have any specific criticism of any specific feature proposal.

Language concision is a common good. A small language benefits everyone. Yet preserving concision is a tragedy-of-the-commons situation. The responsibility for keeping the language small is diffuse, whereas the proximate benefit to each individual of shepherding their favored proposal into the language is acute.

Rust’s RFC process is local hill climbing. From any given state, it can find a better language, but it also risks finding a trap: no guarantee of global optimality. From a given state of the language it’s possible to look at almost all of these proposals and agree that they’re locally beneficial to a language without the features. Yet, if you look down the road to a future where all of these proposals are approved, you might have preferred a future where none of them were.

There’s no formal way to prevent this kind of problem. The best you can do is add a “size” term to your cost function to regularize in favor of a smaller language. So far, I think the Rust leads have lavished the language with extravagantly good judgment and discipline, and I trust them a lot. But the worry is always there.

Might it help to make small size an explicit goal of the language?

Edit: just took a look at https://github.com/rust-lang/rfcs/pull/1925. Doesn’t really bother me. It’s not a new feature and it doesn’t increase cognitive complexity. There is a small chance it makes the parse more difficult.


From any given state, it can find a better language, but it also risks finding a trap: no guarantee of global optimality.

It’s worse than that: given the multidimensionality of the design space of programming languages (e.g. memory/type safety, ergonomics, runtime performance, compile-time performance), there is a very strong likelihood that there is no actual global optimum to be found at all.


Yes, I like almost all of the design of the language – I think we all do, since we’re here discussing it so passionately :blush:

I had forgotten about that syntax and I would probably have suggested leaving it out.

However, I would like to say thanks to @withoutboats and @yarrow for pointing out in the GitHub thread that the discussion on the RFC in question was reopened after all the downvotes were noticed. No further arguments against the RFC were presented, so it eventually got accepted.

That’s really encouraging to hear, kudos to the language team and others for trying so hard to listen to everybody :+1:

Maybe it’s not there as an explicit goal since it’s difficult to quantify? The other goals (such as memory safety and fearless concurrency) are technical goals and you can better understand when you’re violating them.


I agree with the desirability of a consistent language that does not duplicate features across the board, and that there should not be a myriad of ways to do the same thing. I think that, relative to expressive power, a good language is a small language. That is, the more expressive power, especially to program the type system, you can squeeze out of a restricted set of features, the better – it’s a sort of having your :cake: and eating it too proposition.

For example, const fns and GATs are massive features in size, but they are also massive enablers. The same would be true of, say, dependent types. Here, going without them because of the desire for a “small” language is not good for everyone.

So I would not agree that a good language should be a small language in absolute terms, and I don’t want that to be an explicit or implicit goal of the language.

As a general note, I’d like to deprecate the use of “small” + “language” because it does not communicate well what is really meant. Also, I think it will be very hard to find a formulation of policy which sufficiently captures how we’ve balanced the trade-offs in the past; so I think it might be for the best to continue to evaluate things on a case-by-case basis of the motivation, and the proposed solutions.


They’re not, though. They may take a lot of design and implementation work, in the same sense as the quote “If I had more time, I would have written a shorter letter,” but they’re not large in terms of “what does it do?”

const fns run code at compile time. GATs add type parameters to associated types. That’s not large.
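To make that concrete, here is a minimal sketch of both features (the `Collection` trait and all names are made up for illustration; GATs were still unstable when this thread was written):

```rust
// A GAT is just an associated type that takes its own generic
// parameters -- here a lifetime. (Hypothetical trait, for illustration.)
trait Collection {
    type Iter<'a> where Self: 'a;
    fn iter(&self) -> Self::Iter<'_>;
}

// A const fn is an ordinary function that the compiler can also
// evaluate at compile time.
const fn square(x: u32) -> u32 {
    x * x
}

// Evaluated by the compiler, not at runtime.
const N: u32 = square(4);

fn main() {
    assert_eq!(N, 16);
}
```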


“Large” is always a subjective quality: The features are large compared to what reference point? For example, they are quite large compared to most features being merged right now. I’m fairly sure that implementing GATs would take a lot more code, discussion, consensus forming, careful thought and debugging to get right than something like simple slice patterns, or the ability to return a Result from main().

Conceptually of course changes like const fn and GATs are easy enough to grasp, so we might say that they are conceptually simple. But that is a poor measure of how large a feature really is IMO, exactly because it only accounts for the concept itself and no other factors.

To give a somewhat extreme analogy: the core principle of a nuclear fusion reaction is quite easy to grasp; I could explain it to a 5-year-old. But to actually get a working, commercially viable fusion reactor built and operational is a much, much harder challenge, to the point that exactly 0 humans have achieved this to date.


The question is why are we even talking about “large” vs “small”? The reason people want a “small” language is not to avoid a lot of design work- indeed, the call for a small language is invariably used to argue for more design work.

This is why I brought up the quote about a “shorter letter.” Given a set of design goals, a small language that achieves them is much harder to design than a large one.

So if the people arguing for a small language aren’t arguing for less compiler code, less discussion, less consensus forming, less careful thought, or less debugging, what are they arguing for? It should be clear at this point that they’re (we’re) arguing specifically for “easy to grasp” and “conceptually simple,” in spite of the fact that this requires more discussion, more thought, and more debugging.

In fact, slice patterns are a perfect illustration. They have been unstable for literally years, because nobody did the surprisingly large amount of design and implementation work required to make them and their implementation sound. In fact, what we got in 1.26 is only a small piece of that work, which is still ongoing: moving out of slice patterns, patterns for “the rest of the slice,” etc.
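For reference, a small sketch of the patterns in question (the helper functions are made up; the basic form shipped in 1.26, while the `rest @ ..` form is exactly the later, ongoing work mentioned above):

```rust
// Sum the first two elements using a basic slice pattern.
fn sum_first_two(xs: &[i32]) -> i32 {
    match xs {
        [a, b, ..] => a + b,
        _ => 0,
    }
}

// Split off the head using a "rest of the slice" binding (`rest @ ..`),
// which stabilized later than the basic patterns.
fn split_head(xs: &[i32]) -> Option<(i32, &[i32])> {
    match xs {
        [head, rest @ ..] => Some((*head, rest)),
        [] => None,
    }
}

fn main() {
    let v = [1, 2, 3, 4];
    assert_eq!(sum_first_two(&v), 3);
    assert_eq!(split_head(&v), Some((1, &v[1..])));
}
```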


A good feature can be arbitrarily large and costly, as long as it provides compensating value.

Biasing in favor of small size (e.g., adding a size term to the cost function) is just a way to formalize the notion that, all else being equal, a language with fewer concepts and less syntax is to be preferred. We can argue about how strongly we bias, but arguments about hyperparameters are more philosophical than scientific. An example of a tastefully done language that is more strongly biased towards smallness is Go. But, I’m hanging out in the Rust forums for a reason. I like the tradeoffs that are being made here a lot more.

One immediate consequence of biasing towards smallness is that redundant concepts are heavily penalized. If the language already has one way to express a similar computation, then adding another way gives little additional value. Other languages often introduce such additions in order to cover specific scenarios related to legacy code. The ability to proxy an object attribute access behind a function in C#, as well as C#'s overlapping concepts of extension methods, abstract classes, and interfaces, are examples of features that are too big for their value. I hope that Rust will never, ever add a language feature whose primary value is that it can be retrofitted to some legacy code. Those kinds of tradeoffs are what eventually turn a language into the monstrosity of modern C++. (And C# is getting there).

I’ve been trying to avoid discussing specific feature proposals of Rust because criticizing or endorsing specific features tends to bring up emotions and feelings. But, it’s hard to avoid discussing value functions without applying them to examples, so I’ll take a few here.

One reasonable way to evaluate ‘smallness’ is to compare the benefits of a concept with other concepts that offer similar functionality. C#'s Generics + Abstract Classes + Interfaces + Extension Methods + Classes + Multiple Dispatch + Annotations cover a lot of the same ground as Rust’s generic traits. How many barrels of ink have to be spilled to explain generic Traits vs all of those other features? How much time and memorization must be spent to internalize the former concept vs the latter concepts? There are ways to make this comparison.

Using the same comparison criterion, consider the async-await proposal vs generalized generators. I think the concept of generators is about as easy to document and internalize as the concept of async-await. But, from a functionality point of view, generators are a total generalization of async-await. If we had generators, async-await could just be a library that uses generators under the hood. For this reason I wish async-await hadn’t gotten into the language; I would have preferred generators to async-await. (But, I hope that doesn’t side-track this thread about language size).
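One way to see the “library on top” claim is that both generators and async fns compile down to resumable state machines. Here is a hand-rolled sketch of such a machine (the `Countdown` type and `resume` method are made-up names, not a real API):

```rust
// A hand-rolled "generator": an enum state machine that produces a
// value each time it is resumed. Generators and async fns both
// desugar to something of this shape, which is why one could be
// built on top of the other.
enum Countdown {
    Running(u32),
    Done,
}

impl Countdown {
    // Each call resumes the machine: Some(v) plays the role of
    // `yield v` (or a pending `await`); None plays the role of
    // returning.
    fn resume(&mut self) -> Option<u32> {
        match *self {
            Countdown::Running(0) => {
                *self = Countdown::Done;
                None
            }
            Countdown::Running(n) => {
                *self = Countdown::Running(n - 1);
                Some(n)
            }
            Countdown::Done => None,
        }
    }
}

fn main() {
    let mut g = Countdown::Running(3);
    assert_eq!(g.resume(), Some(3));
    assert_eq!(g.resume(), Some(2));
    assert_eq!(g.resume(), Some(1));
    assert_eq!(g.resume(), None);
}
```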

On the other hand, some features don’t seem to have any alternatives. One method of evaluation is to think from the point of view of a language student. How much effort does a student of the language require to integrate a new concept into his repertoire? By this criterion, the GAT proposal is a fantastic proposal. It integrates into a concept that already exists, and only expands it by re-applying a concept (type construction) that is already pervasively used throughout the language. It will take about one paragraph of extra text in the section on Associated Types to document this feature, and most readers won’t actually have to read any documentation at all. They will just see a type parameter in an associated type and immediately understand what is happening there by analogy to all the other places where type constructors appear in the code. As soon as that feature merges in, it will feel like it’s always been there.

One last thought. A mathematically minded friend of mine once taught me a trick. He said that if you can’t solve a problem, try solving a special case of the problem. If that doesn’t work, try solving a more general case of the problem. That’s some genius advice. I think it applies to programming language design. It often happens that general solutions are easier to grok than special case solutions. Finding the best level of generality is searching for smallness by another name.


I like what you’re saying but I have a different take on how it applies to async/await and generators. :slight_smile: Because you can easily run into scenarios where you want to use both simultaneously (i.e. Streams), the two features really cry out for different “channels” of the kind that “scoped continuations” or “effects” provide. So if we were going to have just a single language feature for both, I would rather it be something deeper.

However, this brings up another point about language size. @Centril has mentioned the idea of “a size/power ratio” a couple times, which I think is actually somewhat misleading, precisely because of this example. We could have a smaller language (in number of features) by providing just “scoped continuations” and turning async/await and generators into library features. We could also have a smaller language (again in number of features) by providing just “goto” and turning all other control flow into library features.

But this goes too far: now the individual feature in the language is no longer “easy to grasp” or “conceptually simple!” It is too powerful, too esoteric. Separating goto into if+loop+while+for, or separating “scoped continuations” into await+yield, still makes for a small language (the features are conceptually simple, but also orthogonal and easy to compose) while providing much more value than goto alone ever could (simpler tooling, easier reading, less possibility of running into one-off macro-based control structures).


Great point about why we care about size and also the example of goto. I should stop thinking about size and start talking about simplicity.

Would you agree that the programming language should be trying to minimize the amount of mental effort that’s needed for a programmer to interpret source code?

Languages with too many concepts have programs that are hard to understand because you have to remember too many concepts, and there are too many ways to accomplish the same ends, so that you don’t build pattern recognition. Languages with too-powerful concepts result in programs that are hard to understand because those powerful concepts are potentially doing too much.


Yes, I think required mental effort for understanding a program is a large part of it. I would even apply the same description of “too many ways to accomplish the same ends” to both problematic cases: when you have too many features that overlap, and also when you have too few features and everyone builds their own abstractions slightly differently.

In addition to goto vs if+loop+while+for, and continuations vs async+generators, I would add the example of classes in Lua (and to a lesser extent Javascript). The language has a very minimal implementation (actually quite well-done and an interesting read), but to get there it tries to do too much with a single feature- metatables. So now everyone has to write their own class system, or use one from a library, and you’re never sure quite how this codebase works, and none of the class systems are compatible with each other.

I would also add the example of Lisp macros. The culture in most Lisps is to basically build your own language out of macros, for every individual project. This is quite powerful, but also makes programs rather incomprehensible. For example, this recent thread on a book about Racket (a Lisp with unusually heavy use of macros) makes this exact complaint- nothing in the book is quite compatible with anything outside the book.


OK; then I can similarly go on to declare a bunch of possible feature proposals “small” because they are conceptually simple to explain pre-rigorously.

  • Total functions must always terminate.
  • HKTs allow quantification over type operators.
  • (Value-)dependent types allow types to depend on values.
    • Pi-types allow function argument and return types to depend on the (runtime) values of previous arguments.
    • Sigma-types allow the type of the second half of a pair to depend on the value of the first half.
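The pi/sigma bullets can be made concrete in a dependently typed language. A small Lean 4 sketch (the definitions are illustrative, not drawn from any real codebase):

```lean
-- Pi-type: the return type `Fin (n + 1)` depends on the value of
-- the argument `n`.
def zeroOf : (n : Nat) → Fin (n + 1) :=
  fun _ => 0

-- Sigma-type: the type of the second component (`Fin (n + 1)`)
-- depends on the value of the first component (`n`).
def pairExample : (n : Nat) × Fin (n + 1) :=
  ⟨2, 1⟩
```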

I must say: it is not at all clear to me what “we” mean by “small” and “large,” and I am not getting any wiser as the thread moves on; various things I assumed were large features are now simple => small. Particularly, I think the evaluation method of what is simple/small and large/complex is too arbitrary. If you want to (do you?) formalize this into a language team policy, then it must be much easier to tell what is what. I think this will be too difficult, so I don’t want us to legislate here.

Personally I believe that if a feature adds more compensating value to the language than it costs and is pre-rigorously conceptually simple to explain, it should be added. A language with fewer concepts is not to be preferred in my view. Of course, once you are adding more features which duplicate existing features and do not advance generality in significant ways, the cost is still there but the compensating value is just much smaller. I think this flows naturally.

I wholeheartedly agree with you on all of these specifics. And I do think specifics are important because otherwise we will just be talking past each other. Is it right to see your view more as against redundancy than it being OK to limit expressive powers (because more expressive power usually requires more concepts… 1ML notwithstanding…)?

I do think personally that syntactic sugar is good to add in some cases, provided that it really is sugar and desugars into a simpler core language where soundness is easy to formally reason about.

It is conceptually simple to explain yes. However, all the things you can do with GATs that were impossible before will take some doing to explain. This is necessary so that people can turn new features into useful practice. So I think you are overstating the simplicity of GATs.

This feels like somewhat of a strawman. My points were mostly about the type system and not about control flow. How would “goto” work and how would it preserve soundness (which I take as an absolute requirement in all cases…)?

I do agree with this. I also think it is very uncontroversial.

What I don’t agree with is that this implies a language with a small number of concepts. Syntactic sugar, and general higher order constructs, which both increase the number of concepts, can help to massively minimize the boilerplate and plumbing away from code. I believe this to be essential to understanding the intent of a particular piece of code quickly and at a glance.

While we can reason about this in the abstract, is there any empirical evidence showing this? By analogy with natural languages, Swedish is a simpler language (grammatically at least wrt. conjugation of verbs) than French is – yet people learn French just fine and you don’t need to learn all of either language to converse in it. Likewise, I don’t think it is necessary for most rustaceans to learn what #[repr(C)] does until you are in that context of needing to talk to C. When you are, you can lazily learn that then and there.
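As a concrete instance of that lazy learning, here is a minimal sketch of what `#[repr(C)]` buys you when you do reach that context (the `Point` struct is a made-up example):

```rust
use std::mem::size_of;

// #[repr(C)] fixes the layout to the C ABI's rules (fields in
// declaration order, with C-compatible padding and alignment),
// which is what lets the type cross an FFI boundary safely.
#[repr(C)]
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    // Two f64 fields laid out back to back: 16 bytes total.
    assert_eq!(size_of::<Point>(), 16);
}
```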

What constitutes too powerful? Are dependent types too powerful? – More specifics would be good here so that I can reason about your criteria.

This is a great point; To actually talk the same language, we need a base set of primitives involving some syntactic sugar, such that you can learn common constructs once, and not have to learn them a bit differently for each project. I think most time spent learning a new language is mostly about learning all the important libraries in the ecosystem (rayon, rocket.rs, futures-rs, tokio, etc.) that make the language tick.


Sure, but this is about making the language small. That is, relatively few features that are collectively easy to grasp, that interact and compose well, and that provide a common language to talk about the language’s target domain.

That is, the guideline should not be whether a feature adds more value than its costs; that makes it all too easy to ratchet the language larger and larger each time we find a new shiny bauble. The guideline should be “perfection is attained, not when there is nothing more to add, but when there is nothing more to take away.” We should be okay with limiting expressive power when that power makes comprehension (by humans or the compiler) more difficult, e.g. HKT and inference, dependent types and type checking, lisp macros and control flow.

This is, of course, still relatively vague, but that’s okay. I think it’s fairly obvious in a lot of cases (goto, lisp macros), less obvious in others (HKTs vs GATs, async/await vs effects), but it’s a way of thinking that we should strive for.


They are arguing for adding fewer arbitrary features to the language, so that it doesn’t become a huge, impossible-to-understand Frankenstein of everyone’s favorite (other) languages.