Raising the bar for introducing new syntax


Now that I understand what this practically entails (re. the type system), I do not share this goal.

Well, specifically wrt. HKTs and dependent types (the latter of which we are already adding in a limited form with const generics…), I do think they interact and compose well.

At least now I understand better what the practical implications of what you are saying are, and I don’t particularly like them in the case of the type system.

HKTs are not a “new shiny bauble” to me (the same argument can be applied to many things we already have…); they are essential to my ability to abstract code in Haskell and other similar languages, and I don’t think they are particularly hard to comprehend. I did provide explanations that I find just as simple as those you used to motivate why const fns and GATs are small. Neither are dependent types hard; in fact, they allow you to model your domain more precisely (though you do lose some decidability properties with them…).

Again, this is much too vague to reason about. Should we just stop adding features, period? What features are not “arbitrary” and are acceptable to make it, in your view?


Fair enough; let me elaborate. I just discussed this with a programmer friend of mine today, so I have some fresh specific thoughts about the topic. I think Rust would benefit from the following, much more conservative approach:

  1. Instead of focusing on introducing completely new elements, shift the discussion towards fleshing out and stabilizing already-existing but unstable and/or half-baked features. This is definitely happening to some extent already, but I think people should put their efforts into discussing existing proposals more thoroughly, and designing their not-yet-ready parts in a more fine-grained manner, instead of coming up with completely new proposals. In other words, let’s finish the already-existing parts of the language properly before growing it at all.
  2. Related to the first point, I would argue that a feature freeze for maybe a couple of years would have another benefit. It would be possible to use experimental and research languages to try out new suggestions (not unlike the way that NLLs were first implemented in a separate repo outside rustc, IIRC). Then, once it’s clear which features are substantial and desirable, and which ones are just marginally useful or pure noise, the actually-useful ones could be introduced to Rust. This would make things nicer in the face of Rust’s stability guarantees too: it would be far easier to decide on the design of a particular feature, as fewer assumptions and predictions would need to be made in advance.

To answer your first question: I consider a feature “arbitrary” if it:

– merely serves minor convenience in a narrow use case; or it

– is only introduced to mimic some behavior of another language, in order to make programmers coming from that particular language feel more comfortable; or it

– has little consideration behind it apart from its immediate “surface effects”, ie. the designer/proposal author failed to consider its interaction with every part of the language, or failed to acknowledge a nontrivial drawback of the feature.


As you say…

and I agree with this sentiment – I think it is happening more than to some extent, given the impl period and such; but different people are interested in different things, and we can take advantage of this rather than push it away.

There are a bunch of type-system-level features that are well understood and that Rust lacks. A feature freeze for a few years doesn’t seem necessary to me. The pipeline of RFCs => Nightly => Stabilization already takes a great deal of time (particularly for a large feature), so imposing additional arbitrary limits, where desirable new features automatically get postponed, seems to just cause churn. What’s more, a feature freeze could worsen interactions with the new features that are being worked on right now: the RFC process for new features can effect changes in features that are RFC-merged but not yet stabilized.

Well, we promised not to make breaking changes, to a very large extent; but wrt. new features that don’t break things, we made no such guarantee.

I agree with those criteria in the abstract, but it is hard to see what they translate into wrt. accepting or rejecting specific proposals. I’d like concrete examples, especially wrt. possible new features you would not view as arbitrary.


They are hardly essential. Many languages lack HKTs and yet people somehow manage to “abstract their code.” This is what I’m getting at: you want HKTs, when we already have plenty of tools, some filling precisely the same need as HKTs and others offering alternative mechanisms.

As @H2CO3 puts it, HKTs would only serve to mimic the style you are used to in Haskell. But why should rust mimic Haskell when you can already just use Haskell? It’s possible that Rust’s domain might need to solve the same or similar problems as HKTs do in Haskell, but that is hardly sufficient reason to go for HKTs when a vastly smaller change (GATs) can solve Rust’s particular subset of those problems.

The same goes for const generics: they’re a much smaller change than dependent types, and they solve the problems that Rust actually has.
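To make the const generics point concrete, here is a minimal sketch (the `dot` function is an illustrative example, not from any proposal) of how array lengths in types give a restricted, fully compile-time-checked form of value dependency:

```rust
// Const generics let a value (the array length N) appear in a type,
// so mismatched lengths are rejected at compile time.
fn dot<const N: usize>(a: [f64; N], b: [f64; N]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let d = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]);
    assert_eq!(d, 32.0);
    // dot([1.0], [1.0, 2.0]); // rejected: the lengths differ in the type
    println!("{}", d);
}
```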


I’m afraid that if you still find those criteria too abstract, then the only way your question can be answered is by discussion on a case-by-case basis. When I express my opposition to a proposal, I always try to give a very clear reasoning of why I feel it should not be implemented. You have seen plenty of those arguments from my part in the past weeks. To be honest, I can’t find a level more general than the issues of each individual proposal yet still more specific than my three criteria above.

As an example of what I think would be a good addition: since Rust already has plenty of expressive power and syntactic sugar, I think two much more useful, substantial, and orthogonal families of features would be:

  1. Giving more power and in some cases, a better-designed API to procedural macros. (E.g. returning Result instead of panicking.) However, my understanding is that this is already happening as part of the Macros 2.0 stabilization process.
  2. Type system improvements. For example, there has been a proposal that suggested F#-like units of measure be built into the type system. While I disagree that such a specific thing should be baked into the language, I would argue that it’s instead the type system that could/should be augmented with a general mechanism that could ultimately be used for implementing an idiomatic units-of-measure library. (If I recall correctly, one specific issue with that was custom unification rules, which should be able to express commutativity at the type level for dimensional analysis.)
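To illustrate the units-of-measure point, here is a hypothetical sketch (the `Qty`, `Meters`, and `Seconds` names are made up for illustration, not from any real crate) of what today’s type system already handles, namely keeping distinct units from being mixed. What it cannot express, and what that proposal needed custom unification rules for, is e.g. that a product type `m·s` should unify with `s·m`:

```rust
use std::marker::PhantomData;
use std::ops::Add;

// Zero-sized marker types carry the unit in the type, at no runtime cost.
#[allow(dead_code)]
struct Meters;
#[allow(dead_code)]
struct Seconds;

struct Qty<U>(f64, PhantomData<U>);

impl<U> Qty<U> {
    fn new(v: f64) -> Self {
        Qty(v, PhantomData)
    }
}

// Addition is only defined between quantities of the *same* unit.
impl<U> Add for Qty<U> {
    type Output = Qty<U>;
    fn add(self, rhs: Self) -> Self {
        Qty(self.0 + rhs.0, PhantomData)
    }
}

fn main() {
    let d = Qty::<Meters>::new(3.0) + Qty::<Meters>::new(4.0);
    assert_eq!(d.0, 7.0);
    // Qty::<Meters>::new(1.0) + Qty::<Seconds>::new(1.0); // type error
}
```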


I did not make a universal claim about everyone; they are essential for me (and many people who use Haskell, Idris, Agda, Scala, …). Many languages lack many things, and people still get by; HKTs, in my view, allow the user to do better than that.

Per Niko’s posts, they are equally expressive, yes, but it seems to me that GATs are much less ergonomic (and I’d say less readable and maintainable) than HKTs are. I’m not convinced that Rust’s particular subset of those problems are not well aided by HKTs, but this is a larger discussion for a later time. Once we have GATs, let’s see how it plays out.
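For readers following along, the canonical motivating case for GATs is the “lending iterator,” where each item borrows from the iterator itself; this is the kind of abstraction the HKT-vs-GAT comparison is about. A minimal sketch (the `LendingIterator` and `Windows` names here are illustrative, not from the standard library):

```rust
// A "lending" iterator: the item type is generic over the lifetime of
// the &mut self borrow, which a plain associated type cannot express.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

// Overlapping windows over a slice; each window borrows from the iterator.
struct Windows<'s> {
    data: &'s [i32],
    pos: usize,
    size: usize,
}

impl<'s> LendingIterator for Windows<'s> {
    type Item<'a>
        = &'a [i32]
    where
        Self: 'a;

    fn next<'a>(&'a mut self) -> Option<&'a [i32]> {
        if self.pos + self.size > self.data.len() {
            return None;
        }
        let w = &self.data[self.pos..self.pos + self.size];
        self.pos += 1;
        Some(w)
    }
}

fn main() {
    let data = [1, 2, 3, 4];
    let mut it = Windows { data: &data, pos: 0, size: 2 };
    while let Some(w) = it.next() {
        println!("{:?}", w);
    }
}
```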

(Aside: Const generics allow you to encode pi-types (so dependent types), but just for compile time values ^,-)

The anthropomorphization of Rust strikes me as odd; who decided what problems “Rust actually has”? Personally, I don’t claim to speak for everyone. Dependent types allow you to model general properties of your problem domain; I don’t see how this is specific to Haskell, Idris, or Rust; it seems to me equally valuable in Rust as it is in Haskell.

I do still think it is too abstract; and it suggests to me that trying to nail down a general policy won’t go well (and that we should therefore stick to a case by case analysis). My take-away from this thread is that the RFC process works well as-is, and artificial limits wouldn’t serve us well.


PS: I’m taking some time off from this thread now to do some other stuff wrt. editions; I’ll get back to your replies in some days.


Pardon the intrusion, but I feel that ‘arbitrary’ is a pretty clear and concrete word in this context. An arbitrary feature is one that makes no practical difference: it arguably changes neither the compiled artifacts nor people’s ability to create them.

So these are the things that would need to be argued to determine if a change is arbitrary:

  1. Can this change be expected to make some compiled code faster, more stable, or even possible?
  2. Can this change be expected to speed up the compilation process?
  3. Can this change be expected to encapsulate some invariant that a compiler or library user could use to speed up or stabilize their own code?
  4. Can it be demonstrated that users are actually confused about the semantics of some existing feature that this will replace in a way that we can expect to avoid?

In my opinion, the Rust team has done a pretty phenomenal job of introducing relevant changes to the language and not fussing too much over the arbitrary ones. (I’ve personally even resisted some changes that I now have a lot of respect for.)


The guideline should be “perfection is attained, not when there is nothing more to add, but when there is nothing more to take away.”

I largely agree with this sentiment. This is also why I am particularly opposed to changes like impl Trait in argument position.

We now have 3 syntaxes to do exactly the same thing:

fn my_func(x: impl MyTrait)

fn my_func<T: MyTrait>(x: T)

fn my_func<T>(x: T)
    where T: MyTrait

This feels incredibly redundant. It is useless complexity in the language. Beginners have to learn multiple ways of doing the same thing.

I personally find the where syntax to be the best. It allows expressing trait bounds of arbitrary complexity without cluttering up the function signature. They are not sprinkled all over the argument list of the function, but rather clearly grouped and listed afterwards. It also has the highest level of flexibility; both of the other syntaxes have practical limitations that the where syntax does not.
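To illustrate the grouping argument, here is a sketch with a deliberately busy set of bounds (the `index_by` function is an invented example); the signature itself stays short while every constraint is listed together below it:

```rust
use std::collections::HashMap;
use std::fmt::Debug;
use std::hash::Hash;

// Four type parameters, four bounds: the where clause keeps the
// constraints out of the parameter list and groups them afterwards.
fn index_by<K, V, I, F>(items: I, key: F) -> HashMap<K, V>
where
    I: IntoIterator<Item = V>,
    F: Fn(&V) -> K,
    K: Eq + Hash,
    V: Debug,
{
    let mut map = HashMap::new();
    for item in items {
        map.insert(key(&item), item);
    }
    map
}

fn main() {
    let by_len = index_by(vec!["ab", "c"], |s| s.len());
    assert_eq!(by_len[&2], "ab");
    assert_eq!(by_len[&1], "c");
}
```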

Even for a single type parameter (as in the example above), I think it looks very clean and elegant. I think it is the most readable of the three.

I think all the other syntaxes are redundant and useless and should be removed (though that’s not possible due to stability guarantees).

I personally use it universally throughout my code, because I believe it is the right thing to do. I do not use the other syntaxes, since they are semantically equivalent, but either less readable (inline trait bounds at the definition of the type parameter) or have other limitations (impl Trait).

Of course, it requires a little more typing. However, most software development time is spent thinking, not typing. It is also far more important for code to be easy and clear to read than to be quick to write. I am personally very much against this whole idea of “shaving characters”. Introducing any language change purely because you are too lazy to type a few extra characters is a bad decision IMO.

This might seem a little hypocritical at first glance, since I love the ? operator syntax. However, the main benefit that I see is not the fact that it is a single character, but rather that it makes code clearer and more readable (being a postfix operator, as opposed to try!) with no downsides. Hence, it goes along well with my philosophy that I outlined previously and is not hypocritical.
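To show the readability difference being claimed here: `?` reads left to right as a postfix early return, whereas the old `try!` macro wrapped its expression and had to be read inside-out (the `parse_sum` function is an invented example):

```rust
use std::num::ParseIntError;

// With `?`, the happy path reads left to right; each `?` returns early
// with the error if parsing fails.
fn parse_sum(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(a.trim().parse::<i32>()? + b.trim().parse::<i32>()?)
    // The pre-? equivalent read inside-out:
    // Ok(try!(a.trim().parse::<i32>()) + try!(b.trim().parse::<i32>()))
}

fn main() {
    assert_eq!(parse_sum(" 2 ", "40"), Ok(42));
    assert!(parse_sum("x", "1").is_err());
}
```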


Number 1 criteria if you ask me. I couldn’t agree more.


I also prefer where syntax. In fact, I would not be entirely opposed to removing (edit: as in deprecating) the <T> part of that syntax entirely, along the lines of in-band lifetimes. This tends to work well in other polymorphism-heavy languages. And though of course there would be obstacles around Rust’s particular design, just as with in-band lifetimes, it would de-clutter function signatures without adding a new syntactic form with its own limitations.


I was wondering, is there any discussion in terms of establishing a policy of trimming features down the road? Similar to editions?

To elaborate: is there any discussion on what steps must be taken when the community wishes to deprecate a feature in stable? Akin to a Negative RFC?


It is possible to lint against certain syntactic forms, and after an edition gate, forbid them.

However, the compiler has to support previous editions, so no compiler complexity is actually saved by removing a syntactic form in a later edition; the compiler would still want to parse it and recommend the proper form anyway.

I think the way to go about soft deprecating syntactic forms would be rustfmt rewriting to the preferred form and/or rustfix-able clippy lints. Potentially after consensus has been proven in one of those two avenues, the lint could graduate to the compiler, but I doubt there’ll be any benefit to removing syntactic forms in future editions.
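As a concrete precedent for this kind of soft deprecation: `bare_trait_objects` is a real rustc lint that warns on the pre-`dyn` trait-object syntax, and a crate can opt into treating the old form as an error today (the `show` function below is an invented example):

```rust
// Denying this lint turns the deprecated bare-trait-object syntax
// (e.g. `Box<Display>`) into a hard error for this crate.
#![deny(bare_trait_objects)]

use std::fmt::Display;

fn show(x: Box<dyn Display>) -> String {
    // Writing `Box<Display>` here would now fail to compile.
    format!("{}", x)
}

fn main() {
    assert_eq!(show(Box::new(42)), "42");
}
```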

Hiding deprecated stdlib functions might happen, though.


This depends on what you see as important; I find that some trait bounds are essential to the semantics of a particular piece of code. The Rust syntax is a bit more verbose… But the Haskell version is quite neat:

sort :: Ord a => [a] -> [a]

From the signature alone (even without the function name), it is quite easy to tell what this function does, so it makes sense to me that Ord a is front and center. I think the where syntax should be reserved for when you have other bounds that are less important.

The limitations of the <T: Foo, ..> syntax are as far as I know not fundamental and can all be eliminated.

We can’t just go about removing the <T: Bound> syntax; it is used incredibly often, and removing it would break a tonne of code. Not to mention that there isn’t universal agreement that where is better in all cases.

Besides… we just introduced impl Trait in argument position; I find it unlikely that we’d undo this move.


There’s no special policy or RFC flavor required for this. The dyn Trait RFC is essentially an example of deprecating stable syntax in favor of a supposedly better new syntax.

The big debate over what exactly “editions” should and should not allow happened in parallel with the dyn Trait RFC, and the community arrived at more or less the conclusion @CAD97 accurately summarized:


Yeah, it would be great if all of this stuff is deprecated (with sufficient advance notice) and replaced with a single uniform syntax in a future Edition.


The limitations of the <T: Foo, ..> syntax are as far as I know not fundamental and can all be eliminated.

Yes, the technical limitations can be eliminated.

However, my problem is with readability. It only looks nice in simple cases: one or two type parameters with maybe one or two trait bounds each. Anything more complex really clutters the function signature and becomes unreadable. That is a practical, not technical, limitation, and it is one the where syntax solves. Also, since the impl Trait syntax was introduced precisely to cover these simple cases, is the <T: Foo, ...> syntax even necessary at all (assuming limitations like not being able to turbofish with impl Trait are lifted in the future)? It feels redundant. Need something simple and front and center? Use impl Trait. Need something more complex? Use where.
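The division of labor being proposed can be sketched in two invented functions, one for each end of the spectrum:

```rust
use std::fmt::Display;

// Simple case: impl Trait keeps the single bound with the argument.
fn greet(name: impl Display) -> String {
    format!("hello, {}", name)
}

// Complex case: the where clause keeps the signature itself uncluttered.
fn join_all<I>(items: I, sep: &str) -> String
where
    I: IntoIterator,
    I::Item: Display,
{
    items
        .into_iter()
        .map(|x| x.to_string())
        .collect::<Vec<_>>()
        .join(sep)
}

fn main() {
    assert_eq!(greet("world"), "hello, world");
    assert_eq!(join_all(vec![1, 2, 3], "-"), "1-2-3");
}
```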

Besides… we just introduced impl Trait in argument position; I find it unlikely that we’d undo this move.

Yeah. I wish it wasn’t stabilized, but it’s too late for that now. Might as well live with it and make the best of it.

your Haskell Ord example

Yes, I can see how in these simple cases, a simple syntax that puts the trait bound upfront and visible is a good thing. As much as I love to hate on impl Trait in argument position, it accomplishes that perfectly. The trait bound in these simple cases is exactly where it is supposed to be: front and center, with the argument that uses it. If anything, as I said previously, it is the old <T: Foo, ...> syntax that feels redundant now. If you need something more complex than impl Trait, use where.


It’s exactly those cases I’m talking about where you have 1-2 important type parameters and bounds; in that sort of situation, <T: Foo, ..> fits really well and highlights what is important and not. You can even use <T: Foo, ..> and where .. together to showcase the important bounds and “hide” the less important ones in the where-clause.
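A small invented example of that mixed style, with the semantically important bound inline and a secondary one tucked into the where clause:

```rust
use std::fmt::Debug;

// `Ord` is essential to what the function means, so it stays inline;
// `Debug` only supports the trace below, so it is "hidden" in `where`.
fn largest<T: Ord>(items: &[T]) -> Option<&T>
where
    T: Debug,
{
    let best = items.iter().max();
    if let Some(b) = best {
        eprintln!("largest = {:?}", b);
    }
    best
}

fn main() {
    assert_eq!(largest(&[1, 5, 3]), Some(&5));
    assert_eq!(largest::<i32>(&[]), None);
}
```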

As soon as you need to repeat the same type variable, impl Trait becomes useless. To fix this, you’d need some sort of typeof(argument) construct, which, to me, is less readable. It is pretty common to have two arguments using the same type variable, I’d say (think anything resembling a generic binary operator). So impl Trait + where doesn’t cut it for me.
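The generic-binary-operator case looks like this (the `max_of` function is an invented example); each use of `impl Trait` introduces a fresh anonymous type, so the two arguments could not be related to each other:

```rust
// A named parameter ties both arguments (and the return) to one type.
fn max_of<T: Ord>(a: T, b: T) -> T {
    if a >= b { a } else { b }
}

// Not expressible with impl Trait:
// fn max_of(a: impl Ord, b: impl Ord) -> ???
// The two `impl Ord` types are unrelated, so `a >= b` would not type-check.

fn main() {
    assert_eq!(max_of(3, 7), 7);
    assert_eq!(max_of("b", "a"), "b");
}
```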


Yeah, OK, thank you for explaining your view.

I think this discussion also helped me clarify in my head why exactly we have these syntaxes in the first place and how they are intended to be used.

I might adjust my coding style :wink:


Oddly enough, I put important bounds in where and reserve the <T: Trait> syntax almost exclusively for trivialities like T: ?Sized or bounds that come from the struct definition. Why? Because generic parameter lists mix two different kinds of information (making them harder to read), and can’t even hold all the important bounds.

Or I try to, at least. Sometimes I throw consistency to the wind and inline a where bound because it is “short” and there is only one type parameter, meaning it’s not unreadable.


What was the reason it was introduced again? I vaguely remember detractors essentially saying “why?” and proponents saying “to cover the bases of the possible ways people may want to do this”.