Older languages like C# and C++ have become bloated beyond recognition; the older syntax is only kept for backwards compatibility, and if we remove it from the equation, what remains is a whole other language. I like strict C99 in that regard. I left C++ for this sole reason.
I believe in this principle: don't try to fix what is broken, since it is already broken; create a fixed new thing altogether, because backwards compatibility and code maintenance in a collaborative environment will come back to bite you.
Optimizations of the current functions are fine, but adding functions that promote a different syntax altogether should not be allowed.
I like the approach taken in Rust: only create new functions if absolutely necessary. But I would extend it further: if a decision is ever taken to change the way Rust code is written (as happened with C++), a new language should be created instead of labelling it Rust.
Once the language is in a state where it is "DECIDED" that it is good enough for its intended purpose, one should let go and focus on improving CPU and memory usage. One can always use crates to improve ease of use. Otherwise, just create another language for people to flock towards.
Another big difference between Rust and C++ (I don't know C# well enough) is that Rust has editions, while C++ doesn't. This allows cleaning up the syntax without dropping backwards compatibility.
In C, you can create type aliases with typedef, and likewise in C++98. C++11 added a better way, the using keyword, which is usable in every situation where typedef was usable. However, since C++ has a single front-end, typedef never was (and, as long as C++ doesn't have editions, never will be) deprecated and removed from the language.
Rust effectively has three front-ends (Rust 2015, Rust 2018 and Rust 2021) that coexist. So if a similar story happened in Rust, the next edition would just remove typedef from the list of keywords, and an edition-migration script would be added to clippy.
I think it's a truism that nobody wants a "bloated" language, but it's hard to define what that means. In some sense, every feature that a person doesn't use is bloat to that person.
What heuristics would you suggest I use if I'm asked to decide whether something's worth doing or if it would just be bloat? What is it about if that made it worth having over just match? If I'm looking at let-else or let-chains, are those ok?
I'd also argue that what's problematic with C++ isn't that it's bloated, but that it lacks coherency. Rust so far is surprisingly compositional: after the initial learning hump of writing code for the borrow checker, the rest of the language incrementally builds on the core to provide a coherent design. C++ has the problem of N+1 ways to do things, all with different tradeoffs, usually around how they can (or can't) be combined with other language features.
But there's also the question of whether you're talking about the language or about the standard library (or both). They go together and evolve together, but each has a distinct complexity budget and cost weighting.
Nobody wants a "bloated" language or stdlib. What people want is a stdlib/language that serves their needs, and when their needs have been changing for 25+ years of (mostly) backwards compatible changes, you necessarily end up with some legacy chaff.
If Rust manages to avoid becoming old and "bloated" by 2052 without completely halting adaptation, that will be an impressive feat of engineering. It's one we'd all love to see, but it won't come without work.
Stability is good, but true stagnation is IMHO a cure worse than the poison of (minor) instability. Rust is a marvel of engineering for finding ways to both have its cake and eat it too, in a way nobody else has successfully pulled off in such a cohesive package, so I'm personally quite confident in Rust's design on stability without stagnation.
Adding stdlib functions that add compilation-time or runtime overhead (imagine if tokio were in the stdlib). That's why we have crates.
Improving ease of use by introducing abstractions over existing functions in the native stdlib, while keeping the old syntax in place for backwards compatibility, even though the idiomatic way of writing code has become something else altogether. Caring about backwards compatibility can make the solution suboptimal purely because of how the language was originally designed.
Have a new edition, drop the backwards compatibility as if it were a new language, and include error messages so that users know what to change if they want to upgrade to the next edition.
IMHO, if you have to sacrifice the optimal approach to keep backwards compatibility, that means you are moving into bloated territory. Windows is a good example of that.
As @matklad already mentioned, I am not so sure about const, but async should have been a separate paradigm requiring a separate flag in Cargo.toml, like async = "true". I don't like having to import and create futures every time, when async can't function without returning a future anyway. If a synchronous function is called within an async function, the output of the sync function is already tied to the async function. This forces unnecessary additional boilerplate that could be avoided if we mandated an async runtime in main (or in a lib marked async) before a single async function could run. All in all, the way async and futures have been implemented is a perfect example of a bloated feature.
A good example of a not-so-bloated system is ConTeXt, which strives to keep only the necessary things, even over the years, without increasing resource usage or messing with existing functions. New functions, when introduced, do something different altogether that was not possible earlier.
@CAD97 It is not a question of compatibility; it is about the structure of the language. Maintaining compatibility is important in the coming years, but this does not mean one should modify the language and the way it is written altogether. If a new type of system arrives, say quantum computing, we can have a quantum edition of Rust that doesn't care about the existing functions and provides a new paradigm for writing programs, while keeping the core principles, like memory safety without garbage collection, intact.
Different parts of Rust have different levels of these design characteristics.
Rust's type system (borrow checker, lifetimes, traits, type inference, etc.) is extremely well designed. Unsurprisingly, given that the people working on the language are experts in that particular field. So I'd agree with @CAD97 about Rust being compositional in that area.
The compilation model, however, is not as coherent and compositional as the type system. The module system is non-intuitive, and in addition to const and async, which @matklad already mentioned, the macros-by-example syntax is yet another distinct dialect.
C++ indeed lacks coherence and is full of special cases and ad-hoc rules. But it is also bloated with lots of redundant features, specifically those that were relevant for 1980s hardware choices that don't exist today. That's also a major reason for those incoherent rules and edge cases in the first place.
For example, Rust is defined to run on two's-complement hardware, the only kind that exists today. C++, however, has undefined behaviour on signed overflow instead. Why? So it could support one's complement.
Removing no-longer-necessary cruft is as crucial as being careful when adding new features, in order to avoid the fate of C++.
AFAICT, this is highly unlikely to happen, because changing ADL and the rules that interact with SFINAE means that template code has an ambiguous "home" for which rules it plays by. Until someone can come up with a coherent and sensible answer to what happens with:
module X; is in edition A
module Y; is in edition B and calls an X API from a function template
module Z; is in edition C and instantiates Y's function template
If what X API is called depends upon something as benign as integer promotion, 0-as-nullptr implicit casts, or SFINAE of "is there a function taking this parameter" using such things, what edition's rules should apply if an edition could change these rules?
I have severe doubts that C++ will ever be able to have a "core" stable set of APIs with different sugar to access them (effectively what Rust editions do) without shedding some of its key behaviors (like ADL and/or SFINAE).
In years of watching and participating in discussion and debate about the merits of various new features, I've come to an observation that's as simple as it is unpopular: keeping things simple (or un-bloated) requires rejecting features that would genuinely help some portion of users.
This observation is unpopular because the users who would benefit from these features are always a more vocal constituency than the remainder who benefit from the absence of features (a language that is easier to learn, to document, to hold the semantics of in one's head, etc.). And almost everyone has a pet feature request.
And yet, the conclusion cannot be that no new features are ever added.
This is a meaningless observation. Of course requested new features would help some users; why else would they be requested? It's not as if someone is malicious enough to want to bloat up the language with useless stuff on purpose.
A better observation would be about work organisation and using the right tool for the job.
E.g. the bloated features in today's C++ were once upon a time very useful. The real question should therefore be:
Why are they still kept in the language today? What's missing to facilitate their removal?
As a corollary, when adding new features to Rust we ought to ask whether they benefit, and fit into, the domains and use cases served by the language. Perhaps it is better to provide a crate rather than bake the feature into the language. Or perhaps it is better not to add the feature at all and point people to other languages better suited for a specific use case.
"General purpose language" is IMO a bit of a misnomer. If this idea is taken to its fullest extent, the language just becomes too complex to be useful to anyone. By striving to serve everyone and everything, we end up serving no one.
Or it requires guiding features towards a local maximum of general usefulness so that genuinely helpful, but niche, features can exist as libraries instead of just being rejected. I would be remiss not to drop a link to Growing a Language in this thread; it's an oldie but a goodie.