Goals and priorities for C++

Just a note... when I initially read the paper, because of the rather large number of authors, I thought it was some kind of committee consensus document. But in fact they're mostly(?) all Googlers, and per Reddit:

It's also worth giving some context: we already had pretty clear feedback from the committee that they didn't agree with these priorities and we actually didn't intend to publish this necessarily. However, the committee specifically requested that we actually provide our write up of where we are coming from.

That's important because parts of it would represent pretty drastic departures from current practice:

  • It views ABI stability as a non-goal, but all major C++ compilers these days have a stable ABI. (MSVC moved away from an ABI-break-per-release policy starting with VS2015. GCC has stayed stable for much longer: the base ABI hasn't been broken since 2001, and libstdc++'s ABI has only been broken once since 2006, and that was because C++11 mandated it.)

  • It considers it an "open question" whether "supporting 32-bit hardware" is valuable, as opposed to the language going 64-bit only. Meanwhile, GCC (like Rust) continues to support AVR, which is 8-bit, with a 16-bit address space.

  • This:

    Our goals are focused on migration from one version of C++ to the next rather than compatibility between them.

    would not be a complete departure, since the traditional header model does require library headers to compile using the same C++ version as the library's client, and the C++ standard has occasionally made breaking changes. But even then, it's always been possible to fix the breakage while staying compatible with earlier versions, so that clients compiled with different standard versions can all use the library. On the other hand, I'm not sure how C++ modules will affect things, and then there's the epochs proposal...

8 Likes

To add another:

  • They say that "Syntax should parse with bounded, small look-ahead.", suggesting LL(4) or some such, whereas parsing C++ is undecidable (disambiguating certain constructs can require arbitrary template instantiation).
8 Likes

I personally found some of the priorities they set objectionable as a "value function" for Rust, although some make sense for C++ (many of the other values they raise are fairly generic). For example, I don't think run-time performance is necessarily more important than maintainability or compile times when they conflict (but soundness always is). It also seems to me that they put too little emphasis on software maintenance, which I think Rust's type system particularly excels at, in large part thanks to exhaustiveness checking (see the sketch below).
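To make the exhaustiveness point concrete, here is a minimal sketch (the enum and its variants are made up for illustration):

```rust
enum Event {
    Connected,
    Disconnected,
    // Adding a new variant here, say `TimedOut`, turns every
    // non-exhaustive `match` in the codebase into a compile error,
    // pointing maintainers at exactly the places that need updating.
}

fn describe(event: Event) -> &'static str {
    match event {
        Event::Connected => "connected",
        Event::Disconnected => "disconnected",
    }
}

fn main() {
    println!("{}", describe(Event::Connected));
}
```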

Leave no room for a lower level language. Whether to gain control over performance problems or to gain access to hardware facilities, programmers should not need to leave the rules and structure of the language.

I find this unnuanced: it doesn't take into account that, taken too far, this can add a huge amount of complexity to the language in order to expose every knob a backend may provide. This is how you end up with a sea of GCC extensions. One has to balance the level of control provided against, e.g., the cost of specification and the number of users interested in a given aspect.

I do think Rust is, and should be, an excellent programming language for writing maintainable, large-scale, concurrent, high-performance, low-latency applications in particular. That is, Rust is great for writing a Firefox. Rust is also particularly suited to "symbolic" processing thanks to pattern matching, as OCaml and Haskell are. Whether Rust is the best language for application development seems like it would depend on the application. An application that doesn't have high-perf, low-latency requirements might be well served by Idris.

5 Likes

I found one of the linked talks rather interesting. In particular, the talk stresses the need to be clear about what sort of stability you promise, and what sort you don't. I think we could be clearer about what we don't promise, for the sake of the long-term maintainability of the language (on a 10-20 year time scale).

3 Likes

Yeah, my feelings on this particular question are exactly the opposite, and that's why I want to see "Rust goals" articulated and prioritized. As a goal, neither maintainability nor runtime performance is inherently better than the other, but either is better than an unclear goal.

8 Likes

The reason why I think it makes sense to prioritize perf over everything except safety is that, if Rust is not the fastest high-level programming language, important systems will be implemented in a faster, unsafe language.

You can solve maintainability and compile times by throwing more resources at the problem. But if you hit the perf ceiling of the language, you need to choose another language. Using a faster language just for hot spots doesn't really work, due to communication overhead (insightful post about this). What makes this especially insidious is that, if you might hit the perf ceiling in the future, you need to choose the faster language now, as the alternative is a rewrite.

41 Likes

Steve Klabnik gave a presentation on what they think Rust's values are, in the context of tradeoffs:
https://www.infoq.com/presentations/rust-tradeoffs/

It was inspired by a talk from Bryan Cantrill, where he talked about the values of NodeJS, Rust, C and others:
https://www.youtube.com/watch?v=2wZ1pCpJUIM

4 Likes

Regarding perf vs. maintainability or compile times, I think this is more nuanced than that, and I also do not think the post about JavaScript vs. C++ is applicable here, at the native-language level.

Given that Rust is a native systems language, we already have all the required escape hatches: unsafe and lower-level facilities (intrinsics, inline assembly, ...) that allow squeezing out the last bit of performance when absolutely necessary (see the sketch below). Neither should affect compilation time. It is true, though, that maintainability suffers from these.
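As a hedged illustration of those escape hatches (not anyone's proposal, just what stable Rust offers today): explicit SIMD via std::arch intrinsics behind runtime feature detection. None of this requires leaving Rust, but it is clearly less maintainable than the portable equivalent.

```rust
/// Sum a slice of f32s, using SSE2 intrinsics when available.
#[cfg(target_arch = "x86_64")]
fn sum(xs: &[f32]) -> f32 {
    if is_x86_feature_detected!("sse2") {
        // Safety: we just checked at runtime that SSE2 is available.
        unsafe { sum_sse2(xs) }
    } else {
        xs.iter().sum()
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
unsafe fn sum_sse2(xs: &[f32]) -> f32 {
    use std::arch::x86_64::*;
    let chunks = xs.chunks_exact(4);
    let tail = chunks.remainder();
    let mut acc = _mm_setzero_ps();
    for chunk in chunks {
        // Unaligned load of four lanes, then a vertical add into the accumulator.
        acc = _mm_add_ps(acc, _mm_loadu_ps(chunk.as_ptr()));
    }
    let mut lanes = [0.0f32; 4];
    _mm_storeu_ps(lanes.as_mut_ptr(), acc);
    lanes.iter().sum::<f32>() + tail.iter().sum::<f32>()
}
```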

However, I really don't see a hard conflict here, as it is more a question of defaults than anything. The best choice should adhere to the 80/20 rule: make the common case easy (and therefore maintainable) and the exceptional, more complex case possible.

Edit: And to clarify, both Rust and C++ already support this.

6 Likes

There is a tension around these things though, e.g.:

Some Rust users are very afraid of unsafe. There have been proposals for Rust/Cargo features that ban unsafe in dependencies. If Rust went this way, library authors could no longer use escape hatches without the consequence of their libraries being marked as suspicious and blocked by default.
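(For reference, the crate-level mechanism that already exists looks like the sketch below; the proposals in question would, roughly, let a consumer demand the same of its dependencies.)

```rust
// At the crate root: reject any `unsafe` block or `unsafe fn` in this
// crate at compile time.
#![forbid(unsafe_code)]

fn main() {
    // Uncommenting the next line is a hard error under `forbid(unsafe_code)`:
    // let zero: u32 = unsafe { std::mem::zeroed() };
    println!("no unsafe here");
}
```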

Rust has chosen not to have "fast math" optimizations that break IEEE 754 guarantees, and the intrinsics for that are cumbersome and insufficient. Here again are two distinct groups of users with different priorities: some absolutely rely on floating-point correctness, some just need a couple of decimal places approximated as quickly as possible.

12 Likes

I wonder if we could have FastMath<f32>-like types that opt in to fast-math optimizations. Presumably at the LLVM IR level it's not required to opt into this on an all-or-nothing basis?
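To sketch the idea (a hypothetical wrapper, not an existing type, built on the nightly-only `fadd_fast`/`fmul_fast` intrinsics, which apply LLVM's fast-math flags per operation rather than per compilation unit):

```rust
#![feature(core_intrinsics)]
use std::intrinsics::{fadd_fast, fmul_fast};
use std::ops::{Add, Mul};

/// Hypothetical newtype whose arithmetic opts in to fast-math.
#[derive(Clone, Copy, Debug)]
struct FastF32(f32);

impl Add for FastF32 {
    type Output = FastF32;
    fn add(self, rhs: FastF32) -> FastF32 {
        // Safety: per the intrinsic's contract, operands and result
        // must be finite (no NaN, no infinities).
        FastF32(unsafe { fadd_fast(self.0, rhs.0) })
    }
}

impl Mul for FastF32 {
    type Output = FastF32;
    fn mul(self, rhs: FastF32) -> FastF32 {
        FastF32(unsafe { fmul_fast(self.0, rhs.0) })
    }
}

fn dot(xs: &[FastF32], ys: &[FastF32]) -> FastF32 {
    xs.iter().zip(ys).fold(FastF32(0.0), |acc, (&x, &y)| acc + x * y)
}

fn main() {
    let xs = [FastF32(1.0), FastF32(2.0)];
    let ys = [FastF32(3.0), FastF32(4.0)];
    println!("{:?}", dot(&xs, &ys)); // FastF32(11.0)
}
```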

2 Likes

Sure, there's a very long issue discussing exactly this.

2 Likes

I strongly agree with @matklad here (what a great way to phrase that!), and want to suggest an alternative framing for @Centril. "Leave no room for a lower level language" does not have to mean "bake every backend feature into the language." It just has to mean "keep the common cases fast and predictable, and provide escape hatches to control the edge cases."

That is, if at any point Rust forces you to consider writing some <lower level language> and linking it in, I consider that a failure. Not only because of performance, but because of maintainability. If you need that performance, the complexity is going to exist either way, and the only question is how well your tools help you manage it. If your tool (Rust) punts it to a whole other language and build system, that's way worse than some extra rarely-used knobs in Rust.

Funnily enough, this is similar to an argument I've heard @Centril use about "small" languages like Go or C (please note the scare quotes, I'm talking about a specific use of the term). You can't always make a language simpler just by removing things. Sometimes it looks simpler but is really more complex in practice.

Frankly, this is the reason I ever got interested in Rust to begin with. If it hadn't looked like it was moving in this direction I would have just stuck with C++, all the way back in 2013 when I first encountered it.

Yes: in addition to the philosophical differences in this thread and the ones about unsafe, and the specific example of fast math, I've also seen pushback against some of the required escape hatches: various C FFI features, inline assembly itself, etc. But they're all still here precisely because others pushed for them using logic similar to "leave no room for a lower level language."

This is also an area where I think Rust still has a lot of room for improvement. Merely having the escape hatches is not enough: they ought to be just as nice to use as they would be in <lower level language>. For example, unsafe Rust still often feels intentionally ugly, as a form of "syntactic salt." But this diminishes the benefit of having those features in the language: it makes them harder to read and write, even in scenarios that require them.

If we truly want Rust to be used in large-scale, concurrent, high-performance, low-latency applications, then "leaving no room for a lower level language" is crucial, because those are the languages that people write those applications in! And if we truly want Rust to make that use case maintainable, then we need to make these edge cases maintainable as well. Half-baked solutions, or asking people to shell out to some other tool, do not do that.

20 Likes

I think I’ve got this idea from one of @Gankra’s tweets.

1 Like

Near the end of his talk @bcantrill points out that one place with a crying need for the safety that Rust brings is firmware. That problem domain is appropriate for embedded Rust, likely with a substantial amount of hardware-interfacing code in assembler. Therein lies a longer-term challenge for Rust.

The existing work (primarily by @Amanieu) on a Rust-compatible general macro-based syntax for assembly looks like it will address the front-end language-level syntax problem for firmware in Rust.
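For illustration only (this is not the firmware code being described, just the shape of the macro-based syntax as it was eventually stabilized in `core::arch::asm!`; x86_64-specific):

```rust
use std::arch::asm;

/// Add 1 to `x` via inline assembly (x86_64 only).
#[cfg(target_arch = "x86_64")]
fn add_one(mut x: u64) -> u64 {
    unsafe {
        // `inout(reg)` picks a register, loads `x` into it, and reads
        // the value back out after the instruction runs.
        asm!("add {0}, 1", inout(reg) x);
    }
    x
}

#[cfg(target_arch = "x86_64")]
fn main() {
    assert_eq!(add_one(41), 42);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```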

If that firmware runs on a feature-reduced variant of a common architecture, such as an ARM Cortex-M0 or RISC-V RV64, then LLVM with architecture restrictions might suffice for backend code generation. In other cases there will be a need for an approachable way to retarget a backend code generator (e.g., LLVM or Cranelift) to relatively simple new architectures. [Aside: That same approach could then be used to retarget that backend code generator to 4/8/16/32-bit legacy IoT CPUs that use two's-complement arithmetic.]

Only at the end of that long development chain will it be possible to use Rust to reduce the vulnerabilities inherent in the current firmware production process.

5 Likes

Performance is of course important for the language, but I think simply saying "after ensuring safety, we will always prioritize run-time performance over anything else" is too simple as a value function, and it isn't actually reflective of how the language team has made decisions. To give an example: when we decided to extend the lifetime of temporaries around .await, we did so at a potential performance cost. Another example where it hasn't been obvious whether to prioritize performance is the Read trait and uninitialized memory; some folks have been saying that uninitialized buffers don't need to be prioritized (I personally don't know). Another case, where we seem to have prioritized compile times over safety (which you listed as more important than perf), is the LLVM loop optimization soundness hole.
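(For context on the Read point, a minimal sketch of why uninitialized buffers are a performance question at all; the buffer size here is arbitrary:)

```rust
use std::io::{self, Read};

// `Read::read` takes `&mut [u8]`, so a freshly allocated buffer must be
// initialized before the reader ever sees it, even though a successful
// read overwrites it anyway. For large or frequent reads that zeroing is
// pure overhead; avoiding it today requires `unsafe` or new API surface.
fn read_some(mut reader: impl Read) -> io::Result<Vec<u8>> {
    let mut buf = vec![0u8; 64 * 1024]; // zero-initialization cost paid here
    let n = reader.read(&mut buf)?;
    buf.truncate(n);
    Ok(buf)
}

fn main() -> io::Result<()> {
    let data = read_some(&[1u8, 2, 3][..])?;
    println!("{:?}", data);
    Ok(())
}
```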

At a certain level of performance, a 0.3% difference in one benchmark or another stops being significant, and you get diminishing returns from prioritizing it. Rust is already sufficiently performant (even before we fully exploit noalias, which C and C++ cannot do to the same degree) that you cannot say C/C++ are the faster languages. At that point I think it makes sense to enhance Rust's value proposition in, e.g., ergonomics, maintainability, and compile times, as we have been doing. In short, Rust is already as fast as C/C++, and at that point Rust's other value propositions are compelling in relative terms, so I don't see the risk of important systems being implemented in a more unsafe language over time. (It's much more likely that systems get implemented in C/C++ because "that's the language everyone at our company knows".)

You're arguing this in a fairly abstract way; I don't see any particular evidence that Rust has a major problem here that we need to be dealing with. But continuing the abstract discussion: those resources are not free, they cost both developer time and money. Usually, you can also spend those resources micro-optimizing more parts of the program, as it takes quite a while for those opportunities to be fully exhausted (to take an example, rustc is nowhere close to that point after many years). When hitting a perf ceiling, you can also solve that by "throwing more resources at the problem", e.g., by getting better or more hardware. Yes, that costs money, but so does dealing with the other problems.

In the best of worlds however, we would like to improve safety, ergonomics, maintainability, and performance all at the same time, and that is often possible.

This is a good example of where we have prioritized robustness and reliability over raw performance, and fits well with the point @yigal100 is making around defaults. We didn't make performance the default choice here. That said, I believe we could make fast math convenient to use with another set of types r32 and r64, so it's not a big trade-off.

If you expand "the edge cases" to every edge case, then in practical terms it does mean the former. I think there have to be some limits.

It's true that it can be worse for the user, but if we're talking about a niche use case, then the costs for the ecosystem, the compiler team, and the language designers can be greater (in time, attention, compiler maintenance, specification, and other things we'd like to do but then can't). So it seems reasonable to me that, in that case, the user should bear the costs of those niche use cases. I find this especially true when stability is thrown into the mix.

I must say, I feel that many haven't taken the 2019 roadmap to heart. When I initially read Graydon's blog post, I was pretty skeptical, but over time I've come to appreciate many parts of it, especially as I've realized the sheer amount of technical debt that the compiler has. So I think we need to slow down language design, and focus on quality.

I do think that a baseline expectation of a modern high-level language includes polymorphism. I do appreciate the distinction between a "simple" language and simple code. My main issue in the previous discussions you mention has really been with special cases, and with not grounding design in a general framework (cf. ad-hoc polymorphism via special cases vs. via type classes).

I think this is mainly why we're having this discussion. I'm not sure what you mean by "still here", as inline assembly is not stable (nor do I think it should be), and there are still lots of GCC extensions that we don't have.

We've had this discussion on Discord before, and I still think that, for most users, who write unsafe code infrequently, reducing this syntactic salt across the board would be to their detriment. But I'm open to a discussion about specific cases where we can make improvements for everyone involved (and not just for the users who write unsafe more frequently).

In my mind, there is no "if" here. Rust is already used in these applications to great effect. Also, most such applications consist entirely of safe code.

3 Likes

I feel like this runs a risk of detonating into arguing, so I’ll try to just clarify what I’ve said, without responding to specifics.

Every design decision is a trade-off. When you are weighing N performance versus M ergonomics, the specific values of N and M can tilt the scale either way, regardless of the values that you have. But it is still important to discuss the overall values. Statements like “runtime performance is more important than compile times” are meaningful, because they are an instrument, however imperfect, for determining the tipping-point ratio of N to M.

14 Likes

None of these examples contradict the idea of "leave no room." It is not about any one individual feature, but about what you can achieve with the language as a whole. If await extends your temporary too far, or io::Read has too much overhead, or whatever else, the language still lets you bypass that feature and express what you want directly.

This is the original intent behind the phrase "zero-cost abstractions" in C++! If something in the language gets in your way, you don't have to use it. So "leaving no room" here really means that things with a performance cost don't force that cost on every program in the language.

This is also where the kinds of tradeoffs @matklad is referring to come in. If a feature with a performance cost is used so widely, or its alternatives are so poor, that it forces you into a lower level language, then Rust has "left room."

Keep in mind also that performance is hardly the only metric by which Rust might "leave room." Control is arguably a much larger and more direct sticking point: this is the reasoning behind complex, niche features like C-style variadic functions, untagged unions (sketched below), or inline assembly.
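For instance, an untagged union in Rust today, largely motivated by C FFI, where the layout of the C counterpart dictates the representation (the field names here are made up for illustration):

```rust
// Untagged union: no discriminant is stored, matching the C layout.
#[repr(C)]
union Value {
    as_int: i32,
    as_float: f32,
}

fn main() {
    let v = Value { as_int: 0x3f80_0000 };
    // Reading a union field is `unsafe`: the compiler cannot know
    // which field is currently the "live" one.
    let f = unsafe { v.as_float };
    assert_eq!(f, 1.0);
}
```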

Rust is still incredibly tiny in the grand scheme of things here. Things may look rosy today, but if Rust continues its growth in supporting these large scale/etc. applications, it will encounter more low-level edge cases, and if it doesn't address them people will just keep using C++ instead.

9 Likes

Yet GCC added those knobs on the way toward becoming the standard compiler for a plethora of OSes and embedded software running on a large variety of architectures – in other words, a massive success as a project, for use cases that Rust also targets. If Rust needed to add a similar level of complexity to compete in that arena, I'd be all for adding it.

Luckily, I think that's not needed. On one hand, GCC has a habit of exposing implementation details just because it can, even when that level of flexibility isn't actually useful. On the other hand, modern embedded development has consolidated somewhat when it comes to architectures, ABIs, and binary formats; there's less demand to support weird bespoke configurations.

Also, many GCC extensions are not architecture-specific and exist because GCC did not 'own' the C language standard. Rust doesn't have that problem.

8 Likes

I understand the distinction between defaults and escape hatches / the availability of a feature for the edge case. This discussion is not just about priorities for the latter, though, but also about priorities for the former (where the examples I enumerated are relevant; less so for the latter, I agree). If your language commonly skews defaults away from performance, then "used so widely" is bound to happen. Fortunately, I don't think that is true of Rust. In my view, we've struck a good balance where many idioms are convenient, sound, and performant at once.

("Don't have to use [the run-time]" is not the same as "I can make use of all parts of my hardware". The former is primarily about the lack of something whereas the latter is primarily about additional features.)

As I mention above, the tradeoffs @matklad is referring to apply equally to defaults, not just to access to uncommonly used parts of hardware.

The disagreement is indeed about the level of control here. I'm not saying that we shouldn't accept some niche features. I do, however, disagree with "no room for a lower level language" being used as a slogan to say that we must accept every such feature (I'm not saying that is your view). To me it's perfectly legitimate to do a cost/benefit analysis and decide that a "C parity" feature wasn't worth it, or perhaps that it needs to be exposed in a more general way. This actually feels in line with what Chandler et al. are saying:

At this stage, the primary prioritization should be based on the relative cost-benefit ratio of the feature. The cost is a function of the effort required to specify and implement the feature. The benefit is the number of impacted users and the magnitude of that impact. We don’t expect to have concrete numbers for these, but we expect prioritization decisions between features to be expressed using this framework.

Secondarily, priority should be given based on effort: both effort already invested and effort ready to commit to the feature. This should not overwhelm the primary metric, but given two equally impactful features we should focus on the one that is moving fastest.

It's probably never worth it to expose a C feature or a hardware feature exactly as it is, but that's not what anyone is suggesting here. A goal to "leave no room for a lower level language" pushes us to find a more Rust-appropriate way to give users the control they need.

Of course everything in running a language requires weighing the options; there are limits to our time, the number of contributors, their expertise, etc. But that doesn't mean it's a bad goal! Having strong goals like that is exactly how you prioritize those limited resources.

3 Likes