Priorities after 1.0

I think it is important that, whatever you do, you pick some mantra (e.g. “zero-cost abstraction”) and stay laser-focused on it. Do something very well rather than a bunch of random things sort of well (this is a vote against random language features that feel neat/smart; mostly stuff in the “longer term” section that feels a little like PL-envy).

My experience as a mid-term user (4-5 months?) is that what I like most about Rust is that once I understand what I need to do I can (usually) write it and it just works. I want more of that (meaning fewer cases where I can’t write what I want, and more “just working”).

My main pain points recently have been

  • Non-lexical borrows: fighting with the borrow checker is mostly a thing of the past, but not entirely, and this is such an important part of Rust that making it work smoothly feels critical to whether the experience reads as progress or not. This is a case of “I could write it, but it didn’t work”.

  • Specialization: When this bites you it is really annoying. Having a story here would make (for me) lots of programs more general and easier/possible to write.

  • Generic functions for boxed trait objects: I don’t even understand if this is an object-safety thing or just a “whoa, you would need too many methods” thing, but it is a point where the JITed languages I used to use clearly win out, and (as far as I can tell) I just can’t write the thing I wanted to write.

  • Performance/Codegen/Fewer ICEs: This isn’t meant to be a dig, but to the extent that adding new features cuts into the experience of using current features, I’d almost rather you not do it. I think we all know what happens if you measure “progress” by the number of features added.

  • Tooling: This is always good. Early pain points coming from Visual Studio were largely resolved, but I still type way more than I think I should. I’m also no good at using profilers yet, but I don’t know enough to understand if that is me or the offerings.
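The non-lexical-borrows pain point above can be made concrete. Here is a sketch of the classic case (the `append` helper is hypothetical) that the old lexical borrow checker rejected and that non-lexical lifetimes later accepted:

```rust
use std::collections::HashMap;

// Pre-NLL, the mutable borrow from `get_mut` lexically covered the
// whole `match`, so the `insert` in the `None` arm was rejected even
// though the borrow is no longer used on that path. With NLL the
// borrow ends where it is last used, and this compiles.
fn append(map: &mut HashMap<String, Vec<u32>>, key: &str, val: u32) {
    match map.get_mut(key) {
        Some(v) => v.push(val),
        None => {
            map.insert(key.to_string(), vec![val]);
        }
    }
}

fn main() {
    let mut m = HashMap::new();
    append(&mut m, "k", 1);
    append(&mut m, "k", 2);
    assert_eq!(m["k"], vec![1, 2]);
}
```

The pre-NLL workaround was typically to restructure the code (e.g. an early `return` or a separate `contains_key` check), which is exactly the “I could write it, but it didn’t work” experience described above.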

If Rust just literally stayed the same, modulo some bug fixes, I think I would be pretty happy. I wouldn’t rush to add crazy new features, but rather just take a breath, watch its use grow a bit, and see what people need (with caution against “faster horse” needs).

5 Likes

“Poor support for arrays” / “Type level integers”

Yes, this! I would really like to see Rust’s story for array math computation improve. The field of control systems is dominated by engineers using MATLAB to generate C code for their controllers. My gripes with that workflow are off-topic, but suffice it to say that I would love to see Rust get good enough at array math that it can be used to put control system design and implementation on a safer, more unified foundation.
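As an illustration of where type-level integers lead for array math: the feature Rust eventually stabilized for this is const generics. A minimal matrix sketch (the `Matrix` type and its methods are hypothetical names), where dimension mismatches become compile-time type errors:

```rust
// A matrix parametrized over its dimensions as type-level values.
struct Matrix<const R: usize, const C: usize> {
    data: [[f64; C]; R],
}

impl<const R: usize, const C: usize> Matrix<R, C> {
    fn zeros() -> Self {
        Matrix { data: [[0.0; C]; R] }
    }

    // Multiplying an R×C matrix by a C×K one yields an R×K result;
    // mismatched inner dimensions simply fail to type-check.
    fn mul<const K: usize>(&self, rhs: &Matrix<C, K>) -> Matrix<R, K> {
        let mut out = Matrix::<R, K>::zeros();
        for i in 0..R {
            for j in 0..K {
                for t in 0..C {
                    out.data[i][j] += self.data[i][t] * rhs.data[t][j];
                }
            }
        }
        out
    }
}

fn main() {
    let mut a = Matrix::<2, 3>::zeros();
    a.data[0][0] = 2.0;
    let mut b = Matrix::<3, 2>::zeros();
    b.data[0][1] = 5.0;
    let c = a.mul(&b); // inferred as Matrix<2, 2>
    assert_eq!(c.data[0][1], 10.0);
    // a.mul(&a) would not compile: the dimensions don't line up.
}
```

This is the kind of static checking that control-system code generated from MATLAB currently loses when it passes through untyped C arrays.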

1 Like

How about deprecating syntax extensions and procedural macros, focusing instead on compilation speed and tooling?

I think compilation speed and tooling will mainly be done by the community, while language features are mainly done by the core team, so they don’t really conflict.

1 Like

I hope compilation speed is also a goal of the core team!

1 Like

I haven’t filed bugs about the crashy-Drop. I’ve asked on IRC about it and learned that Drop and #[repr(C)] are not supported together, so I’ve refactored my code to be less fancy and not use Drop on C types.

How about deprecating syntax extensions and procedural macros, focusing instead on compilation speed and tooling?

Why does deprecating support for these help with compilation speed? They aren’t even officially supported yet (since they’re not considered anywhere close to stable) so I’m not sure how they could be deprecated to begin with. Some compiler plugin machinery is also required to support features of the stable distribution like the built-in lints and rustdoc.

(To be absolutely clear here, I did not interpret “procedural macros” to mean macro_rules! macros; I believe that syntax extensions and procedural macros are supposed to be called “compiler plugins” now, to distinguish them from macro_rules! macros and similar future features which are not as powerful as compiler plugins)
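To illustrate the distinction being drawn here: a `macro_rules!` macro is a purely declarative pattern-to-template rewrite, with no user code running inside the compiler, whereas a compiler plugin is regular Rust code loaded into and executed by the compiler itself. A minimal `macro_rules!` sketch:

```rust
// A declarative macro: each rule matches a syntactic pattern and
// substitutes it into a template. The compiler does the expansion;
// no user-written code executes at compile time.
macro_rules! square {
    ($x:expr) => {
        ($x) * ($x)
    };
}

fn main() {
    assert_eq!(square!(7), 49);
    // The parentheses in the template matter: without them,
    // `square!(2 + 1)` would expand to `2 + 1 * 2 + 1`.
    assert_eq!(square!(2 + 1), 9);
}
```

Deprecating compiler plugins would not touch anything in this category, which is part of why the two concerns are easy to conflate.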

1 Like

In my opinion, Rust’s type system would benefit from a couple of extensions, and I find it a bit worrying that people seem to put so much value on stability that some even advocate not developing new features at all.

Mostly this concerns higher-kinded types, variadic generics, compile-time function execution, and parametrizing types over values. All of this imho fits nicely into the zero-cost abstraction story, as it allows one to build more specific types. There are tons of use cases that currently require run-time checks, implicit invariants of a type’s internals, or awkward macro constructions to generate a finite subset of what you actually want. These could, in principle, be expressed more easily if the type system advanced.

btw, I’m not saying we should rush these features, but they are imho objective improvements over the status quo, and the language feels as if they ought to be possible (at least to me).
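For reference, the form of compile-time function execution Rust eventually adopted is `const fn`: ordinary functions that the compiler may also evaluate at compile time, rather than a separate template-style language. A minimal sketch:

```rust
// A `const fn` is written like any other function, but the compiler
// can evaluate calls to it at compile time when they appear in a
// constant context.
const fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

// Evaluated entirely by the compiler; no run-time cost.
const F10: u64 = fib(10);

fn main() {
    assert_eq!(F10, 55);
    // The exact same function also works at run time:
    assert_eq!(fib(11), 89);
}
```

The same-function-both-stages property is what distinguishes this design from the C++ `constexpr`-vs-regular split discussed later in the thread.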

4 Likes

While I agree with most of the above for the long term, I really oppose the “compile-time function execution” bit. Frankly, it’s just a redundant hack. Consider that Rust already has experimental compiler plugins and macros (which are supposed to get much-needed love after the release of Rust 1.0). A Rust compiler plugin is implemented in regular Rust code, and its run-time is that of the compiler. I don’t need to sprinkle my code with redundant constexpr as in C++, or just hope it runs at compile time based on some obscure restrictions as in D. Instead, I just need to run my code at the correct stage. Rust already has some convincing examples of this, e.g. the regex! macro.

Granted, it is currently complicated to write and the APIs are unstable in Rust, but here’s an example from Nemerle, which has similar facilities, to show possible opportunities for future Rust:

```
Nemerle.IO.printf ("compile-time\n");
<[ Nemerle.IO.printf ("run-time\n") ]>;
```

The most important bit above: both calls use the exact same ol' **"runtime"** `printf` function.

I'd like Rust to continue to explore this kind of meta-programming and learn from other languages with hygienic macro systems, instead of the C++ dichotomy where you have two languages in one (templates vs. functions) or two evaluation schemes (constexpr vs. regular).

The problem is that without it you can’t have constants of structs with private fields.

I see that there is some redundancy, and I wouldn’t advocate using CTFE in place of macros, but rather in those places where macros are just a workaround for its absence. And I think the last part about value types really is impossible without some form of CTFE. And then the question is whether you really want to allow effectful code to be executed by the compiler. (This is what motivates something like constexpr, I guess.)
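The earlier point about constants of structs with private fields can be sketched concretely, using the `const fn` mechanism Rust later gained (the `duration` module and its names are hypothetical):

```rust
mod duration {
    pub struct Duration {
        secs: u64, // private field
    }

    impl Duration {
        // Without compile-time evaluation, code outside this module
        // cannot create a `const` Duration at all: the field is
        // private, so only a function call can construct one, and
        // constants require compile-time-evaluable initializers.
        pub const fn from_secs(secs: u64) -> Duration {
            Duration { secs }
        }

        pub fn secs(&self) -> u64 {
            self.secs
        }
    }
}

// A constant of a struct with a private field, via a `const fn`:
const TIMEOUT: duration::Duration = duration::Duration::from_secs(30);

fn main() {
    assert_eq!(TIMEOUT.secs(), 30);
}
```

A compiler plugin cannot substitute here, because the privacy boundary is enforced by the type system, not by syntax expansion.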

msvcrt.dll is an MSVC binary; it’s one that’s compatible on all Windows since XP SP3 and doesn’t need to be redistributed with any software (since it’s present already).

Having choice is indeed good. However, the Python-on-Windows community has found that supporting only MSVC has cut them off from a huge world of FOSS scientific software, to the extent that if you want to use Python in a scientific setting, Windows is not a good OS to do it on. It would be a shame if Rust went down the same path. My fear is that sometime later, to ease the burden, someone will propose removing mingw-w64 support, and if accepted, you’ll be at that bad place.

I didn’t follow what you mean by “value types”, and I don’t see what is impossible without CTFE. I also don’t understand the part about the compiler executing effectful code - I don’t see how that’s interesting from the perspective of the end user.

Consider two options:

  1. The compiler runs a transform from “my code” to “executable code” without side effects, in the functional sense of compile(source) -> executable. This can trivially be used to implement a program that, when run, outputs a new modified program; call it “source1”. This can be iterated as many times as the user wants, finally compiling source_n into the desired executable.
  2. The compiler allows loading additional transformations which modify our source to “source1” and generates the final product directly. It automates the process from #1 for the user.

The difference between #1 and #2 is simply that #2 is more convenient for the end user. Also, the “effectlessness” can be achieved without forcing the user to iterate the process manually: for instance, in Java, processing annotations is in fact implemented by iteratively generating new source until no annotations are left and then producing the executable out of the modified source_n, so it does not allow modification of the compiler’s AST. To summarize, it’s only an implementation detail of the compilation process.

What about rust-lang/rfcs#323 ? As the RFC has been postponed, I think it has its place in this list. :smile:

2 Likes

msvcrt.dll is not the C/C++ runtime that applications are supposed to use. It is for the operating system only. Microsoft is forced to keep it compatible due to all the applications that are built using it in spite of Microsoft’s insistence that you use the proper versioned C/C++ runtimes (more info). We depend on very little from the CRT as is, so it would be fairly easy for us to lose the dependency entirely and end the question of msvcrt.

There is no intention to cut off support for the MinGW toolchain on Windows. Everyone coming to Windows from the Posix world is likely to use MinGW, so any proposal to remove support for that will likely be shot down pretty quickly. If I’m still around when such a proposal does pop up, I’ll be one of the first to reject the proposal. So since there’s no fear of MinGW support being dropped, let’s embrace the future where we support multiple toolchains on Windows.

msvcrt.dll is the only backwards-compatible C runtime, and as pointed out on the linked page, Microsoft works to ensure that it remains that way. The others have redistribution limitations that make them incompatible with programs not compiled with MSVC.

The problem here is that by supporting a second, CRT-incompatible toolchain, you fork the world of Rust libraries into distinct and incompatible sets. This is why Python has a large problem with Fortran on Windows. I would urge the Rust project to think about whether this is worth doing for the “reduce our dependence on mingw-w64” goal and what achieving this goal actually gains you.

rustc should not depend on a particular C runtime. I think libstd can be ported to the Win32 API.

I was talking about something like RFC #884, and in particular more advanced developments on top of it, like allowing a type to be parametrized over a value of a more general type (i.e. one bound by some trait). In any case, the compiler would have to be able to evaluate certain expressions at compile time for successful type checking.

I think HKT in Rust would really shine when building libraries and collection types, which is where it would probably be of most benefit.

2 Likes

I very much agree with the prioritization; it sounds very exciting.

Besides that, I hope the famous #6393 gets a high priority. I am convinced it will have a big influence on the user experience, and I am afraid that many people will get the wrong impression that Rust is “too complicated”.

Besides that, it is sometimes very annoying to work around this limitation. It can require extensive rewrites of code which make the code much harder to comprehend. More importantly, sometimes it is impossible to work around without changing the API. API changes just to work around a limitation in the borrow checker are not a good thing, and I’m getting a little stomachache from having to use transmute or raw pointers just to temporarily mediate the problem.