Setting our vision for the 2017 cycle

I have not ‘proposed’ to remove anything from Rust. I have read the book/manuals/&c. I understand the language’s design. All I have said is that 1) Rust documentation is not beginner-orientated enough, that is, it bogs the reader down in meta-programming too early on, and 2) the generics syntax is hard to grok.

I’d like to suggest that you are dismissing the feedback from an interested Rust patron in a thread that is all about, to quote the OP, “Rust should have a lower learning curve”.

It saddens me that you are being so elitist :confused:

I see no reason why it shouldn’t be possible to translate your C macros to Rust. What challenges did you encounter? Could you give an example of your C code?
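In the meantime, here is a trivial, made-up illustration that simple C macros usually translate directly (the square! macro below is hypothetical, since your actual C code hasn’t been posted):

```rust
// Hypothetical example: the C macro `#define SQUARE(x) ((x) * (x))`
// written as a Rust macro_rules! macro (a generic fn is often the better fit).
macro_rules! square {
    ($x:expr) => {
        ($x) * ($x)
    };
}

fn main() {
    assert_eq!(square!(3), 9);
    assert_eq!(square!(2 + 1), 9);
}
```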

Rust uses LLVM for vectorization and will continue to do so for the foreseeable future. Unless your vectorization is inhibited by an aliasing problem (which I doubt, if GCC optimizes it correctly), rustc can’t be any better than clang. Do you have code examples?
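To make the aliasing caveat concrete, here is a minimal sketch (my own example, not your code): because the arguments are &mut/& slices, rustc can tell LLVM they never overlap, which is essentially the only situation where rustc has an edge over equivalent C compiled with clang.

```rust
// Because `dst` and `src` are &mut/& slices, they are guaranteed not to
// overlap, so LLVM is free to vectorize this loop without runtime alias checks.
pub fn add_into(dst: &mut [f32], src: &[f32]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d += *s;
    }
}
```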

The generics syntax is not going to change post 1.0. Deal with it.

Also, in Rust, generics are not meta-programming; lifetime generics are fundamental to ordinary programming.

Note that cargo does not strictly need the internet, but it will automatically try to download missing packages.

You can use cargo with alternative ways of supplying dependencies, one of them being cargo vendor: https://github.com/alexcrichton/cargo-vendor

This is going to improve with additional tooling becoming available. Help from people with specific needs is always appreciated, because we cannot anticipate needs no one mentions.

May I suggest that there’s also the possibility that @pcwalton just misunderstood the angle you were coming from?

I don’t think that this was meant to be elitist. It’s just that there’s absolutely no possibility that generics syntax will change at this point. It doesn’t matter if this would improve the learning curve, that ship has sailed.

Apart from this, angle brackets are used for generics in most mainstream languages. I fail to see how the learning curve would be improved by deviating from the most popular syntax. And personally, I don’t find the parenthesis-free syntax of Haskell easier to read.


Something I’m interested in is a particular point that my co-speaker and I included in our RustConf talk, “the playrust classifier”, which describes the current state of Rust’s ML community. In essence, we stated that the ML/numeric ecosystem is fragmented, with many crates providing similar functionality but different APIs. For example, matrix implementations differ between crates. This limits natural interoperability between crates that are in the same domain. Interoperability is a great feature of a language like Python, which, for example, builds many of its numeric libraries around the numpy array. I’d be interested to hear if others have experienced similar crate fragmentation in domains other than ML/numerics.
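As a purely hypothetical sketch of what a shared “core” interface could look like (none of these names correspond to an existing crate), even a tiny common matrix trait would let algorithm crates accept data from any matrix implementation:

```rust
// Hypothetical "interop" crate: a minimal read-only matrix interface that
// competing matrix/ndarray crates could all implement. Names are illustrative.
pub trait MatrixView {
    type Elem;
    fn nrows(&self) -> usize;
    fn ncols(&self) -> usize;
    fn get(&self, row: usize, col: usize) -> Self::Elem;
}

// An algorithm crate can then be written once, against the trait, and work
// with any matrix type that implements it.
pub fn trace<M: MatrixView<Elem = f64>>(m: &M) -> f64 {
    (0..m.nrows().min(m.ncols())).map(|i| m.get(i, i)).sum()
}
```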

How do we suggest and/or standardize core data structures/objects to be used across crates that operate in the same domain? Is it a top-down process, in which the core team evangelizes particular projects? Or is it bottom-up, where it’s up to the developers of a fundamental project to gather others around it? Can Rust have formal “core” projects in each technical domain that we suggest developers rally around to build on top of? How do we as a community decide that a data structure or object should be standardized? Can we do this in a way that promotes interoperability without destroying competition?


@aturon we talked about this today, so I’d love to hear your thoughts on this!

I’ve started a thread on crates.io discoverability and community engagement: Crates.io Discoverability and Engagement: starting the conversation

My proposal touches briefly on the fragmentation issues brought up by @pegasos1; the idea is domain-specific “working groups” similar to Rust’s various teams.

Note that Python did not standardize on NumPy early. NumPy is the result of many failed older projects. Some of its predecessors were written by core teams staffed by the biggest names in Python, including Guido van Rossum. The second attempt was Numeric, but it did not serve all use cases, so Numarray was released. Many packages supported both for a long time. NumPy was an N+1 attempt to solve the problem. While not the first, it was successful. (Note: I was not involved then; this is based on interviews with the involved parties.)

TL;DR: NumPy is a great example of the advantages of a unified interface. But it is not obvious how a community gets to that state quickly. Python did not get there by the top-down method, despite trying. And the bottom-up method sometimes takes a long time.

Are you proposing to alter the documentation, or alter the generic syntax?

Docs could be improved in some way (never enough docs!), but changing the syntax is nigh impossible. Because of Rust’s backward-compatibility guarantees, changing the generics syntax means Rust goes 2.x, and that is something the devs have stated they will never, ever do. The syntax was agreed on a looong time ago, and as @troplin said, it’s based on the C-family generics syntax (e.g. C++, C#, Java…).

Note that there was discussion about not changing, but adding, a simplified syntax (I can’t find the exact comments, but I think there were some straw-man proposals of any Type for universals and some Type for existentials in function signatures, à la print_area(shape: any HasArea), in https://github.com/rust-lang/rfcs/pull/1522 or https://github.com/rust-lang/rfcs/pull/1603). Sorry for going off topic.
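To make the comparison concrete, the print_area straw-man above corresponds to roughly this in today’s angle-bracket syntax (the HasArea trait body is made up for illustration):

```rust
// The trait body here is invented purely for illustration.
trait HasArea {
    fn area(&self) -> f64;
}

// Today's spelling of roughly the same signature, with angle-bracket generics.
fn print_area<T: HasArea>(shape: T) {
    println!("area = {}", shape.area());
}
```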


It’s somewhat on topic, but the idea is that syntax can be extended in backwards-compatible ways. Changing Foo<T> into Foo[T] or Foo T is not backwards compatible.

Just to be clear: I do not think a Rust 2.x is out of the question. But even if we did a Rust 2.x, I would not consider sweeping breaking changes! Perhaps a tweak or two. Changing away from Foo<T> to something else seems out of the question. =)


I would like to see a focus on making sure that nightly-only features are either

  1. Deemed not useful after experimentation and removed.
  2. Being actively iterated on towards being “done”.
  3. Promoted to the stable channel.

Right now, it seems that there are features that are clearly useful and are not being actively iterated on, yet are stuck in a perma-nightly limbo without getting promoted to the stable channel. Examples that come to mind include #[bench], access to LLVM intrinsics and inline assembly. The underpinnings of serde also seem clearly useful yet stuck on nightly, though it’s less clear to me whether they are being actively iterated on. Useful features being in a perma-nightly limbo seems unhealthy.
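For readers who haven’t run into this, the built-in benchmark harness is a good example of that limbo; it still requires a feature gate and therefore a nightly toolchain (the benchmark body below is a made-up workload):

```rust
// Nightly-only: the built-in harness needs the `test` feature gate.
#![feature(test)]
extern crate test;

#[bench]
fn bench_vec_push(b: &mut test::Bencher) {
    b.iter(|| {
        let mut v = Vec::with_capacity(1000);
        for i in 0..1000 {
            v.push(i);
        }
        test::black_box(v)
    });
}
```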

Of the above, I care the most about access to LLVM intrinsics as a means of generating CPU instructions that are not expressible in safe Rust. Allowing the Rust ecosystem outside the standard library to use unsafe LLVM intrinsics would make it possible to develop safe abstractions with more room for experimentation than developing features for the standard library allows, and without making the standard library into a bottleneck.

Additionally, I think it would be good to get debugging for the MSVC flavor of Rust on Windows on par with Mac and Linux, in order to avoid situations where developers of cross-platform software (like Gecko) are shy about using Rust for areas that might have to be debugged on Windows specifically.

Also, as a matter of being competitive with C/C++ in client-side end-user apps that aren’t compiled for particular in-house/cloud server hardware, it would be good to be able to enable instruction set extensions on a per-function basis. (I’m aware that this is blocked on the underlying LLVM feature. Also, this feature wouldn’t be fully useful without access to LLVM intrinsics.)


As frustrating as it is to need some feature and find out it’s unstable, I don’t think there is a good solution to the problem. Prioritizing little-used existing features when there are more important new features waiting to get implemented (here, “feature” includes more than the compiler’s #[feature(..)]) [*] is a bad trade-off, and simply removing everything that won’t get stabilized pronto just screws over people who need it and can use nightly, without helping the rest of the community.

And then there’s the fact that some things just won’t get stabilized ever. These are only exposed for the benefit of core, std and other crates coupled to the compiler. It’s not my decision but I suspect access to LLVM intrinsics is among those, since it would effectively require an LLVM backend (bad for alternative compilers) and furthermore the exact set of intrinsics available depends on the LLVM version (which we’d like to be able to update whenever profitable).

[*] edit: I accidentally dropped some words

simply removing everything that won’t get stabilized pronto just screws over people who need it and can use nightly, without helping the rest of the community

I am not advocating the removal of useful stuff. Rather, I’m advocating the promotion of useful stuff to stable.

If a nightly-only feature is so useful that it cannot be removed, it should either be actively iterated on or be considered ready and promoted to stable. If promoting it to stable seems clearly inappropriate, please iterate on it to make it appropriate. Leaving it as-is on nightly means the feature becomes entrenched on nightly, at which point iterating on it will be too painful, so it will eventually need to be promoted as-is anyway (or else nightly becomes the de facto real Rust and stable becomes a hindrance).

It’s not my decision but I suspect access to LLVM intrinsics is among those, since it would effectively require an LLVM backend (bad for alternative compilers) and furthermore the exact set of intrinsics available depends on the LLVM version (which we’d like to be able to update whenever profitable).

Maybe “LLVM intrinsics” isn’t quite the right term. ISA-specific intrinsics as seen from C/C++ are portable across GCC, clang, MSVC and ISA vendors’ proprietary compilers, so exposing them in unsafe Rust at the same level of abstraction as they are exposed to C should be good enough to avoid painting Rust into a corner with respect to changing compiler infrastructure later or growing independent interoperable implementations. (Though in some cases it would be theoretically possible to make the Rust exposure a bit nicer; e.g. for NEON intrinsics where the instruction doesn’t produce a result in one register but modifies the two operand registers, it would be more elegant if the corresponding Rust intrinsic took two values by value and returned a pair instead of taking two pointers.)
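To illustrate the level of abstraction I mean, here is a sketch using x86 SSE intrinsics in the C-like shape they later took in std::arch (which was not available when this thread was written, so treat it as illustrative only):

```rust
// Sketch: an ISA-specific intrinsic exposed at roughly the C level of
// abstraction. SSE is always available on x86_64, so the unsafe block is
// only about the raw-pointer loads/stores and the intrinsic calls themselves.
#[cfg(target_arch = "x86_64")]
pub fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::x86_64::{_mm_add_ps, _mm_loadu_ps, _mm_storeu_ps};
    let mut out = [0.0f32; 4];
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr());
        let vb = _mm_loadu_ps(b.as_ptr());
        _mm_storeu_ps(out.as_mut_ptr(), _mm_add_ps(va, vb));
    }
    out
}
```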


The problem is resource allocation. There are only so many people actively contributing to rustc, and they have to decide what is the best use of their time. That’s what this thread is all about: which ‘features’ (or things not technically a feature) should those people work on for the next year.

That doesn’t stop any sufficiently motivated user from taking a look at the tracking issue for a feature, and contributing patches to take care of the bundle of bugs that feature has accumulated.


No one wants these features to be on nightly (except the ones that are guaranteed nightly forever because they are implementation details). There just isn’t enough labor available to resolve all of the bugs in all of the features any more quickly.


That sounds nice but I think it’s unrealistic and it could be harmful to have such a general policy. I think instead it’s more useful to focus on making progress towards resolving the fate of high-priority features like some of the ones you mentioned, without trying to make a general policy.

Examples that come to mind include #[bench], access to LLVM intrinsics and inline assembly.

IIRC, you can do #[bench]-style benchmarking on stable using https://github.com/SimonSapin/rustc-test. I agree that it would be useful to see progress on removing the built-in benchmarking support in favor of everybody using rustc-test or similar crates. Since this would be a removal, it seems like a relatively easy thing for somebody outside the Rust core team to lead the charge on.

Additionally, I think it would be good to get debugging for the MSVC flavor of Rust on Windows on par with Mac and Linux, in order to avoid situations where developers of cross-platform software (like Gecko) are shy about using Rust for areas that might have to be debugged on Windows specifically.

I agree!

This isn’t blocked on anything internal to LLVM. I’ve been doing this in ring since the beginning of time, manually. It just requires writing a bit of C and/or assembly code and doing the per-arch dispatching manually.
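For readers who want to see the shape of it, here is an illustrative sketch of such manual dispatch written in pure Rust, using the runtime feature-detection macro and per-function #[target_feature] attribute that later became available; in ring the specialized routines are C/assembly, and none of the names below are real ring code:

```rust
// Illustrative sketch of manual per-CPU dispatch: detect features at runtime
// and call a specialized implementation. None of this is ring's actual code.
#[cfg(target_arch = "x86_64")]
pub fn sum(data: &[u32]) -> u32 {
    if is_x86_feature_detected!("avx2") {
        // Safe to call only because we just verified AVX2 is available.
        unsafe { sum_avx2(data) }
    } else {
        sum_generic(data)
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(data: &[u32]) -> u32 {
    // AVX2 is enabled for this function only, so LLVM may use 256-bit
    // instructions here without affecting code generation elsewhere.
    data.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
}

pub fn sum_generic(data: &[u32]) -> u32 {
    data.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
}
```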

As long as debuginfo is enabled, debugging is actually quite solid in the current version of pc-windows-msvc Rust. Local variables, unmangled symbols, types: everything works fairly well. The two remaining issues are getting std distributed with debuginfo enabled (or, even better, building std locally), and taking advantage of Natvis so that you can see the contents of things like Vec and String.
