Setting our vision for the 2017 cycle

Very nice! It would be neat to try to work in struct/enum support somehow so that there were corresponding niceties on the Rust side as well (e.g. exposing the Python code object in such a way that methods that exist in Python are also exposed in Rust).

This is great. It'd be nice if you linked to the original comments for context too :slight_smile:

EDIT: I know, I know, "give 'em an inch, they want a mile"

1 Like

The https://github.com/dgrunwald/rust-cpython project is great! But it has no Windows support (https://github.com/dgrunwald/rust-cpython/issues/10) because rustc does not support "dllimport on Windows/MSVC".

Rust <-> Python could use some love!

3 Likes

Wondering if a runtime for embedded systems cropped up as a suggestion? It seems to be a great use case for Rust given the memory safety, etc. For sure I'd like to sketch for Arduino/Genuino in Rust.

1 Like

No problem. I'll update it in a few hours and try to keep up with new comments.

I should've been explicit about this. The order does roughly reflect the sense one gets from the survey results, but it's rough at best. I'm not sure whether a ranking of the final vision statements is a good idea, vs trying to scope them so that it's feasible to get all of their goals done in a year.

I cannot use Rust in production because its abstraction capabilities are not powerful enough to tackle the problems that arise in my domain.

My application domain is, ironically, high performance computing (numerical simulations). The main piece missing is type level values, but there are also others.

I need type level values to abstract algorithms and data structures over the spatial dimensions of whatever problem I am trying to solve, such that I only have to implement them once. Quad-/octrees, BVHs, and structured/unstructured meshes of simplices or hexahedra generalize to arbitrary spatial dimensions. Most numerical methods for PDEs (finite differences/volumes/elements, DG, Lattice Boltzmann) do so as well.
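A minimal sketch of what this could look like with type-level values, in a const-generics-style syntax (the `Point` type and its methods are purely illustrative, not from any existing library):

```rust
// A point abstracted over its spatial dimension D: one implementation
// serves 1D, 2D, 3D, ... without code duplication.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Point<const D: usize> {
    coords: [f64; D],
}

impl<const D: usize> Point<D> {
    // Works uniformly for every dimension D.
    fn squared_norm(&self) -> f64 {
        self.coords.iter().map(|c| c * c).sum()
    }
}

fn main() {
    // The same code handles 2D and 3D problems.
    let p2: Point<2> = Point { coords: [3.0, 4.0] };
    let p3: Point<3> = Point { coords: [1.0, 2.0, 2.0] };
    println!("{}", p2.squared_norm()); // 25
    println!("{}", p3.squared_norm()); // 9
}
```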

Type level values also would allow better fundamental libraries for, e.g., linear algebra.

But with linear algebra we come to the second missing piece: better support for EDSLs. EDSLs make libraries easy to use, but they are extremely hard to implement in Rust (harder than in C++ and not as powerful). Anything that improves Rust's EDSL story would help us implement the libraries that we need. Off the top of my head, procedural macros, (way) better CTFE, variadics, and maybe HKTs would help.
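To give a flavor of what such an EDSL involves, here is a minimal expression-template sketch using only operator overloading (the `V` and `Sum` types are made up for illustration): `a + b` builds a lazy expression node instead of allocating a temporary, and evaluation happens element-wise on demand.

```rust
use std::ops::Add;

// A toy vector type and a lazy "sum expression" node.
struct V { data: Vec<f64> }
struct Sum<'a> { lhs: &'a V, rhs: &'a V }

// `&a + &b` produces a Sum node, not a new Vec.
impl<'a> Add<&'a V> for &'a V {
    type Output = Sum<'a>;
    fn add(self, rhs: &'a V) -> Sum<'a> {
        Sum { lhs: self, rhs }
    }
}

impl<'a> Sum<'a> {
    // Evaluation walks both operands in a single pass.
    fn eval(&self) -> Vec<f64> {
        self.lhs.data.iter()
            .zip(&self.rhs.data)
            .map(|(a, b)| a + b)
            .collect()
    }
}

fn main() {
    let a = V { data: vec![1.0, 2.0] };
    let b = V { data: vec![3.0, 4.0] };
    println!("{:?}", (&a + &b).eval()); // [4.0, 6.0]
}
```

Scaling this beyond a toy (nested expressions, mixed operations, SIMD) is where the missing features mentioned above start to bite.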

There are also some issues with FFI integration. HPC code relies heavily on MPI, whose standard defines some macros that might have different types depending on the implementation used. This makes writing a Rust wrapper around MPI a bit of a hassle, because those macros have to be dealt with at the build-system level instead of within Rust itself.

Finally, most HPC environments don't have direct access to the internet. This means no rustup.rs, no cargo, no nothing. It would be great if there were ways to use these tools from remote systems, e.g., via ssh tunnels, or if there was an easy way to prepare the environment for a crate in some system, and ship it easily to another one (where everything gets recompiled).

So while I would love to use Rust in production for high performance work, the language currently doesn't allow us to abstract at the level that we need to do our job efficiently. IDE integration, better compile times, a lower learning curve... all of these are great, but even if all of them are fixed, we would still be unable to use Rust in production.

For this reason I would like to see in the vision for 2017 how Rust plans to expand into those system-level/low-level programming fields that it currently struggles in. My field cannot be the only one, so it would be nice to hear of others as well.

EDIT: I rephrased the post because it's 1:30 am here and the points I tried to make were even less clear than they are now. EDIT 2: I think it is actually sad that Rust's concurrency is so good for its age, and that we have so many massively parallel problems (as well as access to so many cores), but cannot use it :frowning:

15 Likes

I'm glad to see FFI included in these goals, and in particular the mention of seamless #include support. That gives me hope for detecting library version issues at compile time rather than via runtime crashes. Right now, libfoo-sys crates declare the various C symbols they use, with no way of knowing whether those declarations match those of the version of the library the crate compiles against.
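To illustrate the problem, here is a hand-written declaration in the style of a -sys crate, binding the real C `strlen`. The signature is simply asserted by the author; nothing checks it against the C header, and e.g. choosing the wrong integer type for `size_t` would compile fine and fail at runtime:

```rust
use std::os::raw::c_char;

extern "C" {
    // Hand-asserted signature for C's strlen. `usize` matches size_t on
    // common platforms, but the compiler cannot verify this against the
    // actual header of the libc we link.
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let s = b"hello\0";
    let len = unsafe { strlen(s.as_ptr() as *const c_char) };
    println!("{}", len); // 5
}
```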

3 Likes

Like @Eh2406 said - the rust-cpython bindings are actually really solid. I'm working on a blog post ATM about how to write extension modules for Python in Rust. Still, I agree that the story there can be improved, especially if the work on custom garbage collectors makes progress and the interop with Python's refcounting can be worked out. I'll need to take a look at how that can work with Cython too.

3 Likes

This reminds me: one of our perpetual problems is just making good stuff that exists more visible. We've added a lot of links within the web site, but it ends up being totally overwhelming.

How can we make it easier for people to discover things like Rust's Python story?

One thing that I discussed with @aturon a while ago would be building the fundamentals for a scientific Rust stack. It's not a lot of effort to get the initial core fundamentals out, but once it's there it would be a central place that other disparate projects could build upon. Think NumPy ndarrays for Python, the Julia NDArray, a subset of Eigen, etc.

The core primitives could be:

  • a core ndarray abstraction
  • BLAS wrappers on top of these
  • scientific functions on these arrays (e.g. the vector operations in MKL/Accelerate/NumPy)

Some inspirations from the C++ side (that are probably overly generic but are reasonable) are Eigen and mshadow.

With these, the scientific Rust ecosystem would have a central place to build upon.
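As a hedged sketch of the first bullet: a core ndarray primitive is essentially a flat buffer plus shape and strides. The `NdArray` type below is hypothetical and std-only, using row-major strides as in NumPy's default layout:

```rust
// Minimal n-dimensional array: flat storage plus shape/stride metadata.
struct NdArray {
    data: Vec<f64>,
    shape: Vec<usize>,
    strides: Vec<usize>,
}

impl NdArray {
    fn zeros(shape: &[usize]) -> Self {
        let len: usize = shape.iter().product();
        // Row-major (C-order) strides: last axis is contiguous.
        let mut strides = vec![1; shape.len()];
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        NdArray { data: vec![0.0; len], shape: shape.to_vec(), strides }
    }

    fn flat_index(&self, idx: &[usize]) -> usize {
        idx.iter().zip(&self.strides).map(|(i, s)| i * s).sum()
    }

    fn get(&self, idx: &[usize]) -> f64 {
        self.data[self.flat_index(idx)]
    }

    fn set(&mut self, idx: &[usize], v: f64) {
        let flat = self.flat_index(idx);
        self.data[flat] = v;
    }
}

fn main() {
    let mut a = NdArray::zeros(&[2, 3]);
    a.set(&[1, 2], 7.0);
    println!("{}", a.get(&[1, 2])); // 7
    println!("{:?}", a.strides);    // [3, 1]
}
```

BLAS wrappers and vectorized scientific functions would then operate on this one shared representation.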

8 Likes

This is the most important thing for me. There are things which I am currently incapable of binding properly in winapi due to a lack of FFI features. Things like unions, #[repr(align(N))], #[repr(packed(N))], bitfields, COM calling conventions, dllimport, static-nobundle, unsized types with thin pointers, and more. In fact, I don't think Rust has had a single FFI feature implemented since 1.0 (the most useful thing for me that did happen since 1.0 was making glob imports actually work). Several of those features I listed even have accepted RFCs, so I have hope, but seeing them take so long to be implemented, never mind stabilized, is really demoralizing.
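For concreteness, a quick illustration of two of the listed features, untagged unions and #[repr(align(N))], as specified in their accepted RFCs (the types here are made-up examples):

```rust
// Force 16-byte alignment, as many Windows API structs require.
#[repr(C, align(16))]
struct Aligned { x: u8 }

// An untagged union, matching a C union layout: both fields share storage.
#[repr(C)]
union IntOrFloat {
    i: u32,
    f: f32,
}

fn main() {
    println!("{}", std::mem::align_of::<Aligned>()); // 16
    // Reinterpret the bit pattern of 1.0f32 through the union.
    let u = IntOrFloat { i: 0x3F80_0000 };
    println!("{}", unsafe { u.f }); // 1
}
```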

This is also hugely important, not just for me but for everyone. I'm going to some fairly extreme lengths in winapi just to minimize the amount of code that has to be compiled so compile times can be fast: getting rid of almost all trait impls, making enums simple integer constants, and abusing cargo features. I only have about 10% of the code currently compiling in winapi 0.3 but it already takes 4 seconds to build. So, when I do eventually get everything updated, should I look forward to 40 second compile times?

7 Likes

I think it would be nice to have a few thorough guides that cover features that many older mainstream languages have but Rust doesn't.

Just a few things that come to mind:

  • Working with generics.
  • How to do OOP in Rust.
  • Working with Iterators.
  • How to do error handling without exceptions.
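For the last bullet, the core pattern such a guide would cover is `Result` plus the `?` operator; a minimal sketch (the `double` function is purely illustrative):

```rust
use std::num::ParseIntError;

// Errors are ordinary values: the signature says exactly what can fail.
fn double(s: &str) -> Result<i32, ParseIntError> {
    // `?` propagates the parse error to the caller instead of throwing.
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", double("21"));       // Ok(42)
    println!("{}", double("x").is_err()); // true
}
```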

One thing we've talked about quite a bit but never quite gets off the ground is a single "derive" for common data-types.

Straw-man:

#[derive(Data)]
// equivalent to
#[derive(Copy | Clone, Eq, PartialEq, Ord, PartialOrd, Hash, Debug)]
// what about Default and Zero? Should Data include them if possible?
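For comparison, here is what a `Data`-style derive has to be written out as by hand today, with `Default` included per the open question above (`Pixel` is a made-up example type):

```rust
// The full list a hypothetical #[derive(Data)] would bundle.
#[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash, Debug, Default)]
struct Pixel { x: u16, y: u16 }

fn main() {
    let p = Pixel::default();
    println!("{:?}", p);            // Pixel { x: 0, y: 0 }
    let q = p;                      // Copy: p remains usable
    println!("{}", p == q && p <= q); // true
}
```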
4 Likes

This is a beautiful example of the kind of curve-bending that the Rust community has gotten so good at. Instead of just assuming that some ostensible tradeoff is a hard tradeoff, identify ways to get many of the benefits of both solutions at the same time.

That shrinks down the set of cases that aren't reasonably handled by the default (in this case, codebases with both fast-hash requirements and significant code size limitations), which makes the justification for escape valves much easier for users to understand.
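Concretely, the escape valve already exists in the `HashMap` API: the hasher is a type parameter, so a codebase with fast-hash requirements can plug in its own. `TrivialHasher` below is a deliberately naive stand-in for a real fast hasher such as FNV:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A toy non-cryptographic hasher; NOT DoS-resistant like the default.
#[derive(Default)]
struct TrivialHasher(u64);

impl Hasher for TrivialHasher {
    fn finish(&self) -> u64 { self.0 }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 = self.0.wrapping_mul(31).wrapping_add(b as u64);
        }
    }
}

fn main() {
    // Third type parameter swaps the hashing strategy for the whole map.
    let mut m: HashMap<u32, &str, BuildHasherDefault<TrivialHasher>> =
        HashMap::default();
    m.insert(1, "one");
    println!("{:?}", m.get(&1)); // Some("one")
}
```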

One thing that could be improved with #![no_std] is getting some of the core crates (like alloc) stabilized.

I'm a little confused about that one in particular: are we likely to be changing alloc::heap::allocate in the future? It seems pretty basic to me.

1 Like

I don't know a lot about this, but I'm curious why you would not want to use cross-compilation, e.g. with something like a MUSL target that has basically zero dependencies.

It feels like we have a body of data now; it'd be good to know what sets of traits are commonly implemented together. One of the problems with a shorthand like that is that I fear everybody has a slightly different subset in mind.

Doesn't shoggoth.rs at least somewhat cover this for now? (I admit, native support would be nicer).

1 Like

:+1: