Priorities after 1.0


As a mobile application developer, the idea of using Rust to speed up slow Java code or share code between Android and iOS looks really interesting to me. Native code is important here because even with the new ART runtime, arithmetic-heavy code can still run 5-10 times slower in Java than natively. In a mobile environment, this can also mean higher battery usage or a poor user experience if too much CPU time is consumed.

Rust has a lot of advantages that could make it an ideal language for dropping down into native code. Unfortunately, the current level of support appears to consider these platforms as second-class citizens.

What would make Rust useful from my viewpoint:

  1. Compilers available for all of the targets supported by the NDK for Android, including armeabi, armeabi-v7a, and so on.
  2. Clear documentation and instructions on how to integrate this into an existing NDK or Xcode build.
  3. Solid FFI support. For example, Rust currently has task unwinding, but there doesn't appear to be any way to catch this unwinding at the FFI boundary without the overhead of creating a new task. This seems pretty dangerous and unreliable to me.

With Android and iOS as first-class targets, I would feel much more comfortable about choosing Rust over C or C++ for native components, whether used for performance reasons or for code reuse between platforms.


Rust is not going to have any way to catch unwinding.


You should write your code in a way that doesn't panic but uses Result and Option everywhere. Then you do not need to worry about unwinding, as it won't happen except when running out of memory.


AFAIK it is possible to catch unwinding in unsafe code which you need for FFI anyway.

Once you use any third-party library you cannot be sure that it won't panic unless the author promises it. I would be interested in statistics on how many libraries are guaranteed not to panic.


What about the standard library or 3rd-party crates? It sounds unrealistic to audit all possible places where a panic can be thrown. It would be better to have a facility to catch these at the FFI boundary, which would be required to avoid undefined behavior in any case.


Ah, this looks interesting! This would help to solve one part of the FFI puzzle.


Rust libraries should not panic except on a bug (what would be an assertion failure or segfault in C/C++).


We already turn panics into aborts on FFI boundaries.


As someone who probably wants to write pretty different code from the average Rust user (mostly expensive multiphysics code, i.e. complicated parallel floating-point number crunching), I have a somewhat different set of priorities:

  1. Pretty much any features that would be useful for abstracting over “array-like” (contiguous and piecewise contiguous) containers while still retaining information “under the surface” that the compiler can use to do optimization. E.g. generic integer parameters, HKTs. Allocation on the stack is important.

  2. SIMD improvements. So stabilizing the SIMD library itself, but also features like the ability to specify a minimum alignment on a type. (Frankly, I’m kind of amazed that no one has mentioned alignment in this thread so far, since it also relates to interoperability issues like C/C++ integration.)

  3. Specialization!

  4. Distributed-memory parallelism. In particular interaction with GPUs and other accelerators (not my specialty, and probably mostly something for external libraries to figure out, but I’d like any obstacles to be removed for people who do want to work on this).

So to the extent that I work on the compiler and language, these are what I’m going to be interested in working on independent of the broader community or core team pushes. I’m not necessarily going to push the core team to prioritize these things, but I would like to see the development process be relatively accommodating to efforts that are focused on a different set of priorities, as long as those aren’t in conflict with the needs of the wider community. I’m hoping that the subteam governance changes would help somewhat with that, especially by scaling the decision-making process out so that the core team doesn’t become a bottleneck for RFCs.


Distributed-memory parallelism. In particular interaction with GPUs and other accelerators (not my specialty, and probably mostly something for external libraries to figure out, but I’d like any obstacles to be removed for people who do want to work on this).

I’ve cited one obstacle before, it has to do with generating wrappers to C libraries.

The MPI standard defines some constants, these are C macros in MPI libraries, but in each library they have potentially completely different values (and types).

It would be helpful if the C Preprocessor could be run via rustc to substitute these values automatically on FFI.

Of course one can have a script that does this before calling rustc, but that complicates using crates.io to include such wrappers in your project: it is no longer enough to fetch the Rust code and compile it; you have to preprocess the code in an intermediate step first, and perhaps do so multiple times if the underlying MPI implementation on a cluster changes via a module system.


MIPS support is very important for people in the router business: 90% of routers are based on MIPS.


Being interested in using Rust for numerical and statistical computation, here are my wishes (in order of importance).

Extend the current range and pattern notation (..., ..) to allow left / right inclusive or exclusive bounds (..., ..<, >.., >.<)

I can see at least three use cases:

  • Inclusive/exclusive patterns are IMO required to use match with floats, example:
match x {
    20.0>.. => Ok("oversize"),
    10.0>.. => Ok("class 2"),
    0.0... => Ok("class 1"),
    _ => Err(some_error),
}
  • Inclusive/exclusive patterns would be useful in if expressions, to avoid writing two conditions (similarly to ternary operators), example:
if v in 0.0...10.0
  • Inclusive/exclusive ranges would be useful for instance for discretization of intervals. Example, to compute and plot function 1/x with x going from 0 (excluded) to 10 (included):
let x = (0.0>..10.0).step_by(0.001).collect::<Vec<_>>();
let y = x.iter().map(|x| 1.0 / x).collect::<Vec<_>>(); 

The extended notation could coexist with the current one, with .. and ..< being synonyms, or otherwise .. could be deprecated in future. This has been discussed in the following thread:

Optional args, default args, variable-arity args

This has been discussed in the following thread:

IMO one of the best languages in this regard is R; I would take it as a reference:

Tuple iteration

This has been discussed in the following thread:


let z = (x.iter(), y.iter()).map(|xi, yi| xi + yi).collect::<Vec<_>>();


let z = (x, y).iter().map(|xi, yi| xi + yi).collect::<Vec<_>>();

Iterators for multidimensional arrays

I suggested chaining two or more .iter() methods in the following thread:

However, I now think there are better options. One would be to have for instance methods .iter2(), .iter3(), etc. Method .iter2() would accept an optional parameter to specify which dimension to iterate first, iter3() would accept two optional parameters etc. Of course .enumerate() method would need to output more than one index with these methods.

Another option, IMO even better, would be to have method .multi_iter() accepting N optional parameters. With no parameters, .multi_iter() would iterate through all dimensions in order, while for instance .multi_iter(2, 1) would iterate first through the second dimension and then through the first one.

The reason I wrote 1 instead of 0 for the first dimension is that a negative sign could then be used to iterate in reverse order. This capability could be added to the one-dimensional method too, so that either .iter() or .iter(1) would iterate a vector in forward order, while .iter(-1) would iterate in reverse order.

Example, to multiply each element of matrix x with each element of matrix y transposed:

let z = (x.multi_iter(), y.multi_iter(2, 1)).map(|xi, yi| xi * yi).collect::<Vec<Vec<_>>>();

I am hoping to port some Haskell to Rust for embedded applications. But I am stuck until Higher Kinded Types arrive. So I would strongly vote for Higher Kinded Types!


Being able to refer to associated types inside type definitions would be really nice, so it would be cool if the restriction against bounds on type definitions could be lifted.


Added bonus for faster compile times: it would make it much easier to contribute bug fixes :wink: I know I’ve been put off several times from just fixing a bug myself simply because I didn’t have time to sit around and wait for all of rustc and libstd to build, just to run one run-pass test…


Note that we already have HKT in a sense, in that associated types are actually expressive enough to encode higher-kinded type parameters. (I did this yesterday, in fact, when writing Servo code for production.) So “higher-kinded types” needs to be a more specific proposal.


How would you implement trait Monad then?


That requires type-parametrized higher-kinded types, which are less certain than lifetime-parametrized higher-kinded types.


The associated types RFC walks through one possible encoding of HKTs into associated types:
