Difficulty with RFC 439 and multidimensional arrays

Disclaimer: Firstly, this is long, because I had trouble properly motivating this question without going into detail. Secondly, multidimensional arrays for numerics are not the only application with the sort of issue described here, but they are what I’m immediately concerned with and have examples for.

Specialized languages and frameworks for numerical work usually have multidimensional arrays, and very often have succinct syntax for multidimensional slices/sections in those arrays. Just to pick a few off the top of my head, there are NumPy, MATLAB, and Fortran 90. In Rust, this might look like one of the following options:

// "arr" is a 4x4 array of elements of type T.
// "foo" receives an object representing a view of the non-contiguous
// 2x2 block in the middle of this array (without copying).
// C-like syntax (Rust ranges are exclusive at the end, so 1..3 selects
// indices 1 and 2).
foo(arr[1..3][1..3]);
// Less conventional and intuitive, but also less reliant on inlining.
foo(arr[(1, 1)..(3, 3)]);
// Yet another possibility (but all of these have the problem below).
foo(arr[[1, 1]..[3, 3]]);

Before I get further into this, I want to get two things out of the way:

  1. Good multi-dimensional arrays need to be implemented by a library right now, since nothing provided by std is a good substitute. One reason is that for many (most?) applications it is desirable for array size to be set when an array is constructed (not dynamically varying like Vec, but not fixed at compile time like built-in arrays). Another reason is that multidimensional arrays are not really the same thing as arrays-of-arrays (due to both layout conventions and the operations that it makes sense to implement).

  2. Most languages with multidimensional array slices also have “strides”, e.g. taking every third element from an array. I’m just going to ignore this for now, because I believe that it’s less commonly used, less performant, and less important to have syntactic sugar for.

Now, getting to the point: if it were possible to implement one of the slicing examples above, what type would the result of the “slicing” have? It would be multiple items of type T (so not &T), but not contiguous (so not an actual slice, &[T]). It should really be some new struct, e.g. Slice2D<T>. (Or if you can leverage the type system to encourage optimization or SIMD vectorization for piecewise contiguous slices, it could be something like Slice1Contig2D<T>.) This first iteration works OK for C++, but Rust has lifetimes, which are an additional wrinkle.
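For concreteness, here is a sketch of what such a view type could look like. All the names (Array2, Slice2D, block) are illustrative, not an existing API, and the syntax is later stable Rust:

```rust
// A dynamically sized 2D array in one flat, row-major buffer.
struct Array2<T> {
    data: Vec<T>,
    nrows: usize,
    ncols: usize,
}

// A non-contiguous rectangular view: a borrow of the underlying buffer
// plus a row stride and the view's extents.
struct Slice2D<'a, T> {
    data: &'a [T],
    row_stride: usize,
    nrows: usize,
    ncols: usize,
}

impl<T> Array2<T> {
    // Borrow the block covering rows r0..r1 and columns c0..c1
    // (exclusive ends), without copying.
    fn block(&self, r0: usize, r1: usize, c0: usize, c1: usize) -> Slice2D<'_, T> {
        assert!(r0 <= r1 && c0 <= c1 && r1 <= self.nrows && c1 <= self.ncols);
        Slice2D {
            data: &self.data[r0 * self.ncols + c0..],
            row_stride: self.ncols,
            nrows: r1 - r0,
            ncols: c1 - c0,
        }
    }
}

impl<'a, T> Slice2D<'a, T> {
    fn get(&self, i: usize, j: usize) -> &'a T {
        assert!(i < self.nrows && j < self.ncols);
        &self.data[i * self.row_stride + j]
    }
}

fn main() {
    // 4x4 array holding 0..16; take the inner 2x2 block.
    let arr = Array2 { data: (0..16).collect(), nrows: 4, ncols: 4 };
    let b = arr.block(1, 3, 1, 3);
    assert_eq!(*b.get(0, 0), 5);
    assert_eq!(*b.get(1, 1), 10);
}
```

The point is that `block` wants to return a freshly constructed Slice2D whose lifetime is tied to the array borrow, which is exactly what the Index trait below cannot express.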

Let’s look at a very simple (toy) case where a struct, rather than a reference or (built-in) slice, is returned from an indexing method:

struct MyPtr<'a> {
    data: &'a uint
}

struct MySingleInt {
    item: uint
}

impl MySingleInt {
    fn index<'a>(&'a self, idx: uint) -> MyPtr<'a> {
        assert!(idx == 0,
                "Out of bounds index; MySingleInt has only a 0th element.");
        MyPtr{data: &self.item}
    }
}

fn main() {
    let x = MySingleInt{item:5};
    let y = x.index(0);
    println!("{}", y.data);
}

Compare to the trait in RFC 439:

pub trait Index<Idx> for Sized? {
    type Sized? Result;
    fn index<'a>(&'a self, index: Idx) -> &'a Result;
}

One problem here is that the Index trait requires a reference to be returned. This is troublesome because in this example we want to return a newly constructed wrapper object, not a reference to any pre-existing variable.

However, the bigger problem is that of specifying the lifetime. In the trait, the Result type must be specified separately from the index function. However, we want Result to be parameterized by the lifetime 'a.

This leads me to three related questions:

  1. Is this soluble without changing the definition of Index, or am I correct in thinking that this is like the long-desired “Iterable” trait, in that a solution requires HKT?

  2. Is there an acceptable solution to this if we change the definition of Index, and given HRTB? E.g.:

    pub trait Index<'a, Idx> for Sized? {
        type Sized? Result;
        fn index(&'a self, index: Idx) -> <Self as Index<'a, Idx>>::Result;
    }
    
  3. If neither of the above is possible, is there something else that would work that would be backwards-compatible post-1.0?


Your reasoning is pretty much correct: the usefulness of the Index trait is essentially limited to (contiguous) slices and strings.

(1) Sadly, this issue won’t be solved by just changing the definition of the Index trait. A proper solution requires HKT and a new IndexHKT trait. Let me elaborate:

The current index desugaring goes from arr[idx] to *arr.index(idx) (note the dereference operator). This lets you use &arr[idx] to trigger the Index trait and &mut arr[idx] to trigger the IndexMut trait. For this reason, you can’t change the return type to simply Result (you’d end up with the dereference of Result); you need either &T or &mut T for it to work properly.
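To see the current desugaring in action, here is a minimal implementation of the stable Index trait (modern syntax; `Wrap` is just an illustrative name):

```rust
use std::ops::Index;

// A trivial newtype around Vec to implement Index on.
struct Wrap(Vec<u32>);

impl Index<usize> for Wrap {
    type Output = u32;
    // Must return a reference: arr[i] desugars to *arr.index(i).
    fn index(&self, i: usize) -> &u32 {
        &self.0[i]
    }
}

fn main() {
    let w = Wrap(vec![7, 8]);
    // &w[1] is &*w.index(1): the borrow goes through the returned reference.
    let r: &u32 = &w[1];
    assert_eq!(*r, 8);
}
```

Because the `*` is baked into the desugaring, there is no way for `index` to hand back a by-value view struct like Slice2D.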

In the future, I expect we’ll get an additional IndexHKT trait:

trait IndexHKT<I> for Sized? {
    type Output;  // `Output` is a "type constructor" with kind "lifetime -> type"

    fn index_hkt<'a>(&'a self, index: I) -> Self::Output<'a>;
}
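As an aside, the shape of this hypothetical IndexHKT trait can actually be written down with generic associated types, a feature that landed in Rust much later than this discussion. A sketch, with illustrative names throughout:

```rust
// A row-major 2D grid; indexing a row yields a view struct, not a &T.
struct Grid {
    data: Vec<i32>,
    ncols: usize,
}

// The view type is parameterized by the borrow's lifetime.
struct RowView<'a> {
    row: &'a [i32],
}

trait IndexHkt<I> {
    // `Output` is a "type constructor" with kind "lifetime -> type".
    type Output<'a>
    where
        Self: 'a;
    fn index_hkt(&self, index: I) -> Self::Output<'_>;
}

impl IndexHkt<usize> for Grid {
    type Output<'a> = RowView<'a>
    where
        Self: 'a;
    fn index_hkt(&self, row: usize) -> RowView<'_> {
        let start = row * self.ncols;
        RowView { row: &self.data[start..start + self.ncols] }
    }
}

fn main() {
    let g = Grid { data: vec![1, 2, 3, 4, 5, 6], ncols: 3 };
    let v = g.index_hkt(1);
    assert_eq!(v.row, &[4, 5, 6]);
}
```

The key line is the `Output<'a>` declaration: the implementor picks a family of types indexed by a lifetime, and the method glues `'a` to the self borrow.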

And the expression arr[idx] would desugar to arr.index_hkt(idx) (note: no dereference). This has its own complications: we probably don’t want to allow implementing Index/IndexMut on types that already implement IndexHKT (and vice versa), to prevent ambiguity in the desugaring; and there is also the question of under what conditions the mutable IndexMutHKT desugaring should trigger, etc.

(2) Your proposed Index trait with the 'a lifetime parameter will do what you want: you can “glue” the lifetime of self to the output. But from experience, having a lifetime as a generic parameter of a trait is troublesome, impls work fine, but using the trait as a bound ends up propagating the 'a parameter everywhere and it gets hard to scale. The IndexHKT trait shouldn’t have this problem.
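To illustrate the “glue the lifetime to the output” variant and why it scales poorly, here is a compiling sketch (modern syntax; IndexLt, Single, and Ptr are illustrative names):

```rust
// An Index-like trait with the lifetime as a trait parameter.
trait IndexLt<'a, I> {
    type Output;
    fn index(&'a self, index: I) -> Self::Output;
}

struct Single {
    item: u32,
}

struct Ptr<'a> {
    data: &'a u32,
}

impl<'a> IndexLt<'a, usize> for Single {
    // The impl can tie the output type to 'a, which is what we wanted.
    type Output = Ptr<'a>;
    fn index(&'a self, idx: usize) -> Ptr<'a> {
        assert_eq!(idx, 0, "Single has only a 0th element");
        Ptr { data: &self.item }
    }
}

// But generic code must now thread 'a through every signature that
// wants to use the bound -- this is the propagation problem.
fn first_of<'a, T: IndexLt<'a, usize>>(t: &'a T) -> T::Output {
    t.index(0)
}

fn main() {
    let s = Single { item: 9 };
    assert_eq!(*first_of(&s).data, 9);
}
```

The impl side is painless; it is the `first_of`-style bounds, repeated in every caller, that get unwieldy.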

(Also HRTB only applies to traits, it doesn’t give you type constructors)

(3) Adding the IndexHKT trait with its own desugaring should be a backwards-compatible change, as it wouldn’t break existing uses of the Index[Mut] traits.

TL;DR No sugar for mathematical libraries until we get HKT

PS Some people have suggested using the Fn* traits to work around this issue: instead of arr[start..end] you’d write arr(start..end) (which desugars to arr.call(Range(start, end))). No idea how well that works, though; I haven’t tried.

HTH

I think you were answering slightly different questions than I was trying to ask, but you still told me what I wanted to know, so thanks!

When I’m experimenting with library designs post-1.0, I’ll just write out methods the long way and hold out for HKT for a while. But I think it would be very helpful (obviously for ergonomics, but even just from a “marketing” perspective) to have sugar eventually.

The Fn* workaround is similar to a common workaround in C++ (which I’ve implemented myself before), and that seems like it could be made to work. But I’d rather avoid that. I’m concerned that Fn* implementations for this purpose might end up clashing with blanket impls in some cases, and using indexing/slicing syntax makes more sense in the long run than function call syntax anyway.

The Go community beat their head against the wall on multidimensional arrays back in 2013. It didn’t go well. The big problems all relate to slicing. Slicing for multidimensional arrays can be either simple but limited by the memory representation, or very general but complicated and slow. The Go crowd was unable to achieve a consensus.

It would be nice if Rust had multidimensional array capabilities at least as good as those of FORTRAN. Not getting hung up on the slicing issue seems to be the key to making forward progress.

Suggested goal: make it straightforward to translate Numerical Recipes into Rust, with efficient code comparable to C speed.

(There are good, well known optimizations for built-in multidimensional arrays which are not available for arrays implemented with functions, macros, or arrays of arrays. Knowing all the rows are the same length allows hoisting subscript checks out of inner loops. Optimizations for efficiently subscripting along the “wrong” axis of a 2D array are well understood, and date back to Backus’s FORTRAN compiler of 1954.)
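The hoisting point can be made concrete: with a flat buffer and a known row length, a single assertion outside the loops can stand in for per-element checks. A minimal sketch (row_sums is an illustrative function, not an existing API):

```rust
// Sum each row of a row-major nrows x ncols array stored flat.
fn row_sums(data: &[f64], nrows: usize, ncols: usize) -> Vec<f64> {
    // One hoisted check: every index computed below is now in bounds.
    assert_eq!(data.len(), nrows * ncols);
    let mut sums = vec![0.0; nrows];
    for i in 0..nrows {
        // One slice per row; iterating the slice needs no further
        // per-element bounds checks.
        let row = &data[i * ncols..(i + 1) * ncols];
        for &x in row {
            sums[i] += x;
        }
    }
    sums
}

fn main() {
    let sums = row_sums(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2, 3);
    assert_eq!(sums, vec![6.0, 15.0]);
}
```

An array-of-arrays representation cannot offer this guarantee, because each inner array could in principle have a different length.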


cc @blake who wrote https://github.com/bluss/rust-ndarray.


What about a trait which takes self by value, is implemented for references, and the compiler searches impls after applying an autoref? IIRC @glaebhoerl was proposing something similar for Deref/DerefMut.

Another thought would be to allow writing T: for<'a> Trait<'a> as T: Trait and impl<'a> Trait<'a> for T as impl Trait for T; among other things, we could use this to turn Iterator into StreamingIterator<'a> without breaking any existing code.

@eddyb Hmm, not sure what you’re referring to, could you jog my memory? (I don’t think searching impls is something that would be in my nature to propose…)

ndarray is cool, it implements numpy style slicing with strides etc, but I’m not sure it’s very useful for numerics unfortunately. Maybe it’s a good prototype! It would be nice to try to integrate BLAS to see what changes are needed to ndarray’s representation.

Do you think it’s viable to provide a high performance multidimensional array type instead of just a matrix type?

In general I don’t really mind having to use methods instead of syntax for slicing. An unfortunate fact, however, is that indexing uses tuples.

Indexing looks like this:

let mut a = arr2(&[[1., 2.],
                   [3., 4.]]);
a[(0, 0)] = 7.;

And slicing like this:

use ndarray::Si;
a.slice(&[Si(0, None, -1), Si(0, None, -1)]);   // an array view with both axes reversed

A better syntax for slicing could be something like:

a.slice(((..).stride(-1), 1..));  // magic needed to associate a tuple of the right arity with a matrix shape of the same ndim.

I should clarify that I do want to define multidimensional arrays that are competitive for numerical operations with built-ins for other languages (e.g. Fortran 2008). That was most of why I brought this up in the first place.

However, I think that in order to get to that goal, this really cannot be done properly without knowing at compile time the order of the array (the number of index dimensions). And that in turn requires either a lot of macros to generate type and function definitions (up to some arbitrary order, say 32), or integer-dependent types (note the thread here).

I’m working on the “lots of macros” strategy, but ultimately I think that we won’t be able to do nearly as well as we could from a usability/ergonomics perspective until some post-1.0 improvements to the type system arrive. The combination of the way indexing works and the lack of dependent types make a lot of things much less elegant.

On the plus side, Rust might be able to provide some guarantees that are potentially very useful for optimization, which numerical libraries in C++ cannot provide (without compiler extensions). E.g. we know that two &mut references do not alias the same array (or overlapping views into an array) in safe code.
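A tiny example of that guarantee (axpy is an illustrative name; the C++ analogue would need restrict-style annotations to promise the same thing):

```rust
// The signature itself proves x and y are disjoint: one borrow is
// mutable and the other shared, so they cannot alias in safe code.
fn axpy(y: &mut [f64], x: &[f64], a: f64) {
    for (yi, xi) in y.iter_mut().zip(x) {
        *yi += a * xi;
    }
}

fn main() {
    let mut buf = vec![1.0, 2.0, 3.0, 4.0];
    // split_at_mut hands out two provably non-overlapping &mut views
    // of the same buffer.
    let (lo, hi) = buf.split_at_mut(2);
    axpy(lo, hi, 10.0);
    assert_eq!(lo, &[31.0, 42.0]);
}
```

An optimizer is free to vectorize the axpy loop without emitting runtime overlap checks, which is exactly the kind of information a C++ numerical library has to assert by convention.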

rust-ndarray does have a type parameter that indicates the number of index dimensions statically. It is limited to 12 dimensions because higher tuples don’t impl Eq, Clone etc.

I used the wrong verb, sorry. I meant the compiler would look up implementations on reference types, not on the actual unwrapped type it’s trying to deref.

I vaguely recall discussing with you on IRC a unified Deref trait, implemented for &'a T -> &'a U to get the current Deref behavior and for &'a mut T -> &'a mut U to get the current DerefMut behavior.

Just by the way, being in std or not makes no difference here: std is still just a library, and implementing multi-dimensional arrays there meets the same problems as implementing them externally.

Ah, it’s been a few months since I wrote the OP now, and I think I messed up what I was trying to say in point 1 near the top. I was trying to point out that I don’t consider [...[[T; M]; N...]; Z] (or Vec<Vec<...Vec<T>...>>) to be a good approach to multidimensional arrays. It wasn’t really about std or the language design, so much as an off-topic response to a few comments I’ve seen where people seemed to be assuming that [...[[T; M]; N...]; Z] was the type that a multidimensional array should have.

Ah, what you're probably thinking of - which I did mention at some point - is that if we had a way to be generic over the capabilities of reference types (shared, mut, move, out, ...), we could potentially unify the Deref hierarchy (as well as the Fn hierarchy, and potentially other hierarchies) into single traits with capability parameters (or associated capabilities). But this is somewhat pie-in-the-sky and it's not entirely clear how (or whether) it would work.

The part about impling for reference types sounds different, and might've been someone else's idea?

The fastest linear algebra libraries that I’ve tried are C++'s Eigen3 and Blaze. They “beat” BLAS, LAPACK, and Fortran every day for complicated expressions because, besides automatic vectorization and parallelization, they use their domain-specific knowledge of the whole expression to perform domain-specific optimizations (e.g. automatic kernel fusion is what beats “naive” BLAS). To achieve this they use HKTs to build up linear algebra ASTs and perform optimizations (kernel fusion, no-op removal, coalescing memory access…) before generating the code for the computation.
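The AST-building idea (expression templates, in C++ terms) can be sketched in Rust too. This is a deliberately minimal toy, with illustrative names, showing only the core trick: `a + b` builds a lazy node instead of allocating, and evaluation fuses the whole expression into one loop:

```rust
use std::ops::Add;

// A concrete vector of f64.
struct Vec64(Vec<f64>);

// An unevaluated elementwise-sum node in the expression AST.
struct Sum<L, R>(L, R);

// Anything that can be evaluated elementwise.
trait Expr {
    fn len(&self) -> usize;
    fn at(&self, i: usize) -> f64;
}

impl Expr for Vec64 {
    fn len(&self) -> usize { self.0.len() }
    fn at(&self, i: usize) -> f64 { self.0[i] }
}

impl<'a> Expr for &'a Vec64 {
    fn len(&self) -> usize { self.0.len() }
    fn at(&self, i: usize) -> f64 { self.0[i] }
}

impl<L: Expr, R: Expr> Expr for Sum<L, R> {
    fn len(&self) -> usize { self.0.len() }
    fn at(&self, i: usize) -> f64 { self.0.at(i) + self.1.at(i) }
}

// `+` on a borrowed Vec64 builds an AST node; no work happens yet.
impl<'a, R: Expr> Add<R> for &'a Vec64 {
    type Output = Sum<&'a Vec64, R>;
    fn add(self, rhs: R) -> Self::Output { Sum(self, rhs) }
}

// Evaluation walks the AST once per element: a + b + c becomes a
// single fused loop with no temporaries.
fn eval<E: Expr>(e: &E) -> Vec64 {
    Vec64((0..e.len()).map(|i| e.at(i)).collect())
}

fn main() {
    let a = Vec64(vec![1.0, 2.0]);
    let b = Vec64(vec![10.0, 20.0]);
    let c = Vec64(vec![100.0, 200.0]);
    let r = eval(&(&a + (&b + &c)));
    assert_eq!(r.0, vec![111.0, 222.0]);
}
```

Real libraries go much further (rewriting the AST, dispatching to kernels, etc.), but the monomorphized `Sum` types are the same mechanism Eigen and Blaze use.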

They also use type-level integers extensively. For example, they use them on matrix types for number of rows/columns, maximum number of rows/columns (for stack allocated dynamic matrices of bounded size), and row/col-major order. On view types they encode the bounds of the view if known at compile time, and information like is this a “matrix” view, or an “array” view, …

I don’t know how domain-specific optimizations could be done in Rust without macros, but type-level integers are a promising start that should already allow implementing much better libraries than what is available right now.
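For what type-level integers buy you, here is a sketch using const generics (a feature Rust gained well after this thread; the names are illustrative). The dimensions live in the type, so a shape mismatch is a compile error rather than a runtime panic, as with Eigen’s fixed-size matrices:

```rust
// A stack-allocated matrix whose shape is part of its type.
#[derive(Debug, PartialEq)]
struct Matrix<const R: usize, const C: usize> {
    data: [[f64; C]; R],
}

// The inner dimension K must match, and the result shape R x C is
// computed by the type system; no runtime shape checks needed.
fn matmul<const R: usize, const K: usize, const C: usize>(
    a: &Matrix<R, K>,
    b: &Matrix<K, C>,
) -> Matrix<R, C> {
    let mut out = Matrix { data: [[0.0; C]; R] };
    for i in 0..R {
        for j in 0..C {
            for k in 0..K {
                out.data[i][j] += a.data[i][k] * b.data[k][j];
            }
        }
    }
    out
}

fn main() {
    let id = Matrix::<2, 2> { data: [[1.0, 0.0], [0.0, 1.0]] };
    let m = Matrix::<2, 2> { data: [[1.0, 2.0], [3.0, 4.0]] };
    let p = matmul(&id, &m);
    assert_eq!(p.data, [[1.0, 2.0], [3.0, 4.0]]);
}
```

Trying to multiply a Matrix<2, 3> by a Matrix<2, 3> simply fails to type-check, since K appears in both argument types.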

For a “not awful” way of building an EDSL inside the language, one can take a look at C++'s Boost.Proto (or its C++11 version, which is at github.com/ericniebler/proto0x). It is a library for building EDSLs. I don’t know how C++14 constexpr affects its design, though.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.