Nascent Idea: allow `.0`, `.1`, etc on arrays


I think I prefer the [0] syntax. Neither the compiler nor programmers need a separate syntax to handle this use case.


Conversely, to me this is a reason to prefer making the x[0] syntax work (on arrays and on slices): the “disjoint elements can be borrowed independently” logic is not affected by the possibility of panics, and the extra flexibility in borrowing is at least as useful for slices as for fixed-size arrays, probably more so since slices are more common.


If we want to follow precedent: Rust added a separate loop construct instead of special-casing while true (the reason it matters is initialization checking). Whether to special case indexing for literals feels analogous to me.


Why not simply make it easier to declare long homogeneous arrays? I.e. add a syntax like (i32; 64) analogous to [i32; 64]. That way, the programmer can choose which indexing syntax and semantics they want.

Additionally, I’m not sure how useful this would be without the ability to create disjoint borrows on ranges of array data.

It’s not a new syntax, though, so arguably it would just be more consistent to make this work.


I would be against for two reasons:

  • It is weird. For me, there’s a distinction between an array (a homogeneous sequence where order has meaning, so I can do things like x[i], x[i + 1]) and a tuple (where there’s no notion of order; it’s just a struct with potentially different types, and .0 is just an auto-generated field name). Using un-ordered auto-generated names for something with order feels wrong, like a mixing of different concepts.
  • It kind of special-cases something. Why shouldn’t this work for Vecs and slices while it does for arrays?

I’d be for making that (&mut x[0], &mut x[1]) or even (&mut x[i], &mut x[i + 1]) work. A brainstorming idea for that:

  • Have some unsafe marker trait (tentatively named DisjointIndexMut) that would claim that if you put different indices into it, it produces disjoint borrows.
  • If the borrow checker can prove the indices are distinct indices, it allows it.


x.N is lexically too restrictive: you can’t put an arbitrary constant there (which would be useful with variadics).

Adding hacks to x[EXPR] to detect x[CONST_EXPR] and treat it differently (avoid overloading, switch to field access semantics) doesn’t feel like a good solution to me (this is also a breaking change technically).

Perhaps a separate non-overloadable operator x.[CONST_EXPR] would be more appropriate for compile-time indexing, but the motivation is probably not large enough until variadics arrive.

The separate operator could be used with arbitrary structs as well:

struct S { field: u8 }

let s = S { field: 10 };
let z = s.[0]; // z = 10

except that uses from other crates would need to be prohibited by default, to allow arbitrary field reordering without causing compatibility issues.

Idea: TryIndexAs/TryIndexMutAs for generalization over tuples

If people find it weird then that itself is an argument against doing this (regardless of why); but for the record, the idea that tuples don’t have an order is weird to me.

It kind of special-cases something. Why shouldn’t this work for Vec s and slices while it does for arrays?

This question was answered:


What I mean is, tuples have an order that generates the “field names”. The fields are ordered more because you have to write them in some specific order than as a desired property of the data structure. A tuple doesn’t guarantee that .1 lives on the next address after .0, as an array does. It makes no sense to ask for the “next element in tuple order” ‒ if you want that, you probably have the wrong data structure.

This question was answered:

That was more of an argument than a question. That if we want to have an ability to have &mut access to two elements of an array, we probably also want the same for slices, vecs, possibly hash-maps. That if there’s enough motivation to solve the issue, the solution should be applicable in general. I don’t want to leave poor Vecs out of the party :innocent:.


Allowing this on slices would mean that “field access” would throw, breaking the parallels this is trying to set up.

Is this a big issue? It would certainly be weird if evaluating foo.0 could panic, but it’s not hard to understand how it makes sense. I think people could get used to it. Having this feature for arrays but not slices would also be weird and would probably be quite frustrating in practice.

Do you have any suggestions for how we could possibly make this work generally though? How can the compiler know that foo["wow"] and foo["bar"] evaluate to two different things when .index can be an arbitrary function?


I’ve seen some users asking “how to get length of a tuple”, so the line between arrays and tuples is already blurry.

So I guess the key question is: do we want tuples to be more like arrays, or clearly distinct from arrays?


I would prefer keeping them separate. Mostly because if you add a length function and .0 to arrays, people will move on and start asking for a for loop over a tuple, or a for loop whenever the tuple happens to be type-homogeneous.

Do you have any suggestions for how we could possibly make this work generally though? How can the compiler know that foo["wow"] and foo["bar"] evaluate to two different things when .index can be an arbitrary function?

On which level of “how” are we talking? Above, I proposed an unsafe marker trait for that, so an implementation could promise to always return two different things when the indices are different. But then, we might want to go one step further and make such a marker trait work for .get("foo") and .get("bar") too, or even further, an arbitrary marker trait for custom functions too. And for that, I do not have an idea, but someone else might, maybe?


Well, I might be wrong, but I actually think the ability to write a for loop over a (heterogeneous) tuple is mandatory for using tuples to solve the variadic generics problem, if that’s the plan…


It could go either way. It could also be really annoying that after getting used to the compiler correctly erroring when you .3 on a [T; 3], you use .3 on something that turns out to actually be a [T] and you’re confused that it didn’t error.

The interesting parallel I see here is when we made tuple structs “desugar” into a function and a normal struct that has fields named by positions. In some sense, this proposal is to “desugar” an array into a tuple struct and an Index(Mut) impl and a #[repr].


Isn’t that exactly why we have arrays?


@storyfeet It would be a great ability to transform heterogeneity into homogeneity.


No. The “variadic generics problem” is how Rust will permit users to write generics with a variable number of type parameters, as C++ does:

template<typename T> T adder(T v) { return v; } 

template<typename T, typename... Args> T adder(T first, Args... args) { 
    return first + adder(args...); 
}

“Heterogeneous” means that the type parameters may be different types.

Arrays cannot solve the variadic generics problem, and cannot be heterogeneous.

Edit: the reasons we wouldn’t just adopt something like the C++ solution are that, first, it cannot work without either function overloading (to provide a base-case and terminate the recursion) or const-if (to manually handle the base case within the same function); and, second, it’s just not very clean or easy to work with. (C++ has some fairly ugly standard library features to emulate iteration over the types, but it’s…still not wonderful to work with.)


:-1: I think having two syntaxes for the same thing would be confusing to new users. It’s sort of like NLL. Given a choice between:

  • requiring explicit syntax to express disjointness, in order to make the borrow checker simpler to describe (pre-NLL, adding extra {} scopes to express disjointness in time; in this idea, using a new syntax to express disjointness in space); or
  • just making the obvious thing work, at the cost of a more complex borrow checker

…the former has an intuitive appeal, but it turns out the latter is more ergonomic.


I think having .0 kind of syntax on arrays is a workaround and what we really want is to have the compiler not complain when we mutably borrow a provable-at-compile-time-to-be-safe piece of array. I’d rather not see any workarounds in the language.


What the OP’s request for .0-like tuple-field-access notation for arrays shows is that there should be a way to view an array as a struct (and maybe also the other way around, when the tuple is obviously homogeneous, given the right #[repr]).

I’d imagine an (ideally no-op) function/macro to transform a [T; n] into a (T, ..., T) (with the right #[repr]) so that statically disjoint mutable access is possible, only to change it back afterwards to enable array-only patterns like iteration.

Another option/pattern would be to have the following method for [T; n] forall n [e.g. n=3]:

fn with_tuple_view<R> (
    self: &mut [T; 3],
    f: impl FnOnce(&mut (T, T, T)) -> R,
) -> R;

The idea being that, provided transmuting a &mut [T; 3] into a &mut (T, T, T) is sound (I don’t know the #[repr] rules or what constraints the compiler places on tuples; c.f. the above comment about tuple order not having to be well-defined, e.g. when optimizing for size), this API would allow such a cheap transmutation within a safe API that lets you go back and forth.

EDIT: forall n being, at the moment, something like forall n < 128, since we don’t yet have type-level integers


I think that we could guarantee that homogeneous tuples have the same representation as arrays of the same length.