As part of brainstorming 2019 priorities, one of the things I’ve been contemplating is “when do people pick homogeneous tuples (or tuple-structs) instead of arrays?” We’re actually getting to a pretty good spot there, especially since fixed-length slice patterns stabilized in 1.26.
The big difference that still exists is that the compiler understands disjoint borrows for tuples/structs, so foo(&mut x.0, &mut x.1) works where foo(&mut x[0], &mut x[1]) doesn’t. I think the latter not working is pretty much fine – since it goes through IndexMut and takes runtime values, rejecting it is arguably a good thing.
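To make the difference concrete, here is a minimal sketch (the two-argument `foo` is hypothetical), including the `split_at_mut` workaround you need for arrays today:

```rust
fn foo(a: &mut i32, b: &mut i32) {
    std::mem::swap(a, b);
}

fn main() {
    // Tuples: the borrow checker sees two disjoint fields.
    let mut t = (1, 2);
    foo(&mut t.0, &mut t.1); // compiles
    assert_eq!(t, (2, 1));

    // Arrays: the borrow checker treats `x[0]` and `x[1]` as
    // overlapping borrows of all of `x`, even with constant indices.
    let mut x = [1, 2];
    // foo(&mut x[0], &mut x[1]); // error[E0499]: cannot borrow `x` as mutable more than once

    // Today's workaround: split the array into two disjoint slices.
    let (a, b) = x.split_at_mut(1);
    foo(&mut a[0], &mut b[0]);
    assert_eq!(x, [2, 1]);
}
```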
The idea: what if we let the former just work for arrays?
- It’s not ambiguous, since arrays don’t have fields.
- It’s clear that it doesn’t support runtime indexes.
- It thus wouldn’t compile if you use an index that doesn’t exist.
- It’s the same syntax as for tuples/structs, where the compiler understands disjoint borrows.
- It wouldn’t allow simultaneous &mut x.0 and &mut x[1], exactly as if you implemented IndexMut on a tuple struct (since the latter is a borrow of the whole object, not just a field).
- It’s hopefully easy in borrowck, since it’s the same logic as if [T; N] were sugar for a #[repr(linear)] struct ArrayN<T>(T, T, ..., T);
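For illustration, a sketch of that desugaring for N = 3 (`Array3` is an invented name, and `#[repr(linear)]` doesn’t exist today, so it’s omitted): the disjoint borrows the proposal wants already compile on the tuple-struct form.

```rust
// Hypothetical desugaring of [T; 3] as a tuple struct.
struct Array3<T>(T, T, T);

fn main() {
    let mut a = Array3(10, 20, 30);
    // Disjoint field borrows are fine on a tuple struct today --
    // exactly what the proposal wants `a.0`/`a.1` on [T; 3] to mean.
    let x = &mut a.0;
    let y = &mut a.1;
    std::mem::swap(x, y);
    assert_eq!((a.0, a.1, a.2), (20, 10, 30));
}
```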
Thoughts?
Edit after some discussion on discord:
I actually prefer the .0 form to making x[0] work somehow (for const expressions or literals or something), since I like the syntactic difference. I don’t want people writing something with constants first, then having the code break when they move it to using a variable. (That goes in the same category for me as the fact that if false isn’t treated as unreachable, and that .field works differently from .field().)
I don’t know if this should be allowed on slices. Good point, @RustyYato: Allowing this on slices would mean that “field access” would throw, breaking the parallels this is trying to set up, so this shouldn’t be allowed on slices.
If this were implemented, when would you ever want to write a[0] instead of a.0?
Unless I’m missing something big, it seems like this would be effectively replacing today’s array syntax with two separate syntaxes depending on whether the indexes are literals or variables, and creates unnecessary confusion about tuples vs arrays.
But I have no objection to making the borrow checker detect mutually disjoint borrows when they’re this obvious (as well as making things like [1, 2][42] a compile error). I’m just not convinced on the syntax change.
Conversely, to me this is a reason to prefer making the x[0] syntax work (on arrays and on slices): the "disjoint elements can be borrowed independently" logic is not affected by the possibility of panics, and the extra flexibility in borrowing is at least as useful for slices as for fixed-size arrays, probably more so since slices are more common.
If we want to follow precedent: Rust added a separate loop construct instead of special-casing while true (the reason it matters is initialization checking). Whether to special case indexing for literals feels analogous to me.
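For reference, the precedent in question: `loop` participates in definite-initialization checking where `while true` does not.

```rust
fn main() {
    let x;
    loop {
        x = 1;
        break;
    }
    // `x` is known to be initialized: a `loop` can only exit via `break`.
    assert_eq!(x, 1);

    // The `while true` version is rejected rather than special-cased:
    // let y;
    // while true { y = 1; break; }
    // println!("{}", y); // error[E0381]: `y` may not be initialized
}
```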
Why not simply make it easier to declare long homogeneous arrays? I.e. add a syntax like (i32; 64) analogous to [i32; 64]. That way, the programmer can choose which indexer syntax and semantics they want.
Additionally, I'm not sure how useful this would be without the ability to create disjoint borrows on ranges of array data.
It's not a new syntax, though, so arguably it would just be more consistent to make this work.
It is weird. For me, there’s a distinction between an array (a homogeneous sequence where order has meaning, so I can do things like x[i], x[i + 1]) and a tuple (where there’s no notion of order; it’s just a struct with potentially different types, and .0 is just an auto-generated field name). Using un-ordered auto-generated names for something with order feels wrong, like a mixing of different concepts.
It kind of special-cases something. Why shouldn’t this work for Vecs and slices while it does for arrays?
I’d be for making that (&mut x[0], &mut x[1]) or even (&mut x[i], &mut x[i + 1]) work. A brainstorming idea for that:
- Have some unsafe marker trait (tentatively named DisjointIndexMut) that would claim that if you put different indices into it, it produces disjoint borrows.
- If the borrow checker can prove the indices are distinct, it allows it.
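As a library-level sketch of what that contract could look like today (`DisjointIndexMut` is the tentative name from the thread; the method shape and slice impl here are my own invention — the borrow-checker integration would be the language-side part):

```rust
/// Hypothetical marker trait: the implementor promises that distinct
/// indices refer to non-overlapping data, so handing out two `&mut`
/// borrows at provably different indices is sound.
unsafe trait DisjointIndexMut {
    type Output;
    /// Borrow two distinct indices at once.
    fn index_mut2(&mut self, i: usize, j: usize) -> (&mut Self::Output, &mut Self::Output);
}

unsafe impl<T> DisjointIndexMut for [T] {
    type Output = T;
    fn index_mut2(&mut self, i: usize, j: usize) -> (&mut T, &mut T) {
        assert_ne!(i, j, "indices must be distinct");
        assert!(i < self.len() && j < self.len());
        let p = self.as_mut_ptr();
        // SAFETY: i != j and both are in bounds, so the two
        // references point at non-overlapping elements.
        unsafe { (&mut *p.add(i), &mut *p.add(j)) }
    }
}

fn main() {
    let mut x = [1, 2, 3];
    let (a, b) = x.index_mut2(0, 2);
    std::mem::swap(a, b);
    assert_eq!(x, [3, 2, 1]);
}
```

This does the disjointness check at runtime; the proposal would let the borrow checker discharge it at compile time when the indices are provably distinct.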
x.N is too restrictive lexically; you can’t put an arbitrary constant expression there (which would be useful with variadics).
Adding hacks to x[EXPR] to detect x[CONST_EXPR] and treat it differently (avoid overloading, switch to field access semantics) doesn't feel like a good solution to me (this is also a breaking change technically).
Perhaps a separate non-overloadable operator x.[CONST_EXPR] would be more appropriate for compile-time indexing, but the motivation is probably not large enough until variadics arrive.
The separate operator could be used with arbitrary structs as well
```rust
struct S { field: u8 }
let s = S { field: 10 };
let z = s.[0]; // z = 10
```
except that uses from other crates would need to be prohibited by default, to allow arbitrary field reordering without causing compatibility issues.
If people find it weird then that itself is an argument against doing this (regardless of why); but for the record, the idea that tuples don't have an order is weird to me...
It kind of special-cases something. Why shouldn’t this work for Vecs and slices while it does for arrays?
What I mean is, tuples have an order that generates the „field names“. The fields are ordered more because you have to write them in some specific order than as a desired property of the data structure. A tuple doesn’t guarantee that .1 lives at the next address after .0 the way an array does. It makes no sense to ask for the „next element in tuple order“ ‒ if you want that, you probably have the wrong data structure.
This question was answered:
That was more of an argument than a question: if we want the ability to take &mut access to two elements of an array, we probably also want the same for slices, Vecs, possibly hash maps. If there’s enough motivation to solve the issue, the solution should be applicable in general. I don’t want to leave poor Vecs out of the party.
Allowing this on slices would mean that “field access” would throw, breaking the parallels this is trying to set up.
Is this a big issue? It would certainly be weird if evaluating foo.0 could panic, but it’s not hard to understand how it makes sense. I think people could get used to it. Having this feature for arrays but not slices would also be weird and would probably be quite frustrating in practice.
Do you have any suggestions for how we could possibly make this work generally though? How can the compiler know that foo["wow"] and foo["bar"] evaluate to two different things when .index can be an arbitrary function?
I would prefer keeping them separate. Mostly because if you add a length function and .0 to arrays, people will move on and start asking for a for cycle over a tuple, or a for cycle whenever the tuple happens to be type-homogeneous.
Do you have any suggestions for how we could possibly make this work generally though? How can the compiler know that foo["wow"] and foo["bar"] evaluate to two different things when .index can be an arbitrary function?
On which level of „How“ are we talking? Above, I proposed an unsafe marker trait for that, so implementation could promise to always return two different things when the indices are different. But then, we might want to go one step further and make such marker trait work for .get("foo") and .get("bar") too, or even further an arbitrary marker trait for custom function too. And for that, I do not have an idea, but someone else might, maybe?
Well, I might be wrong, but I actually think the ability to write a for cycle over a (heterogeneous) tuple is mandatory for using tuples to solve the variadic generics problem, if that’s the plan…
It could go either way. It could also be really annoying that after getting used to the compiler correctly erroring when you .3 on a [T; 3], you use .3 on something that turns out to actually be a [T] and you're confused that it didn't error.
The interesting parallel I see here is when we made tuple structs "desugar" into a function and a normal struct that has fields named by positions. In some sense, this proposal is to "desugar" an array into a tuple struct and an Index(Mut) impl and a #[repr].