From time to time I’ve found myself wanting to express a weird concept that I have never found expressible in any language I know, something that might be called “variable-dependent generic types”. For example:
```rust
struct VectorSpace { … }

struct Vector<let v: VectorSpace> { … }

impl<let v: VectorSpace> Add for Vector<v> { … }

fn main() {
    let a = VectorSpace::new();
    let b = VectorSpace::new();
    let x = Vector<a>::new(…);
    let y = Vector<a>::new(…);
    let z = Vector<b>::new(…);
    // x and y have the same type, but the type of z is distinct
    let u = x + y; // ok
    let v = x + z; // compilation error
}
```
Disclaimer: this is not a (pre-)RFC and is not intended to become one in the near future. I’m not even sure there is a workable idea in this design space. I just hope to get feedback from other Rust users about similar design issues: how they can be addressed, whether related type-system concepts are already known, and whether such concepts might be considered for Rust in the future.
The problem I’m trying to solve is that sometimes we have a bunch of objects depending on what we could loosely call a common context (a common set of data that all these objects “share” and rely on). The crux of the matter is that objects related to different contexts should not mix meaningfully, so any attempt to connect them should be invalidated somehow. By “connecting them” I mean using them as parameters of the same method, just like the pairs of vectors added in the previous example. Generally this constraint is not (and, as far as I can tell, cannot be) encoded in the type system. As a consequence, the only possible behavior left is to make every object keep a pointer/reference to the appropriate context, check that the contexts are consistent, and fail at runtime when incompatible contexts are erroneously mixed.
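For concreteness, here is a minimal sketch of that runtime-checked fallback. The names (`VectorSpace`, the atomic id counter) are my own illustration of the pattern described above, not an established API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Each context receives a unique runtime id; every dependent object
// stores that id so operations can verify consistency dynamically.
static NEXT_ID: AtomicU64 = AtomicU64::new(0);

struct VectorSpace { id: u64 }

impl VectorSpace {
    fn new() -> Self {
        VectorSpace { id: NEXT_ID.fetch_add(1, Ordering::Relaxed) }
    }
}

struct Vector<'s> { space: &'s VectorSpace, coords: Vec<f64> }

impl<'s> Vector<'s> {
    fn new(space: &'s VectorSpace, coords: Vec<f64>) -> Self {
        Vector { space, coords }
    }

    // The mismatch is only caught at runtime, not by the type checker.
    fn add(&self, other: &Vector) -> Vec<f64> {
        assert_eq!(self.space.id, other.space.id, "vectors from different spaces");
        self.coords.iter().zip(&other.coords).map(|(a, b)| a + b).collect()
    }
}

fn main() {
    let a = VectorSpace::new();
    let b = VectorSpace::new();
    let x = Vector::new(&a, vec![1.0, 2.0]);
    let y = Vector::new(&a, vec![3.0, 4.0]);
    let _z = Vector::new(&b, vec![5.0, 6.0]);
    assert_eq!(x.add(&y), vec![4.0, 6.0]); // ok
    // x.add(&_z) would panic at runtime instead of failing to compile
}
```

Nothing stops a caller from mixing `x` and `_z` until the program is actually running, which is exactly the weakness I’d like the type system to remove.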
In certain situations, the context may be represented by a value known at compilation time. In this case the const generics RFC might be enough. But this implies two strong limitations:
- Compile-time constants … must be known at compile time. This means the compiler must already know the complete list of types actually used from the abstract family `A<const K: T>`. Consequently, it is impossible to create contexts dynamically.
- With such an approach, contexts cannot have internal mutable state: they must be pure values in the mathematical sense.
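For comparison, a sketch of what const generics already express when the context is a compile-time value, here just a dimension `N` (this is my illustration, not code from the RFC):

```rust
// With const generics the "context" is a compile-time constant: the
// dimension N is part of the type, so mismatches are rejected statically.
struct Vector<const N: usize> { coords: [f64; N] }

impl<const N: usize> Vector<N> {
    fn add(&self, other: &Vector<N>) -> Vector<N> {
        let mut coords = [0.0; N];
        for i in 0..N {
            coords[i] = self.coords[i] + other.coords[i];
        }
        Vector { coords }
    }
}

fn main() {
    let x = Vector::<2> { coords: [1.0, 2.0] };
    let y = Vector::<2> { coords: [3.0, 4.0] };
    let _z = Vector::<3> { coords: [0.0; 3] };
    assert_eq!(x.add(&y).coords, [4.0, 6.0]);
    // x.add(&_z) fails to compile: expected `Vector<2>`, found `Vector<3>`
}
```

The catch is that every `N` must be spelled out somewhere in the program, which is precisely the first limitation above.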
The motivating example may look a bit abstract, but I can think of many situations where this could be used:
- Imagine that you have two iterators pointing to different items of a collection. You may want to define some sort of `CollectionSlice` representing the sequence of items between these two positions. Of course, you need both boundaries to belong to the same collection.
- Similarly, one can consider the task of finding the shortest path between two vertices of a given graph.
- One can only apply arithmetic operations to square matrices if their sizes match. Const generics can be used if you know the size at compile time, but if you’re loading matrices from files or a database the size may only be known at runtime. Still, it would be nice to check size consistency once early on, encode it in the type system, and then apply whatever complex sequence of operations is needed.
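There is a partial bridge today: when the *expected* size is known statically, a fallible conversion can validate runtime data once, and later operations need no further checks. A sketch, with hypothetical names (`Matrix`, `from_runtime`):

```rust
// An N x N matrix stored row-major; N is a compile-time constant.
struct Matrix<const N: usize> { data: Vec<f64> }

impl<const N: usize> Matrix<N> {
    // Validate runtime-sized data once, at the boundary.
    fn from_runtime(data: Vec<f64>) -> Option<Self> {
        (data.len() == N * N).then_some(Matrix { data })
    }

    // No size check needed here: both operands are statically N x N.
    fn add(&self, other: &Matrix<N>) -> Matrix<N> {
        let data = self.data.iter().zip(&other.data).map(|(a, b)| a + b).collect();
        Matrix { data }
    }
}

fn main() {
    let loaded = vec![1.0; 4]; // e.g. read from a file at runtime
    let m = Matrix::<2>::from_runtime(loaded).expect("size mismatch");
    let n = Matrix::<2>::from_runtime(vec![2.0; 4]).unwrap();
    assert_eq!(m.add(&n).data, vec![3.0; 4]);
    assert!(Matrix::<2>::from_runtime(vec![0.0; 3]).is_none());
}
```

But if the size itself is only discovered at runtime, there is no way to choose `N` at all, which is exactly the limitation described above.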
Something that strikes me is the similarity between this kind of dependency on variables and lifetimes: `Vector<b>` should not outlive the scope of `b`, just like `Foo<'a>` should not outlive the lifetime `'a`. This seems to suggest Rust may be a good contender to express this kind of idea. I’m aware many aspects would need to be clarified (How does type unification work? Can we move or borrow the variable, and under which conditions? Does it require monomorphization?) and investigation might well show that in the end this cannot work. Or there might already exist other good-enough strategies to encode similar constraints. Just asking…
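For what it’s worth, one existing strategy that exploits exactly this lifetime analogy is the “branding” (or “generativity”) pattern, used for example by the generativity and ghost-cell crates: an invariant, higher-ranked lifetime stands in for the context variable. A rough sketch (all names are my own):

```rust
use std::marker::PhantomData;

// The brand is invariant in 'id thanks to the fn(&'id ()) -> &'id ()
// phantom, so two distinct brands can never be unified by the compiler.
#[derive(Clone, Copy)]
struct Brand<'id>(PhantomData<fn(&'id ()) -> &'id ()>);

struct Vector<'id> { coords: Vec<f64>, _brand: Brand<'id> }

// Each call mints a fresh, unnameable lifetime 'id: the closure must
// work for *every* 'id, so brands from different calls stay distinct.
fn with_space<R>(f: impl for<'id> FnOnce(Brand<'id>) -> R) -> R {
    f(Brand(PhantomData))
}

impl<'id> Vector<'id> {
    fn new(brand: Brand<'id>, coords: Vec<f64>) -> Self {
        Vector { coords, _brand: brand }
    }

    // No runtime check: both arguments carry the same brand by construction.
    fn add(&self, other: &Vector<'id>) -> Vec<f64> {
        self.coords.iter().zip(&other.coords).map(|(a, b)| a + b).collect()
    }
}

fn main() {
    with_space(|a| {
        let x = Vector::new(a, vec![1.0, 2.0]);
        let y = Vector::new(a, vec![3.0, 4.0]);
        assert_eq!(x.add(&y), vec![4.0, 6.0]);
        with_space(|b| {
            let _z = Vector::new(b, vec![5.0, 6.0]);
            // x.add(&_z) is rejected at compile time: the brands differ
        });
    });
}
```

This gets the static separation of contexts, but at a price: the context only exists inside the closure, so it cannot be stored or created freely at arbitrary points, which is part of what the hypothetical `Vector<let v: VectorSpace>` syntax would relax.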