Thoughts on pattern types and subtyping

You're right, sorry for the noise.

I have a question about arithmetic operators. Suppose we have this code:

let a: (i32 in 1..3) = foo();
let b: (i32 in 1..3) = bar();
let c = a + b;

What is the type of the result? Logically we can predict that the result will have bounds [2..5), so would the result have the same type as the operands (i32 in 1..3), or would the bounds be adjusted to (i32 in 2..5)?

Also the next example:

let a: (i32 in 1..3) = foo();
let b: (i32 in 2..5) = bar();
let c = a + b;

Is this code valid? The operands have different bounds but the same supertype; can we add them?

My hope is that we can. There are two approaches to making it work:

One possibility is that neither of the pattern types implements Add, and they get converted to their supertype as described in this comment. In that case, the result of the addition has type i32. This is the less powerful option, but also the simpler one, with fewer issues to resolve.
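A minimal sketch of option 1, under an assumption: pattern types aren't available on stable Rust, so a hypothetical `Bounded` newtype stands in for `i32 in LO..HI`. The point is that arithmetic only happens after widening to the supertype, so the result is a plain `i32` and the bounds are forgotten:

```rust
// Assumption: "i32 in LO..HI" is modeled by this hypothetical newtype.
#[derive(Clone, Copy, Debug)]
struct Bounded<const LO: i32, const HI: i32>(i32); // invariant: LO <= value < HI

impl<const LO: i32, const HI: i32> Bounded<LO, HI> {
    fn new(v: i32) -> Option<Self> {
        (LO..HI).contains(&v).then(|| Self(v))
    }
    // Option 1: the only way to do arithmetic is to widen to the supertype.
    fn widen(self) -> i32 {
        self.0
    }
}

fn main() {
    let a: Bounded<1, 3> = Bounded::new(2).unwrap();
    let b: Bounded<1, 3> = Bounded::new(1).unwrap();
    let c: i32 = a.widen() + b.widen(); // result is a plain i32; bounds are lost
    println!("{c}"); // prints 3
}
```

Whether the widening is an implicit coercion or an explicit step is exactly what the linked comment discusses; the sketch just makes the loss of bound information visible.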

The second possibility is that i32 in 1..3 does implement Add directly. I have no idea what the impl block would look like; I haven't thought about that. <i32 in 1..3 as Add>::Output would have to be i32 in 2..5, or some supertype of that, as you say.
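One way an option-2 impl could look, again modeling the pattern type with a hypothetical `Bounded` newtype (a general version would need to compute the output bounds from const generics, which today requires the unstable generic_const_exprs feature, so this only spells out the two concrete cases from the examples above):

```rust
use std::ops::Add;

// Assumption: "i32 in LO..HI" modeled by a newtype with half-open bounds.
#[derive(Clone, Copy, Debug)]
struct Bounded<const LO: i32, const HI: i32>(i32);

// "i32 in 1..3" + "i32 in 1..3": min sum 1+1 = 2, max sum 2+2 = 4, so 2..5.
impl Add for Bounded<1, 3> {
    type Output = Bounded<2, 5>;
    fn add(self, rhs: Self) -> Bounded<2, 5> {
        Bounded(self.0 + rhs.0)
    }
}

// The mixed case: "i32 in 1..3" + "i32 in 2..5":
// min sum 1+2 = 3, max sum 2+4 = 6, so 3..7.
impl Add<Bounded<2, 5>> for Bounded<1, 3> {
    type Output = Bounded<3, 7>;
    fn add(self, rhs: Bounded<2, 5>) -> Bounded<3, 7> {
        Bounded(self.0 + rhs.0)
    }
}

fn main() {
    let a = Bounded::<1, 3>(2);
    let b = Bounded::<2, 5>(4);
    let c = a + b; // has type Bounded<3, 7>
    println!("{}", c.0); // prints 6
}
```

The output type carrying the summed bounds is the part that makes this strictly more expressive than option 1, and also the part that raises the variance question below.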

This is where I worry the lack of variance on traits may become a problem. Ideally, for S <: T, you would want <S as Add>::Output <: <T as Add>::Output. But Rust doesn't guarantee that today, and there might already be impls of Add out there that don't adhere to it. Maybe that kind of trait variance isn't actually needed to make this feature work, though? I'm not sure.
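To illustrate why nothing guarantees that property today: Add places no constraint relating Output to the operand types, so an existing impl is free to produce anything (S and Unrelated are hypothetical types, purely for illustration):

```rust
use std::ops::Add;

struct S;
struct Unrelated;

// Perfectly legal today: Output need not relate to Self in any way, so no
// variance-like relationship between operand types and Output can be
// assumed to hold for the impls already out in the ecosystem.
impl Add for S {
    type Output = Unrelated;
    fn add(self, _rhs: S) -> Unrelated {
        Unrelated
    }
}

fn main() {
    let r = S + S;
    println!("{}", std::any::type_name_of_val(&r));
}
```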

In any case, option 2 should be backwards compatible to introduce on top of option 1, so option 1 makes sense for an MVP.

nitpick: the syntax is u32 is 2..6, not u32 in 2..6. Every time this feature has been discussed over the past year or so, it has used is, not in :stuck_out_tongue: Unfortunately the initial version of oli's post got this wrong. Option<u32> in None doesn't really make any sense, but Option<u32> is None imo does.

I know, I know, oli already said so above. But I actually like in better :stuck_out_tongue:

why do you prefer in? Option<u32> in None doesn't make sense to me; how is it "inside of None"? is, however, reads naturally: "this Option is a None", "this Option is a Some".

An alternative, "bigger" syntax was matches, which definitely reads the best: u32 matches 0..10, Option<u32> matches None.

An Option<i32> in None is a value of type Option<i32> that is contained in the subset of values that matches the pattern None. I get the equivalence between let _ = and is, but I'm not sure that's really a good thing, because it makes stuff like let foo: Option<i32> is None = None; confusing imo.

(none of this is a deeply held belief, just my initial impression)

matches or matching was what I was using in my private musings before reading oli's post, but it's long, and once I read the post, in seemed superior.


It wasn't about the size of each set, but about the size of a value from each set.

The interpretation I'm using here is that of the original usage of subtyping, i.e. not for lifetimes but for class-like inheritance. Under that interpretation, given an inheritance/subtype relation B subtype of A for types A and B, each value of B is necessarily also a value of A (the Liskov Substitution Principle at work).

In addition, B may or may not extend A's definition by adding fields. If it does, that means a value of B will be a value of A and more. That implies a value of B is larger than a value of A. But again, that excludes collections, as their usage can easily distort that picture.

It's also not necessarily meant to apply to Rust types, or to Rust lifetimes for that matter, but rather a musing that the originally stated POV was somewhat arbitrary.

The thing is – the inheritance relationship in OOP languages is not subtyping in the type theoretic sense! OOP did not invent the subtype relationship, but languages such as Java adopted the subclass/superclass terminology, which in turn confused people into thinking that subclasses are subtypes and superclasses supertypes. (The C++ terms "base class" and "derived class" are, at least, less confusing.)

If one wants to think of OOP class hierarchies from the sub/supertype perspective, then the important thing to understand is that if B extends A in Java, then B is not a subtype of A-in-itself. What it is, is a subtype of the transitive closure of every possible A (in the LSP sense) – that is, a subtype of the unbounded set of all possible subclasses of A including A itself. This agrees with the type theoretic definition of a subtype, and it is easy to see that indeed the inhabitants of B are necessarily a subset of the set of all inhabitants of A, just like the inhabitants of the refinement type i32 in 1..4 are a subset of the entire i32.

Yes, and this "extension of definition", or "implementation inheritance", is frowned upon even by OOP proponents, as evinced by the often repeated mantra "prefer composition to inheritance". Indeed, when B extends A in Java, what B actually is, is not a subtype of A at all, but rather the product type A × B′ (where B′ denotes the "extra" fields of B), just with some syntactic sugar added. And this is abundantly clear when you consider how the base class part is actually stored in a derived class instance. Thus inheritance is composition, not subtyping.
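The A × B′ view is exactly what you'd write in Rust, where the composition is explicit rather than sugared (A, B, and the field names are hypothetical, purely for illustration):

```rust
// Roughly: "class A { int x; }" and "class B extends A { int y; }" desugar to:
struct A {
    x: i32,
}

struct BExtra {
    y: i32, // B' : the fields B adds on top of A
}

struct B {
    base: A,       // the base-class part, stored inline...
    extra: BExtra, // ...composed with the extras: B = A × B'
}

fn main() {
    let b = B { base: A { x: 1 }, extra: BExtra { y: 2 } };
    println!("{} {}", b.base.x, b.extra.y); // prints "1 2"
}
```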

Finally, it should be noted that the way that implementation inheritance breaks the "nice" rules of type theoretic subtyping has made it necessary to invent all kinds of rules and patterns to ensure better code quality and avoid rampant violation of invariants, such as:

  • the LSP itself ("if you do this you'll break things, so please don't")
  • the template method pattern ("we're trying to guide you not to break this important thing")
  • final methods ("ok, we're done asking nicely, this part you cannot break"), and
  • final classes ("…maybe this whole inheritance thing wasn't such a good idea after all").

Re: the last part, it's not a coincidence that some critical vocabulary classes of Java, such as String (or, shudder, Integer), have always been final, since the beginning.


Actually, this rule has to go; it conflicts with the current behavior of 'static bounds. It wasn't essential to the scheme, so removing it shouldn't be an issue.

Related: Flux