I do want to consider it; I just want to make sure that the thing being considered is something that can be done, would be reasonable to do, and makes sense under the `int` name, and I'm not sure all three apply here.
I haven't looked into this nearly as much as folks like you have, which is why I can't rule this out right now. It's just that, from everything you and others have said, there are still a lot of unanswered questions.
I talk about the type of the bounds, but honestly, the bigger issue is how exactly these integers should behave, and what "fixed-precision, bounded integers" would look like in the language. Do they just never overflow, with the bounds growing each time you perform operations on them? How would compound assignment operations work for these, or would they just not have them? Or do they overflow, wrapping within their range?
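To make that first option concrete, here's a minimal Rust sketch of the "never overflow, bounds keep growing" behaviour, with the bounds tracked at runtime purely for illustration (a real design would presumably put them in the type); the `RangedInt` name and everything else here is made up, not anyone's actual proposal. The point it shows is that the sum of two `int<0..=255>` values has the wider type `int<0..=510>`, which is exactly why compound assignment like `+=` stops making sense: the result no longer has the left-hand side's type.

```rust
// Hypothetical "never overflow, bounds keep growing" semantics,
// with bounds tracked at runtime for illustration only.
#[derive(Debug, Clone, Copy)]
struct RangedInt {
    lo: i128,    // lower bound of the value's type
    hi: i128,    // upper bound of the value's type
    value: i128, // the value itself, always within [lo, hi]
}

impl RangedInt {
    fn new(lo: i128, hi: i128, value: i128) -> Self {
        assert!(lo <= value && value <= hi, "value outside declared range");
        Self { lo, hi, value }
    }

    // Addition widens the bounds instead of overflowing: the result's
    // range is the sum of the operands' ranges.
    fn add(self, other: RangedInt) -> RangedInt {
        RangedInt::new(
            self.lo + other.lo,
            self.hi + other.hi,
            self.value + other.value,
        )
    }
}

fn main() {
    // Two values of the hypothetical type int<0..=255>.
    let a = RangedInt::new(0, 255, 200);
    let b = RangedInt::new(0, 255, 100);
    // Their sum has the wider type int<0..=510>, so it no longer fits the
    // operands' own type, which is why `a += b` is hard to define.
    let c = a.add(b);
    println!("{:?}", c); // RangedInt { lo: 0, hi: 510, value: 300 }
}
```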
And again, we're deciding to break the symmetry for the name of a fundamental type in the language, going with `uint` and `sint` instead of `uint` and `int` to match `uN` and `iN` everywhere, all because this hypothetical ultimate integer type is maybe going to be designed in a couple of years and then implemented a couple of years after that?
Why can't the ranged integer type just be called `Int`? Or `integer`? It feels like it'll be way less used than the existing integer types for sure, since it adds a lot of complexity around bounds, when most programs just choose integer types as a way of optimising the size of things in memory, not strictly bounding things in their APIs. Certainly, it would be weird that the generalisation of `i32` is `int<-0x80000000..=0x7FFFFFFF>` or `sint<32>` and not `int<32>`.
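For what it's worth, here's a toy Rust sketch of that comparison, using placeholder names (`SInt`, `Ranged`) rather than anything from the actual proposal: the bit-width parameterization spells the 32-bit signed type by its width, while the range parameterization has to write out the bounds that `i32` already implies.

```rust
// Toy contrast between parameterizing by bit width and by range.
// SInt and Ranged are placeholder names, not proposed syntax.

// Parameterized by bit width: SInt::<32> plays the role of i32.
struct SInt<const BITS: u32>(i64);

// Parameterized by an inclusive range: Ranged::<LO, HI> plays the role
// of the hypothetical int<LO..=HI>.
struct Ranged<const LO: i64, const HI: i64>(i64);

impl<const BITS: u32> SInt<BITS> {
    // The bounds that the range-parameterized spelling has to write out.
    fn bounds() -> (i64, i64) {
        (-(1i64 << (BITS - 1)), (1i64 << (BITS - 1)) - 1)
    }
}

fn main() {
    // Both of these describe "a 32-bit signed integer":
    let by_width: SInt<32> = SInt(-5);
    let by_range: Ranged<{ -0x8000_0000 }, 0x7FFF_FFFF> = Ranged(-5);

    let (lo, hi) = SInt::<32>::bounds();
    println!("i32 as a range: {}..={}", lo, hi); // -2147483648..=2147483647
    println!("{} {}", by_width.0, by_range.0);
}
```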
Like, choosing the name `int` for the signed, bit-generic integer type makes a lot of sense, and unless ranged integers feel both canonical and implementable in the near future, I don't see why we should use the `int` name for them. They feel canonical mathematically, but they don't feel canonical in the way the language is designed, and that's the bigger issue IMHO.