It seems like a natural extension of how variables are defined (immutable by default, mutable if specified) to let the programmer dictate a specific range of allowed values for an integer. If I know a value is only valid between 0 and 1000, the sooner I declare that, the better the chances of catching bugs, off-by-one errors, and more…
Doesn’t Rust have enough support for operator overloading to be able to build a reasonable RangedNumber library without needing explicit language support? That seems to have been enough for C++ users over the years. I suppose it might be nice to have range checking for the builtin types available in debug builds, but I’d be very wary of putting a range constraint in code with no guarantee that it’d be enforced in any way.
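A minimal sketch of what such a library-only approach might look like, using operator overloading with runtime checks. The `Ranged` type and its API are hypothetical, not an existing crate:

```rust
use std::ops::Add;

// Hypothetical ranged integer: the bounds are stored alongside the value
// and re-checked on construction and arithmetic.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Ranged {
    value: i64,
    min: i64,
    max: i64,
}

impl Ranged {
    // Returns None if the value falls outside [min, max].
    fn new(value: i64, min: i64, max: i64) -> Option<Ranged> {
        if value >= min && value <= max {
            Some(Ranged { value, min, max })
        } else {
            None
        }
    }
}

impl Add for Ranged {
    type Output = Option<Ranged>;

    // Addition intersects the two operands' bounds and re-checks the sum,
    // so an out-of-range result surfaces as None instead of a silent bug.
    fn add(self, other: Ranged) -> Option<Ranged> {
        let min = self.min.max(other.min);
        let max = self.max.min(other.max);
        Ranged::new(self.value + other.value, min, max)
    }
}

fn main() {
    let a = Ranged::new(400, 0, 1000).unwrap();
    let b = Ranged::new(500, 0, 1000).unwrap();
    assert!((a + b).is_some()); // 900 lies within 0..=1000
    let c = Ranged::new(700, 0, 1000).unwrap();
    assert!((a + c).is_none()); // 1100 exceeds the range
}
```

Because the bounds are ordinary runtime data here, nothing stops the check from being compiled only into debug builds, but nothing guarantees enforcement either, which is exactly the concern above.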
Twenty years ago when I did some work in Ada for 68020s, it was general practice to put range constraints on everything that moved. I don’t remember anybody ever questioning it, even in a fairly performance-sensitive environment, but I was very wet behind the ears at the time and don’t know how many real bugs this caught in practice.
Not if you want to encode the range in the type, since generic arguments can’t be values. (Well, not without some nasty hack to encode a number into a type. When constants as associated items are implemented, it won’t be as nasty, though…)
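For illustration, one version of that "nasty hack" is a Peano-style encoding, where unit structs stand in for numbers at the type level (the approach that crates like `typenum` later refined). The names here are made up for the sketch:

```rust
use std::marker::PhantomData;

// Type-level naturals: Zero, Succ<Zero>, Succ<Succ<Zero>>, ...
struct Zero;
struct Succ<N>(PhantomData<N>);

// Recover the runtime value that a type-level number encodes.
trait ToUsize {
    fn to_usize() -> usize;
}

impl ToUsize for Zero {
    fn to_usize() -> usize { 0 }
}

impl<N: ToUsize> ToUsize for Succ<N> {
    fn to_usize() -> usize { 1 + N::to_usize() }
}

// A value whose upper bound lives in the type parameter, so two
// Bounded values with different limits are different types.
struct Bounded<Max: ToUsize> {
    value: usize,
    _max: PhantomData<Max>,
}

impl<Max: ToUsize> Bounded<Max> {
    fn new(value: usize) -> Option<Self> {
        if value <= Max::to_usize() {
            Some(Bounded { value, _max: PhantomData })
        } else {
            None
        }
    }

    fn get(&self) -> usize {
        self.value
    }
}

fn main() {
    type Three = Succ<Succ<Succ<Zero>>>;
    let ok = Bounded::<Three>::new(3).unwrap();
    assert_eq!(ok.get(), 3);
    assert!(Bounded::<Three>::new(4).is_none()); // exceeds the type-level bound
}
```

The check itself still runs at runtime; only the bound is carried in the type, which shows why this counts as a hack rather than real language support.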
If bounds checking were performed at runtime, it wouldn’t be the Rust way (zero-cost abstraction). Doing it at compile time, on the other hand, requires dependent-type support, and the compiler would have to prove type safety by showing that no usage can ever access a value out of bounds. Type checking in such a type system may be undecidable, because you cannot even prove the equality of two types if they depend on runtime values (what return type would a function that returns the list of words it read from stdin have?).
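To make the runtime side of that trade-off concrete, here is a minimal sketch of what a checked narrowing looks like when the compiler cannot prove the bound: every conversion executes a comparison, which is precisely the cost a zero-cost abstraction tries to avoid. The function name is illustrative:

```rust
// Runtime-checked narrowing into the range 0..=100. Since the compiler
// cannot prove x is in range, the comparison must run on every call.
fn to_percent(x: i64) -> Result<i64, String> {
    if (0..=100).contains(&x) {
        Ok(x)
    } else {
        Err(format!("{} is outside 0..=100", x))
    }
}

fn main() {
    assert_eq!(to_percent(42), Ok(42));
    assert!(to_percent(200).is_err());
}
```

A dependently typed compiler would instead demand a proof at each call site that the argument is in range, eliminating the runtime branch at the price of the decidability problems described above.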