Here I'm implementing the Distribution<T> trait for an enum, so that I can later call rand::random::<CellContent>(). The issue is that the compiler isn't aware that any_u32_value % 4 can only return 0, 1, 2, or 3.
Is there anything I can do other than uncommenting line 31?
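A simplified sketch of the kind of code I mean (not my exact code; the enum and the rand 0.8 calls here are just for illustration):

```rust
use rand::{
    distributions::{Distribution, Standard},
    Rng,
};

#[derive(Debug)]
enum CellContent {
    Water { treasure: bool },
    Land { treasure: bool },
    Rock,
    Tree,
}

impl Distribution<CellContent> for Standard {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> CellContent {
        // `val` can only ever be 0, 1, 2, or 3 ...
        let val: u32 = rng.next_u32() % 4;
        match val {
            0 => CellContent::Water { treasure: false },
            1 => CellContent::Land { treasure: false },
            2 => CellContent::Rock,
            3 => CellContent::Tree,
            // ... but the exhaustiveness check doesn't know that, so a
            // catch-all arm is required for the code to compile.
            _ => unreachable!(),
        }
    }
}

fn main() {
    println!("{:?}", rand::random::<CellContent>());
}
```

The catch-all arm can never actually run, and that's the kind of arm I'd like to be able to leave out.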
If you are asking whether the compiler could be made to accept your code, this is the right place to ask, but that feature is not easy to implement, and I think it has a low chance of being accepted.
match does a type-level analysis. What you're asking for is a value-level analysis.
Rust generally doesn't do that. That's why while true { … } works differently from loop { … }, for example.
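A small illustration of that difference (the rejected variant is left commented out so the snippet compiles as-is):

```rust
// `loop` tells the compiler, at the type level, that the block can only be
// left via `break` or `return`, so falling off the end is impossible:
fn via_loop() -> u32 {
    loop {
        return 1;
    }
}

// `while true` would require reasoning about the *value* `true` to reach the
// same conclusion, and the compiler doesn't do that, so the equivalent
// function is rejected with a mismatched-types error (as far as the type
// system is concerned, the body may fall through and yield `()`):
//
// fn via_while() -> u32 {
//     while true {
//         return 1;
//     }
// }

fn main() {
    println!("{}", via_loop());
}
```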
I suggest that you rewrite the code to not run into this problem in the first place -- for example, maybe you want [CellContent::Water { treasure: false }, CellContent::Land { treasure: false }, CellContent::Rock, CellContent::Tree].sample(rng) or similar.
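With rand 0.8 that could be spelled with SliceRandom::choose, for example (a sketch only; the Clone derive and the helper function are added for illustration):

```rust
use rand::seq::SliceRandom;
use rand::Rng;

#[derive(Clone, Debug)]
enum CellContent {
    Water { treasure: bool },
    Land { treasure: bool },
    Rock,
    Tree,
}

// Draw uniformly from an explicit list of cells instead of matching on a
// remainder; `choose` only returns `None` for an empty slice.
fn random_cell<R: Rng + ?Sized>(rng: &mut R) -> CellContent {
    [
        CellContent::Water { treasure: false },
        CellContent::Land { treasure: false },
        CellContent::Rock,
        CellContent::Tree,
    ]
    .choose(rng)
    .expect("the list is non-empty")
    .clone()
}

fn main() {
    println!("{:?}", random_cell(&mut rand::thread_rng()));
}
```

This sidesteps the exhaustiveness question entirely, because there is no match on an integer at all.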
Agreed, this would be amazing, but it probably stands on the shoulders of a lot of other changes and would be harder to accomplish as a goal on its own. I feel strongly that this code should compile at some point in Rust's future.
The question does raise some interesting questions for hypothetical pattern types in Rust. (Previously discussed in much more detail, at least in Thoughts on pattern types and subtyping.)
Suppose we had the syntax T @ pattern to denote the subtype of T matching the pattern.
Then with let val: u32 @ 0..4 = ..., that match should probably be accepted. (Since what would be the point otherwise.)
Next, should a literal 4_u32 have the minimal subtype it can, u32 @ 4? That would make sense since literals are also accepted where a constant is required. With the previous point, match 4 { 4 => println!("Four!") } would now compile.
Would inferred types always choose the minimal subtype as well? So that, if % had the signature fn rem(u32, u32 @ N) -> u32 @ 0..N (for every N), let val = n % 4; would result in the type u32 @ 0..4. Interestingly, the original example, as written with let val: u32 = ... % 4, could still receive the error because it explicitly opts into the full range of the type.
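Putting those pieces together, the original example might read like this under such a scheme (purely hypothetical syntax; nothing below is valid Rust today, it only sketches how the rules above would compose):

```rust
// Hypothetical pattern-type syntax, kept entirely in comments because it
// does not compile with any current version of Rust.
//
// fn sample(rng: &mut impl Rng) -> CellContent {
//     // `%` with the literal 4 (of type `u32 @ 4`) would yield `u32 @ 0..4`:
//     let val = rng.next_u32() % 4;
//     match val {
//         0 => CellContent::Water { treasure: false },
//         1 => CellContent::Land { treasure: false },
//         2 => CellContent::Rock,
//         3 => CellContent::Tree,
//         // no `_` arm needed: the type `u32 @ 0..4` already rules out
//         // every other value, so the match is exhaustive.
//     }
// }
//
// Annotating `let val: u32 = rng.next_u32() % 4;` instead would widen the
// type back to plain `u32` and reintroduce the exhaustiveness error.
```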
I immediately retracted my comment because I realised that the divisor isn't (necessarily) known until runtime, so it cannot determine a compile-time type. Of course, in the example the divisor is a constant so such type refinement is feasible...
To be fair, there has been discussion of flow typing, which this is. I recall someone (I believe on here) saying that someone was playing around with it recently.
oli-obk has a prototype of pattern-restricted types, but only as an improvement to the treatment of the internal #[rustc_layout_scalar_valid_range_start] / #[rustc_layout_scalar_valid_range_end] type attributes. I don't think anyone has actually investigated any sort of flow/refinement typing.