Regarding iterators like array_chunks and array_windows:
A window of size 0 isn't allowed, so I've suggested verifying this at compile time. Bastian Kauschke has tried (with some messy code) without much success: it works, but it seems to cause problems with type inference.
So, is it a good idea to allow where-conditions on const generic parameters to restrict the allowed value ranges?
pub fn array_chunks<const N: usize>(&self) -> ArrayChunks<'_, T, N>
where N > 0 {
It's allowed in the D language (where the feature is more general):
void foo(uint N, T)(T[] data)
if (N > 0) {
//...
}
I guess that in the meantime it should be possible to use static assertions in all places where N is used (and, for types, in new() and similar constructors).
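A rough sketch of that workaround, using an associated const that panics when the bound is violated (the names below are placeholders rather than the real std API, and the error only surfaces at monomorphization time, when the constant is actually used):

struct AssertNonZero<const N: usize>;

impl<const N: usize> AssertNonZero<N> {
    // Evaluating this constant panics when N == 0.
    const OK: () = assert!(N != 0, "chunk size must be non-zero");
}

// Stand-in for the chunking API: instantiating it with N == 0 is rejected
// once this function is monomorphized.
pub fn array_chunks<T, const N: usize>(slice: &[T]) -> std::slice::Chunks<'_, T> {
    let () = AssertNonZero::<N>::OK; // forces evaluation of the assertion
    slice.chunks(N)
}

With this, array_chunks::<u8, 0>(&[]) is rejected with an "evaluation of constant value failed" style error, though only once that instantiation is actually reached.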
My suggestion for this is, after the const generics MVP, to allow the "if" keyword to appear in where clauses, similar to how the "for" keyword is used for HRTBs.
This would clarify that the expression must evaluate to true, and may be necessary to disambiguate the grammar (but maybe not).
The hard part about this, however, is not the syntax chosen, but how the compiler can reason about it correctly.
Implementing it is not such a big problem, since it can already be faked with the currently conceptualized const generics feature:
struct If<const B: bool>;
trait True { }
impl True for If<{true}> { }
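// ...and then, on the item that needs the restriction: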
where If<{N != 0}>: True
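Spelled out on the array_chunks example, that might look like the following (a sketch; it needs const expressions in bounds, which today means a nightly feature gate, roughly generic_const_exprs):

#![feature(generic_const_exprs)] // nightly-only; the gate's name may change

struct If<const B: bool>;
trait True {}
impl True for If<{true}> {}

// The bound rejects N == 0 before the body is ever reached.
fn array_chunks<T, const N: usize>(slice: &[T]) -> std::slice::Chunks<'_, T>
where
    If<{N != 0}>: True,
{
    slice.chunks(N)
}

A call like array_chunks::<u8, 0>(&[]) then fails to type-check, because If<{false}> does not implement True.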
Implementation isn't really the problem, and it's clear that there's a lot of user demand. However, we need to think carefully about the implications of introducing syntax for this and endorsing it as a pattern. It seems like it could expose backwards-compatibility hazards, for example: changing the return value of a function is not usually seen as a breaking change, but it can break downstream users if that function is const and is used as input to the type system. How do we best mitigate that?
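To make that hazard concrete, here is a small hypothetical illustration (all names invented):

// Upstream crate: the value 8 looks like an implementation detail.
pub const fn chunk_size() -> usize {
    8
}

// Downstream crate: the value now participates in the type system.
pub fn fresh_buffer() -> [u8; chunk_size()] {
    [0; chunk_size()]
}

fn main() {
    let buf: [u8; 8] = fresh_buffer();
    assert_eq!(buf.len(), 8);
}

If upstream later changes chunk_size() to return 16 (ordinarily not considered a breaking change), the [u8; 8] annotation, and any other downstream code that named the old length in a type, stops compiling.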
A more limited version would be to allow only some sort of pattern matching, for example:
where N: 1..=usize::MAX
This is less powerful, but it may be harder to introduce breaking changes through this syntax than through full boolean expressions. Though of course, if we allow arbitrary expressions on the left-hand side, it's actually equally powerful:
where {N != 0}: true
I'm also a little concerned about how complex APIs get when people become overly eager to compile-time check all of the things. Even in this example, though the constraint would be simple, I think it's not very important that we check for array chunks of length 0 at compile time.
Right. I think there's an under-explored design space here, of "How do we improve expressiveness and provide complex compile-time guarantees, while keeping post-monomorphization errors rare?"
I think there's a lot to learn from the D language on that front, which picked the "maximum-expressivity" side of the scale, such that avoiding post-monomorphization errors is the responsibility of library writers.
If I had the time, I'd be interested in making a survey of popular template libraries in D (especially variadic templates), the kind of idioms they allow, and how they could be implemented in Rust in a "monomorphization-resilient" way.
As far as I'm aware, there are two disparate things that are desired from where restrictions. If someone has more use cases, please reply, as I'm very interested in hearing more. The use cases I know of are:
Disallowing values which do not make sense, including preventing some kind of out-of-bounds array access, or bubbling the occurrence of such an access up into a compile-time error. One example would be disallowing the z() method from being defined on vectors of length less than three. Effectively, these constraints can be expressed as out-of-bounds array accesses (see the sketch below), and it might simply suffice to improve the unconditional_panic reporting for these cases.
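A sketch of that framing with a made-up fixed-size Vector type (today the bad instantiation only panics at runtime; the suggestion is to report it at compile time, the way unconditional_panic already does for non-generic code):

pub struct Vector<const N: usize>([f64; N]);

impl<const N: usize> Vector<N> {
    // The requirement "N >= 3" is expressed as a plain constant index.
    // For N < 3 this access is always out of bounds; currently that panics
    // at runtime, but it could in principle be reported when, say,
    // Vector::<2>::z is instantiated.
    pub fn z(&self) -> f64 {
        self.0[2]
    }
}

fn main() {
    let v = Vector::<3>([1.0, 2.0, 3.0]);
    assert_eq!(v.z(), 3.0);
}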
Implementing different algorithms for different values of a const parameter. For example, maybe one algorithm is used if X % ALIGNMENT == 0, and otherwise code is added to pad automatically. I'm not sure how this would be implemented, since it's unclear how you would deal with multiple traits as possible candidates; it's unlikely to be possible to determine such overlap a priori. Also, how would this appear in documentation? (A partial workaround is sketched below.)
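One shape this can already take, without any new where-clause machinery, is branching on the const parameter inside a single generic function: the condition is constant within each instantiation, so the optimizer folds the branch away, though you get neither separate impls nor separate documentation out of it. A sketch, with all names invented:

const ALIGNMENT: usize = 8;

// Hypothetical: pick a strategy based on the const parameter X.
fn checksum<const X: usize>(data: &[u8; X]) -> u64 {
    if X % ALIGNMENT == 0 {
        // Aligned case: walk the buffer in ALIGNMENT-sized blocks.
        data.chunks_exact(ALIGNMENT)
            .map(|block| block.iter().map(|&b| b as u64).sum::<u64>())
            .sum::<u64>()
    } else {
        // Unaligned case: byte-at-a-time fallback (standing in for the
        // "automatically add padding" variant).
        data.iter().map(|&b| b as u64).sum::<u64>()
    }
}

fn main() {
    assert_eq!(checksum(&[1u8; 16]), 16);
    assert_eq!(checksum(&[1u8; 5]), 5);
}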