This post is intended as a continuation of this and earlier discussions between me and @oli-obk. I will try to summarize the points, so you don't have to read them.
Let's start with a simple piece of generic code:

```rust
fn foo<N: Unsigned>(val: Val) -> Bar<N> { .. }
```
Now let's take a look at its const counterpart:

```rust
fn foo<const N: usize>(val: Val) -> [u8; bar(N)] { .. }
```
We can see a very clear symmetry here. Every const value can be viewed as a type "bounded" by the `usize` "trait". `Bar<N>` and `[u8; bar(N)]` both represent a compile-time computation which returns a type. Both of those computations are Turing-complete and can result in types which are too big to be representable on the target architecture.

For practical reasons Rust does not have mandatory bounds for preventing undecidability or "too big" errors in type-level computations. The same applies to CTFE: in both cases the compiler will simply abort compilation in pathological cases.
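As an illustration of the "abort in pathological cases" behavior, here is a minimal sketch (the function name is made up, and the exact error text varies between compiler versions) of a type-level computation that the compiler cuts off with a recursion limit rather than a mandatory bound:

```rust
// Each recursive call instantiates `rec` at an ever-deeper tuple type;
// the compiler eventually aborts with a "reached the recursion limit
// while instantiating" error instead of demanding an up-front bound.
fn rec<T>(t: T) {
    rec((t,));
}

fn main() {
    rec(0u8);
}
```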
But there is one big difference between type-level computations and CTFE: the ability to panic. Imagine `bar` defined like this:

```rust
const fn bar(n: usize) -> usize {
    assert!(cake(n));
    n
}
```
We could imagine a rough type-level analog of such a computation:

```rust
struct Bar<T: Cake>(T);

fn foo<N: Unsigned>(val: Val) -> Bar<N> where N: Cake { .. }
```
Here, instead of the "runtime" check, we use a declarative bound. Currently we don't have a way to specify that `bar(N)` will work for any `N` that satisfies `cake(N)`, so we need to bubble up the `bar(N)` using a `where [u8; bar(N)]: Sized` declaration.
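To make the bubbling concrete, here is a sketch of a generic caller of the const version of `foo` (the wrapper name is made up, and the syntax follows the feature-gated `const_evaluatable_checked` style): every generic caller has to restate the bound, so it propagates through the whole call graph.

```rust
// Any function that is generic over `N` and mentions `[u8; bar(N)]`
// must repeat the bound for the signature to be accepted:
fn use_foo<const N: usize>(val: Val) -> [u8; bar(N)]
where
    [u8; bar(N)]: Sized,
{
    foo::<N>(val)
}
```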
Note that panics cover the overflow and underflow errors which are often used to construct motivating examples. Right now we can not use explicit panics in `const fn`s, but AFAIK this restriction will be lifted eventually.
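For instance, here is a minimal sketch of how an underflow inside a hypothetical `bar` already surfaces as a compile-time error (the exact diagnostic wording depends on the compiler version):

```rust
const fn bar(n: usize) -> usize {
    n - 2 // underflows for n < 2, which is a panic during const evaluation
}

const OK: usize = bar(10); // fine
const ERR: usize = bar(1); // error: attempt to compute `1 - 2`, which would overflow
```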
To summarize: panics are the only difference between type-level computations and const expressions in types, and thus the only motivation for having the `const_evaluatable_checked` feature. For example, `n.wrapping_sub(2)` and `n / 8` are completely valid definitions of `bar` for which the bubbled bound would be redundant (though note that `n / 8` usually implies that `n` is a multiple of 8, so reliable code should assert `n % 8 == 0`).
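Spelled out (the function names here are illustrative), these definitions have no panic path at all, so a bubbled `where [u8; bar(N)]: Sized` bound on them carries no information:

```rust
// `wrapping_sub` is defined for every input, so no panic is possible.
const fn bar_wrapping(n: usize) -> usize { n.wrapping_sub(2) }

// Division by the non-zero constant 8 can neither divide by zero
// nor overflow, so this cannot panic either.
const fn bar_div(n: usize) -> usize { n / 8 }

// The defensive variant suggested above does reintroduce a panic path
// (using the explicit-panics-in-`const fn` capability discussed earlier):
const fn bar_div_checked(n: usize) -> usize {
    assert!(n % 8 == 0);
    n / 8
}
```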
Now one may argue that `const_evaluatable_checked` is a good thing because it makes explicit that a `const fn` may fail, and thus improves code reliability. But in my opinion it's not consistent with how `const fn`s are handled in other places. For example:

```rust
// no need for a bound on `bar(N)`
trait Foo<const N: usize> {
    const FOO: usize = bar(N);
}

// the bound is mandatory
trait Foo<const N: usize>
where
    [u8; bar(N)]: Sized,
{
    fn foo() -> [u8; bar(N)];
}
```
Both versions will result in the same compilation error if we pass an "invalid" `N` which causes `bar` to panic (there are reasons to track such bounds for diagnostics, but the compiler can do that implicitly). So to be consistent we should either require mandatory bounds in both versions or in neither. IIUC the former option is problematic due to the backward compatibility guarantees.
While I fully support efforts to make code more reliable, in this case I think the `const_evaluatable_checked` approach is inconsistent with the rest of the language, and that in practice it will cause far more friction than the reliability improvements justify.
How can we improve this situation? As you can see, the main problem lies in panicking `const fn`s, so the ideal solution would be to introduce a `nopanic` property (I am not sure if it's correct to call it an "effect"), somewhat similar to the C++ `noexcept` specifier:

```rust
const nopanic fn bar(n: usize) -> usize { n.wrapping_mul(10) }

// compilation error: `bar` may panic
const nopanic fn bar(n: usize) -> usize { 10*n }
```

(The difference: `10*n` can overflow, and the overflow check lowers to a panic, while `wrapping_mul` is defined for every input.)
This property could be quite useful for non-const code as well, e.g. in high-reliability software. Currently we have to use quite restricted hacks like `no-panic` or inspect LLVM IR to find places which emit panics (we had to do the latter while working on a cryptographic library). Of course, there are some difficulties to consider: for example, some panics are only eliminated in optimized builds, which means a `nopanic` function could start panicking under a future compiler version with degraded optimization capabilities.
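For reference, here is roughly what the `no-panic` workaround looks like today (a sketch based on the crate's documented attribute; it turns a surviving panic path into a link error, and it only works reliably in optimized builds, which is exactly the fragility mentioned above):

```rust
use no_panic::no_panic;

// The attribute makes the build fail at link time if the optimizer
// was unable to prove this function panic-free.
#[no_panic]
fn mul10(n: usize) -> usize {
    n.wrapping_mul(10) // no panic path, so the check passes
}
```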
Ideally such a feature would also be accompanied by the ability to restrict input arguments (strawman syntax):

```rust
// `cake` returns `bool` and must be `nopanic`
const nopanic fn bar(n: usize) where cake(n) { .. }
const nopanic fn baz(n: usize, m: usize) where n + m != 42 { .. }

// when used in types the bound would have to be bubbled
fn foo() -> [u8; bar(N)] where cake(N) { .. }

// non-`nopanic` expressions in types can be either forbidden
// or covered by a special `nopanic` bound:
fn foo2() -> [u8; f(N)] where f(N): nopanic { .. }

// outside of a `const` context the bounds would be converted to asserts,
// for example this runtime code:
bar(n);
baz(n, m);
// would be implicitly converted to:
assert!(cake(n));
bar(n);
assert!(n + m != 42);
baz(n, m);
```
The `nopanic` bound looks quite similar to the `Sized` bound, but it covers only the affected expression, not the whole type. I also think it will be less misleading, since it clearly states the guaranteed property.
Of course, const expressions outside of types should ideally be treated the same way as expressions in types:

```rust
// will not compile without the bound
trait Foo<const N: usize> where cake(N) {
    const FOO: usize = bar(N);
}

// the bound is not needed because we pass a concrete value
const N: usize = 42;
const FOO: usize = bar(N);
```