About const expressions in types or why we don't need `const_evaluatable_checked`

I am not :smiley: I'm on the compiler team. I am part of the const eval working group though, which reports directly to the lang team, so I guess that's splitting hairs on my end.

I am not trying to shoot down the discussion or force my opinion on everyone. I am indeed unhappy that there is such a strong push for duck-typing-style const generics, but that's beside the point, as those are just my personal feelings. With associated types this has not been much of a problem yet, because people mostly use them to convey values, not computations. With const generics, as the discussion shows, people want to use them for computations.

I want us to find a good solution that satisfies everyone, but I am very conservative here (lol), because of the already-mentioned problems we've had in the past with post-monomorphization errors happening unexpectedly. I really do want to be able to build convenient and non-spammy APIs; I have suffered from overly verbose typenum declarations myself.
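For contrast, here is a small sketch (my own illustration, not from the thread) of the kind of signature const generics make terse; a typenum-style encoding would have to thread the length through trait bounds and wrapper array types instead of a plain `const N: usize`:

```rust
// Illustration (my own, not from the thread): the array length is a plain
// const parameter, where a typenum-based API would need trait machinery
// and a generic array wrapper to express the same thing.
fn dot<const N: usize>(a: [f32; N], b: [f32; N]) -> f32 {
    let mut sum = 0.0;
    for i in 0..N {
        sum += a[i] * b[i];
    }
    sum
}

fn main() {
    println!("{}", dot([1.0, 2.0], [3.0, 4.0])); // prints 11
}
```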

What we can do in a future edition is default const fns to nopanic and require panic annotations to swap the rules, but beyond that I don't think we can make a change like you are suggesting. We have the same problem with adding a new ? bound: someone's real code out there will break.
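A sketch of what that edition change could look like (every keyword here is hypothetical; none of this is accepted Rust):

```rust
// Hypothetical future-edition rules, for illustration only:
const fn double(n: u32) -> u32 { n.wrapping_mul(2) } // implicitly nopanic

const panic fn index(s: &[u8], n: usize) -> u8 {     // explicit opt back in
    s[n]                                             // may panic (OOB)
}
```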

What I would like is for us to implement all the experimental nightly things with `const_evaluatable_unchecked` (as you can implement everything with that), then figure out which recurring things are problematic and start collecting the various cases.

If it is just that the array hack is annoying and you understandably don't want to create a `struct Foo<const X: usize>;` just to use in bounds, we can start bikeshedding a syntax for such bounds sooner rather than later.
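For reference, a stable-Rust sketch of that marker-struct workaround (all names here are mine): the "bound" is an associated const whose evaluation panics, surfacing as a post-monomorphization error when the condition is violated:

```rust
// Sketch of the workaround (names are my own): the bound lives in an
// associated const that panics during const evaluation when violated.
struct Foo<const X: usize>;

impl<const X: usize> Foo<X> {
    const NONZERO: () = assert!(X > 0, "X must be nonzero");
}

fn head<const N: usize>(arr: [u8; N]) -> u8 {
    let () = Foo::<N>::NONZERO; // forces the check for this `N`
    arr[0]
}

fn main() {
    println!("{}", head([7, 8, 9])); // prints 7; `head([])` would fail to compile
}
```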

Shouldn't the min_const_generics stabilization be postponed then? (Note that I write this as someone who wants stable const generics really badly.)

Yes; as written in the PR, we can already write code on stable which causes post-monomorphization errors, e.g. like this:

trait Foo<T> {
    const FOO: usize = foo(std::mem::size_of::<T>());
}

const fn foo(n: usize) -> usize {
    let s = &[42]; // a slice of length 1
    s[n] // out of bounds, and thus a const-eval error, for any `n >= 1`
}

struct Bar;

impl Foo<u8> for Bar {}

@lcnr wrote:

Doesn't it essentially boil down to "we already have a hole, let's make it slightly bigger", an approach you stand against? And to me the following design choice:

looks quite arbitrary from the language design point of view (I understand there are practical reasons for introducing such a distinction, but I don't think they should be a deciding factor).

It will be much easier to write code which causes const expressions to panic using const params than using type params. Hopefully in practice no one uses potentially panicking const expressions that depend on type params, so it will be possible to introduce a deny-by-default compiler lint in a future edition, which will disallow uses of non-nopanic const expressions dependent on type params, and maybe eventually make it a hard error.

To summarize: I want Rust to be a consistent language, so I would prefer that we either go the full duck-typing route, or patch the existing holes as much as possible without widening them further and develop a more reliable system from there.

I will try to prepare a nopanic pre-RFC, but I don't think I will be able to do it sooner than mid-January.

I think that would be a really bad solution, considering that nopanic will be quite useful outside of const fns. It would be really strange for const fns to be nopanic by default with a panic opt-out, while ordinary functions are panicking by default.

It's the other way around -- maypanic would be an effect.

Ruling out panics would not just rule out arithmetic with the default syntax, it would also rule out array/slice accesses. And it's not just panics, there are other ways in which evaluating a const fn can fail once a few more already-implemented features are stabilized -- namely, unsafe operations. Once we have union field accesses, raw ptr deref, or transmute available in const fn, panics are the least of our problems. (And note that some of these are already stable in const items, so I guess this can already affect const generics.)
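As a concrete stable instance of that last parenthetical (my example, assuming only current const-eval rules): a transmute in a const item already runs unsafe code at compile time, and an invalid one would surface as a compile-time evaluation error rather than a runtime failure:

```rust
// Sketch: unsafe operations already run during const evaluation of items
// on stable Rust; if this transmute were invalid, the program would fail
// to compile instead of failing at runtime.
const ONE_BITS: u32 = unsafe { std::mem::transmute::<f32, u32>(1.0) };

fn main() {
    println!("{:#x}", ONE_BITS); // 1.0f32 has bit pattern 0x3f800000
}
```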

Some notes:

Unless parameter1 depends on parameter2, which is some form of value-dependent typing (okay, some form of value-dependent valuing).

If we only allow constraining single runtime parameters with constants, it wouldn't be left as it is: sooner or later extension RFCs would bubble up to allow full dependent typing. This is always the problem with features.

It would require a bit more than that. Nondeterminism is allowed in runtime contexts, requiring implicit synchronization for `Arc` objects such that the constraint holds for the whole block.

The other point is transferring local type information into a function signature, which I believe is typestate (flow typing). I've heard that this was rejected by the Rust team some time ago, though I don't know for sure.

Further, many traits are written without such constraints; adding them after the fact would cause some trait implementations to stop compiling, unless they could easily be verified to be correct.

Yes, but it just means that such requirements will be explicitly visible in the function signature, for example:

const DATA: [u32; 8] = [0; 8];

fn foo(i: usize) -> u32 require i < 8 { DATA[i] }
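For comparison, the closest spelling available today (my sketch) pushes the same requirement into a runtime check instead of the signature, so callers only find out when the function is called:

```rust
// Today's equivalent of the hypothetical `require i < 8` bound (my sketch):
// the invariant lives in an assert rather than in the signature.
const DATA: [u32; 8] = [0; 8];

fn foo(i: usize) -> u32 {
    assert!(i < 8, "requirement `i < 8` violated");
    DATA[i]
}

fn main() {
    println!("{}", foo(3)); // prints 0
}
```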

Does this mean that executing a const fn with incorrect unsafe code in a const context will cause a compilation error? I don't think such operations will be in high demand together with const generics, so it could be reasonable to initially simply forbid const fns containing unsafe code in this context. Later the system can be extended with a nopanic requirement which covers a whole function rather than a more specific condition.

Let's say that functions containing unsafe code cannot be marked nopanic, and the compiler is extended with a `nopanic expr` keyword, which returns true if execution of the input expression completes without panics (for CTFE, unsafe errors are treated as panics) and false otherwise (similar functionality is briefly mentioned in the OP). At runtime this keyword would probably not be useful, so we can limit it to const contexts. It could be used in the following fashion:

// `foo1` contains unsafe code and can not be marked `nopanic`
const fn foo1(n: usize) -> usize { .. }

// `foo2` uses `foo1`
const fn foo2(n: usize, m: usize) -> usize
require
    n < usize::MAX/2,
    nopanic foo1(m),
{
    foo1(m).wrapping_add(2*n)
}

fn bar1<const N: usize>(v: [u8; N]) -> [u8; foo1(N)]
require nopanic foo1(N)
{ .. }

// the `n < usize::MAX/2` bound can be omitted
fn bar2<const N: usize>(v: [u8; N]) -> [u8; foo2(1, N)]
require nopanic foo1(N)
{ .. }
// or alternatively a coarser bound can be used
fn bar2<const N: usize>(v: [u8; N]) -> [u8; foo2(1, N)]
require nopanic foo2(1, N)
{ .. }

There is yet another way to write this constraint, as a newtype/safety invariant.

struct IsPrime(usize);

impl IsPrime {
    pub fn new(a: usize) -> Option<Self> { … }
}

fn foo(a: IsPrime) -> usize { … }

Lifting this into a const generic parameter is straightforward once non-primitive types are allowed. It requires the initial evaluator to unwrap the IsPrime::new constructor; the actual const fn calls can then funnel the instance through without calling the fallible constructor. This is much like the saying: parse, don't validate.
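A hypothetical sketch of that lifting (none of this compiles today; it assumes non-primitive const parameters in the spirit of the nightly `adt_const_params` work, and the function name is made up):

```rust
// Hypothetical: `IsPrime` as a const parameter. The invariant is established
// once, when the caller constructs the `IsPrime` value, and every function
// downstream can rely on it without re-validating.
fn modular_inverse<const P: IsPrime>(a: usize) -> usize {
    // safe to assume P is prime here; no fallible constructor call needed
    ..
}
```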

Only nopanic as a visible attribute of functions would then be necessary. For instances where some intermediate nopanic function actually needs to call a fallible constructor (knowing it won't fail), we might use unreachable_unchecked or some other unsafe method to transfer that assertion to the compiler. That also (accurately?) models the difference between the failure modes: evaluating to ! in a nopanic method is unsound, and unsafe would be a proper escape hatch.

nopanic fn trust_me_primes_are_nonzero(a: IsPrime) -> NonZeroUsize {
    // Safety: 0 is not a prime.
    unsafe { NonZeroUsize::new_unchecked(a.0) }
}

That is the plan, yes.

You cannot tell from the outside if a function contains unsafe, so I do not think this will work well.