pre-RFC `const` function arguments


The real intrinsic takes a const int and the ISA specifies that only the bits that are relevant for the registers involved are taken into account. That is, the higher bits of the mask are ignored, and the higher bits of the destination registers are unmodified.

What should happen if you pass 33? A compile-time error? Maybe even an error issued by the assembler?

Neither. The blendps instruction is generated with the immediate value specified by the user. Even if there were an intrinsic for which a wrong value would result in hardware undefined behavior, which AFAIK is not the case (but I don’t know all intrinsics by heart), that would just mean there is a precondition on the value that the user must uphold. That’s fine, because the intrinsic is unsafe anyway, which means the user is responsible for upholding such preconditions.

I think there will be many applications which benefit from doing the calculation at run time, with an expectation that the compiler will select the appropriate instruction, perhaps one with an immediate mask operand, perhaps not.

That’s what the portable packed-SIMD vector types are for. The ISA intrinsics map to the ISA.


But how do you go from the vector types and their variable mask arguments to a const argument? With a large match statement? This does not seem realistic.

These things simply belong in the compiler, which has ample support for instruction selection.


That’s how the portable packed-SIMD vector types are already implemented.


Let’s take “shuffle” operations as an example of something portable that compilers actually have good support for. At least LLVM does not have support for shuffles with non-constant shuffle indices (i.e. v2[i] = v1[shuf[i]] where shuf is variable). If you want to do that, you have to do it yourself – e.g., with a large match statement, or by storing the to-be-shuffled vector to memory and then doing a gather load, or other workarounds. So I contest this statement. Instruction selection is not magic.


Would this provide support for evolution of a function from using const generics to a normal parameter? e.g. Today, I design

fn foo<T, const N: usize>(t: T) { // or foo<T>(t: T, const N: usize)
    let x: [i32; N] = [0; N];
}

Tomorrow, I see that we need more flexibility (or less bloat) and I change the signature to:

fn foo<T>(t: T, n: usize) {
    let x = vec![0; n];
}

If my callers had all used foo(t, 32) (or I was able to force this), then this only requires a change to this function.

A related question: could this be used for specialisation? For some values of T we use const generics, for others, a normal parameter?


If we proceed with the solution discussed in the pre-RFC, then yes.

If you write this code:

fn foo(const N: usize) { ... } // A
foo(3);  // B

and then change the signature of foo in A to fn foo(n: usize) { ... }, the code at point B will still compile because 3 is a valid argument for both signatures.

Changing the signature is an ABI/API breaking change anyway, though.


The discussion mentions that const function arguments are just “sugar” for generics, but some things that follow from that and have not yet been mentioned (maybe they were just too obvious back then) are that const fn functions taking only const arguments:

  • cannot be called with run-time arguments (all arguments are compile-time constants),
  • cannot be referenced by a function pointer (unless we extend the language to support function pointers to generics),
  • (therefore) might not need any code generation iff we guarantee that const fn called with all const arguments are always evaluated at compile-time (which is something that might make sense doing).


I haven’t gone too deep into reviewing this thread, but has anybody explained why this kind of desugaring couldn’t be done with macros?

That is, couldn’t we make a macro foo!(1, 2, 3) expand to foo<{2}>(1, 3)? I don’t see it listed in the alternatives or mentioned anywhere in this thread…

EDIT: Ok, I see it was mentioned here. Gotta remember that search exists!


(therefore) might not need any code generation iff we guarantee that const fn called with all const arguments are always evaluated at compile-time (which is something that might make sense doing).

To be clear, we don’t have to guarantee this; the compiler is allowed to guarantee this to itself, tho.

Also, side note:

Here’s a paper that proposes to do the same thing for C++.


Personally i feel the proposed syntax is a little strange. I’d imagine some alternative syntax for turbo fish like:

fn foo1<T: Default>() -> T { T::default() }
fn foo2<T: Default>(_: u8) -> T { T::default() }
fn foo3<const N: usize>() -> usize { N + N }
fn foo4<const N: usize>(v: usize) -> usize { v + N }
fn foo5<'a>(s: &'a str) -> String { s.to_string() }

let f1 = foo1(where T = usize);
let f2 = foo2(42, where T = usize);
let f3 = foo3(where N = 42 + 5);
let f4 = foo4(42, where N = 42 + 5);
let f5 = foo5("foo", where 'a = 'static);

and in the original example:

const C: i32 = ...;
let r = _mm_blend_ps(a, b, where imm8 = C);
let r = _mm_blend_ps(a, b, where imm8 = 2 * C);


Personally I think it looks horrible and quite confusing for no apparent reason. I can understand pushing for pure turbofish, or macro solution (though I strongly prefer allowing const imm8: i32 in argument position), but not for this variant.


Mmm, I still mentally treat function arguments as something tuple-like: they’re ordered, with integral keys. I’d like something that keeps generics out of the tuple instead of mixing them into it.


I think that any proposal in this direction is orthogonal to what this issue solves, and should be phrased in the context of named function parameters, since where N and where T are very similar to that, but for kinds.

Without wanting to bikeshed this a lot, it would be nice for the syntax to be consistent with the associated types and associated constant syntax (Iterator<Item=u8>).

i still mentally treat function arguments as something tuple-like.

That breaks down when one starts allowing passing different kinds as argument, e.g., (42, const N: usize = 2) would have to be a “tuple” of an i32 object and a const :roll_eyes: I don’t know how this pre-RFC fits into that model yet, if at all. EDIT: this pre-RFC has nothing to do with this though, see: pre-RFC `const` function arguments


Yeah, that’s the main concern I have with this pre-RFC: it would kill this one-to-one mapping. Function arguments would no longer map directly to the unstable std::ops::Fn::call method, which currently takes such a tuple as its parameter…


Per this pre-RFC const function arguments desugar into generic functions with const-generic parameters, that is, the tuple being passed to std::ops::Fn::call does not contain a const in it.


Going back to the “type deduction” based idea: instead of

fn foo<const N: usize>(_n: N) { … }

which, as @rkruppe noted, creates a sort of semantic confusion:

what about having an explicit ‘lifting’ type? So you would instead write

fn foo<const N: usize>(_n: Const<usize, N>) { … }

where Const is a zero-sized type defined as

struct Const<T, const Val: T>;

The only compiler magic would be an automatic coercion (applied in the same places as existing coercions) from a constant expression of type T to the type Const<T, Val>, substituting in the actual value of the expression. So when calling foo, you wouldn’t have to explicitly construct the Const object, you would just pass the number. As another example, this (useless) code would compile:

const X: usize = 42;
let xconst: Const<usize, 42> = X;
let xconst2: Const<_, _> = X; // works with type inference

To be honest, this approach feels kind of ‘C++-ish’ to me (even though C++ doesn’t actually have an equivalent!), and I don’t know whether it’s ideal. But it would be considerably less magical than const function arguments. In particular, it would preserve the one-to-one mapping between function arguments and fields of the Fn trait tuple.


Reminds me of – which has an implicit conversion to the value of the constant.


That’s basically std::integral_constant, which has an implicit conversion to its value_type (the operator value_type() member function):

auto a = std::integral_constant<int, 3>{};
int b = a;
assert(b == 3);

The lifting @comex suggests goes, however, in the opposite direction (from value_type to integral_constant):

int a = 3;
std::integral_constant b = 3;  // ERROR
static_assert(std::is_same<decltype(b), std::integral_constant<int, 3>>{});

This currently does not work in C++, but it might work in the future. C++17 added a feature called “class template argument deduction” (CTAD) that deduces class template arguments from constructor parameters. Without CTAD one has to specify class template arguments when constructing objects:

std::vector<int> a{10};

but with CTAD one does not (they are deduced from the constructor arguments), that is, the following is valid C++17:

std::vector a{10}; // T deduced to `int`

Currently, CTAD already works for non-type template parameters like int:

std::integral_constant<int, 3> a{};
std::integral_constant b{a}; // deduced to <int, 3>

but what it cannot currently do is deduce a template parameter from a function’s argument value, e.g.,

std::integral_constant c{3}; // error

It is unclear at this point whether that will ever be supported, but @ubsan mentioned on Discord that the authors of the constexpr! proposal might pursue constexpr function arguments in the future - it might be possible to extend the language such that CTAD interacts with those but I don’t know whether this is already being explored right now.


To be honest, this approach feels kind of ‘C++-ish’ to me (even though C++ doesn’t actually have an equivalent!), and I don’t know whether it’s ideal.

I get the same feeling, but it is an interesting alternative worth exploring.


I don’t imagine they’d do it with CTAD, but one could do it with just a function

template <typename T>
constexpr auto make_integral_constant(constexpr T val)
    -> std::integral_constant<T, val> {
  return {};
}