Hi, I haven't been around much lately. Currently `const fn` functions are limited in what they can contain; they can't even contain a `for` loop. But eventually some of those limitations will be removed. So do you see some advantage in annotating functions with `const` even if you never plan to execute them at compile time? Is this going to allow some extra LLVM optimizations? Is this going to help programmers reduce the cognitive burden caused by functions, according to a variant of the Rule of least power (https://en.wikipedia.org/wiki/Rule_of_least_power) that says in programming you should prefer the less powerful construct, e.g. an immutable variable where you don't need to mutate?
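A minimal sketch of the current situation, assuming a reasonably recent compiler (the `sum_to` name is just for illustration): a `for` loop inside a `const fn` is rejected, a plain `while` loop is accepted, and the same function is usable both at compile time and at run time.

```rust
// A `for` loop here would be rejected: `for` desugars to `Iterator`
// trait calls, which cannot run during const evaluation. A `while`
// loop works instead.
const fn sum_to(n: u32) -> u32 {
    let mut total = 0;
    let mut i = 0;
    while i <= n {
        total += i;
        i += 1;
    }
    total
}

// The same function can be evaluated at compile time...
const SUM_10: u32 = sum_to(10);

fn main() {
    // ...or called with a run-time value like any ordinary function.
    let n = std::env::args().count() as u32;
    println!("{} {}", SUM_10, sum_to(n));
}
```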
Generally I'm pro-`const`, but I think it's also good to ask the opposite question: if a function can be implemented as `const`, perhaps there is a better imperative algorithm for the same function.
One might still prefer the `const` variant when calling the function multiple times, in a way that allows the common expressions to be eliminated. Is the imperative version n times better, where n is the number of calls that could be eliminated through memoization?
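As a hedged sketch of what that could look like (the `expensive` helper is hypothetical, with a trivial body standing in for real work): instead of relying on the optimizer to merge repeated calls, the result of a `const fn` can be computed once and reused.

```rust
// Hypothetical `const fn`; a trivial body stands in for something costly.
const fn expensive(n: u64) -> u64 {
    n * n + 1
}

// Evaluated once at compile time, then reused at every call site,
// rather than hoping the compiler eliminates the repeated calls.
const CACHED: u64 = expensive(1_000);

fn main() {
    println!("{}", CACHED + CACHED % 7);
}
```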
It's extremely unclear to me what's being asked here.
A lot of the OP's post sounds like you're wondering why we want `const fn` to exist at all, given we're going to lift so many restrictions they'll end up fairly similar to regular `fn`s. The answer to that is of course that there are a bunch of restrictions we can never lift no matter what, because compile-time and run-time are just different things.
The stuff about cognitive burden seems like a probable "no" but I can only grasp at straws for how to interpret it. As long as we have to teach everyone about regular `fn`s (which AFAIK nobody thinks is going to change), `const fn`s obviously can't make the language easier to learn. They probably don't even represent an interestingly easier to teach subset of the language, given we plan to enable most of the core language constructs, and it's been made clear that `const` will not imply purity or referential transparency or anything like that. AFAIK the major differences between `const fn` and regular `fn` are largely going to be in relatively advanced or specialized features like inline assembly being (obviously) impossible to run at compile-time, or (maybe someday) invoking `const fn`s as part of const generics constraints, or the whole concept of "const safety" and its `unconst` escape hatch.
I find it unfortunate that `const` is being used for two somewhat-different (though related) concepts: `const` as in "unchanging" in declarations, and `const` as in "compile-time computation". For the latter I think the Zig `comptime` attribute would have been a better annotation.
Empirically the answer to this is no; LLVM will do constant propagation and similar optimizations whether the function is marked `const` on the Rust side or not. (I don't think `const` even gets passed to LLVM in any manner!)

Annotating functions as `const` might allow rustc to do constant folding at the MIR level in the future, but it has no impact once we've gotten to LLVM.
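A rough illustration of that point (a sketch; the effect is only visible in the generated code, not in the source): `mul_add` below is not a `const fn`, yet with optimizations enabled LLVM will typically inline it and fold the call with literal arguments down to a constant anyway.

```rust
// Not marked `const`, but with optimizations on, the call below is
// typically constant-folded by LLVM after inlining.
fn mul_add(a: i32, b: i32, c: i32) -> i32 {
    a * b + c
}

fn main() {
    // Usually compiled as if it were `println!("{}", 10)`.
    println!("{}", mul_add(2, 3, 4));
}
```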
There are actually 4 contexts where the keyword `const` is used (a sketch covering all four follows the list):

- `const fn`
- Compile-time constants
- Const generics
- `*const` pointers; this is the only place that has nothing to do with compile-time evaluation
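Here is a sketch showing all four in one place (assuming a current compiler where const generics are stable; the names are just for illustration):

```rust
// 1. `const fn`: callable in const contexts (and at run time).
const fn double(n: usize) -> usize {
    n * 2
}

// 2. A compile-time constant item.
const LEN: usize = double(4);

// 3. Const generics: `N` is a compile-time value parameter.
fn first<const N: usize>(arr: &[u8; N]) -> Option<u8> {
    arr.first().copied()
}

// 4. A `*const` raw pointer: unrelated to compile-time evaluation.
fn as_raw(x: &u8) -> *const u8 {
    x as *const u8
}

fn main() {
    let arr = [0u8; LEN];
    println!("{:?} {:?}", first(&arr), as_raw(&arr[0]));
}
```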
Why not `const fn` for pure functions and `const(impure) fn` for impure ones?
FWIW: the first two of those are what C++ spells `constexpr` (`constinit`/`consteval` notwithstanding...), so there's a bit of familiarity there. C++ doesn't denote non-type template parameters with `const` because, well, they write the equivalent of `<T: type, N: usize>`.

And, of course, C++ calls const pointers `const T*`, for extra etymological confusion. Yay!
Because that's a backwards incompatible change, not to mention orthogonal to this topic. What would regular old functions even mean then?
You shouldn't think of `const fn` as "functions that do compile-time computation", but as "functions that are usable in a const context". There's no requirement that a `const fn` be called at compile time, only that it may be if you're using it to compute a `const` value, which may be done at compile time in the service of optimization.

Is there anything in the language semantics that actually requires `const` items to be computed at compile time?
Well, eventually you'll be able to use `const` values for things like array sizes and const generics, where it clearly has to be compile-time no matter what.
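For instance (a small sketch; the names are made up), an array length is part of the type, so anything feeding into it has to be evaluated at compile time:

```rust
const fn side() -> usize {
    16
}

const SIDE: usize = side();

fn main() {
    // `SIDE * SIDE` is part of the array's type, so it must be
    // evaluated at compile time; there is no run-time fallback here.
    let grid = [0u8; SIDE * SIDE];
    println!("{}", grid.len());
}
```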
For constant items in general, i.e. `const FOO: u32 = 1;`, the wording in the Reference isn't quite guaranteeing this stuff is evaluated at compile-time, but it's so close it might as well be:

> A constant item is a named constant value which is not associated with a specific memory location in the program. Constants are essentially inlined wherever they are used, meaning that they are copied directly into the relevant context when used. References to the same constant are not necessarily guaranteed to refer to the same memory address.

More long-term, I would imagine that any compilation error that occurs while evaluating a `const fn` is one that we're going to want to guarantee really is a compilation error, not a runtime error, which probably effectively requires running as much `const` stuff as possible at compile-time. Though I'm not sure that's explicitly stated anywhere.
Your link is to the Reference for Rust version 1.26.1. If you look at the most recent definition of constant expressions, you can see a more definitive statement:

> Certain forms of expressions, called constant expressions, can be evaluated at compile time. In const contexts, these are the only allowed expressions, and are always evaluated at compile time.
Const contexts are defined as:

> A const context is one of the following:
>
> - Array type length expressions
> - Repeat expression length expressions
> - The initializer of …
No, definitely not. `const fn` (CTFE, compile-time function evaluation) is entirely independent of "constant propagation", which is a MIR and LLVM optimization that tries to opportunistically evaluate code at compile time with the goal of avoiding run-time work.

I don't see any reason why MIR-level const propagation/folding should be affected by `const fn`. Right now, we already do MIR-level const propagation and `const` has no effect on it.
I'm guessing the idea is that somewhere in the optimization logic there has to be a "can we const-propagate this?" check, and then also an "is it even worth trying to figure out if we can const-propagate this?" check, such that there may be cases where it comes out "not worth analyzing" today, but a `const` tomorrow makes it trivially "oh, we already know we can".

Or is rustc already doing "thorough" const-prop analysis, with no heuristic like this going on?
Constant propagation (so far, anyway) happens inside a function. To happen across functions, the callee has to be inlined first. So, right now I do not see any way in which `const fn` could factor into the decision here.

`const fn` could in principle affect the inlining heuristic, but I do not think this would be a good idea. `const fn`s can be big and expensive, and if their arguments are only known at run-time then there is also no guarantee that they actually can be const-propagated.
I don't really understand that; can we see it with a detailed example?
Let's imagine a candidate where the properties of a function `f: (i32, i32) -> i32` would determine whether constant propagation could be applied to something like `f(2, 3)`:

```rust
fn const_propagate() -> i32 {
    f(2, 3)
}
```
You mention inlining, and the thread is about `const fn`, so let's compare:

```rust
#[inline(never)]
const fn f(x: i32, y: i32) -> i32 { ... }
```

vs.

```rust
#[inline(always)]
fn f(x: i32, y: i32) -> i32 { x * y }
```
The way I would imagine it works is that both cases get const-propagated: the latter because it can be inlined, so even LLVM would be able to optimize that, and the former because the function is `const`, so one could have written `const_propagate` as:

```rust
fn const_propagated() -> i32 {
    const RET: i32 = f(2, 3);
    RET
}
```

So I guess the question would be: in the first case, where a function's implementation is `const`-eligible but "not inlineable", if its author fails to mark the function as `const`, couldn't that hinder constant propagation?
Const propagation does not "look through" function definitions, so it will not do anything on `f(2, 3)`, no matter what `f` is.

However, once `f` got inlined (e.g. by the MIR inliner), const propagation can handle the computations that "surfaced" that way.

Sure, we could consider making const-prop "look through" function calls, but that would essentially mean duplicating an inliner, so we'd rather let the inliner do that work.
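A conceptual sketch of that pipeline (ordinary Rust source standing in for MIR; the `g_after_*` functions only illustrate what each pass would leave behind):

```rust
fn f(x: i32, y: i32) -> i32 {
    x * y
}

// Const-prop alone leaves the call untouched: it never looks inside `f`.
fn g() -> i32 {
    f(2, 3)
}

// Once the inliner substitutes `f`'s body, the arithmetic is local to
// the caller...
fn g_after_inlining() -> i32 {
    2 * 3
}

// ...and const-prop can then fold it down to a constant.
fn g_after_const_prop() -> i32 {
    6
}

fn main() {
    assert_eq!(g(), g_after_inlining());
    assert_eq!(g(), g_after_const_prop());
}
```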
The idea behind my "might" is that constant propagation could see a call to a `const fn` with constant arguments, say "hey, I can call that", and do so, extending constant propagation to "see through" uninlined `const fn`s without having to inline all of them. I understand there are potential reasons not to do that, thus my "might".

The risk with that plan is that const-eval might actually fail even for a `const fn`, once internally somewhere it uses `unsafe` (or "unconst") operations.