Constifying the std/core library


Now that conditions and loops are allowed in constant functions, would it make sense to mark as much of the standard library as possible as const? What drawbacks would that have?


Marking APIs as const is a commitment to keep them const-compatible forever, which may limit implementation flexibility. So caution is probably warranted, but with thorough evaluation and consideration, things should move to const as the need for it is demonstrated (via lightweight proposals/PRs, MFC, and/or RFCs).


Thanks! I was thinking more about technical drawbacks, like significantly increased compile times or something, but I guess that's a valid point as well.

One thing I've noticed with const constructors (some of which are already stable) is that pointer equality can lead to awkward behavior, such as the following assertions. This won't affect existing comparisons (i.e. using static or local bindings, these compare as you would expect), but we're potentially introducing new, weird const pointer comparisons.

use std::mem::MaybeUninit;

const C1: MaybeUninit<bool> = MaybeUninit::<bool>::uninit();
const C2: MaybeUninit<bool> = MaybeUninit::<bool>::uninit();

const CT: MaybeUninit<bool> = MaybeUninit::new(true);
const CT2: MaybeUninit<bool> = MaybeUninit::new(true);
const CF: MaybeUninit<bool> = MaybeUninit::new(false);

fn main() {
    assert!(std::ptr::eq(C1.as_ptr(), C2.as_ptr()));
    assert!(std::ptr::eq(CT.as_ptr(), CT2.as_ptr()));
    // This one is especially peculiar.
    assert!(std::ptr::eq(C1.as_ptr(), CF.as_ptr()));
}

Not really, because undef (in LLVM) is "could be anything", so one possible value it could be is false. Given that it's a Freeze type, this is no weirder than any other const-promotion value getting merged with something else (like &0 and &0 having the same address).

Well, I don't really know what to say, except: why not then assert true, or true and false? It's the "in LLVM" part that mostly bothers me, since this is entirely safe code, and I haven't found an explanation for this observable equivalence other than "in LLVM".

So, I get worried when "because undef" starts being an explanation for the observable behavior of safe code. Anyhow, it's definitely a corner case, and I'm certain I'm in the minority in being concerned about it, so I'll leave it at that.


Compile times wouldn't increase, because marking a function as const does not affect how already-working code is compiled. It only makes the function usable in new places, such as when defining a new constant; it does not affect how the function is compiled otherwise.

For example

const fn foo(a: i32) -> i32 { /* ... */ }
// ...
const C: i32 = foo(1);
let x = foo(5);

Here foo can be used to define C because it is const. But the const does not affect x in any way. That is, if the function was evaluated at compile time as an optimization without const, it will still be evaluated at compile time with const; and if it was not evaluated at compile time without const, it still will not be with const. To force compile-time evaluation you would need something like

const X: i32 = foo(5);
let x = X;
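Putting the snippets above together, here is a runnable sketch. The body of foo is hypothetical (the original leaves it elided); what matters is that the same const fn serves both the compile-time definition of C and the ordinary runtime call.

```rust
// Hypothetical body; the original example elides it.
const fn foo(a: i32) -> i32 {
    a * 10
}

// Forced to evaluate at compile time, because it defines a const item.
const C: i32 = foo(1);

fn main() {
    // An ordinary call: it may or may not be const-folded as an
    // optimization, exactly as if `foo` were not const.
    let x = foo(5);
    assert_eq!(C, 10);
    assert_eq!(x, 50);
}
```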

RFC 2920 is there to provide a way to force compile time evaluation, but that would have to be explicitly forced and is not automatic.
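As a sketch of what RFC 2920 provides: inline const blocks, written `const { ... }`, force the expression inside to be evaluated at compile time, without needing a named const item. (The block body here is a hypothetical const fn for illustration.)

```rust
// Hypothetical const fn used only to demonstrate the syntax.
const fn foo(a: i32) -> i32 {
    a + 1
}

fn main() {
    // `const { ... }` (RFC 2920) is evaluated at compile time
    // by construction, not merely as an optimization.
    let x = const { foo(5) };
    assert_eq!(x, 6);
}
```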


Oh my, so I had a totally wrong assumption here about compile-time evaluation. Thanks a lot!


A blocker for some constification right now is optimized implementations. I had a quick look at constifying str::from_utf8, and it looks like all the features needed to get it working exist now, except that the validation uses pointer tricks for speed which are not yet supported in const context. (The first blocker I noticed was pointer alignment, which may be "supported" in the future since it's just a hint, but there may be other blockers hiding behind that error.)

Being able to write the "naive" implementation to run in const-context and unsafely assert that this optimized implementation is equivalent and should be used at runtime would be very useful.
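To illustrate what a "naive" const-compatible implementation looks like, here is a minimal sketch of a validation routine written in the style const context requires: a `while` loop and plain indexing, with none of the pointer tricks an optimized runtime path would use. The function name and logic are illustrative, not std's actual code.

```rust
// A naive, const-compatible check that a byte slice is pure ASCII.
// `while` with an index is used because iterator-based `for` loops
// are not usable in const fn.
const fn is_ascii_naive(bytes: &[u8]) -> bool {
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] > 0x7F {
            return false;
        }
        i += 1;
    }
    true
}

// Usable at compile time...
const OK: bool = is_ascii_naive(b"hello");

fn main() {
    // ...and at runtime.
    assert!(OK);
    assert!(!is_ascii_naive(&[0xFF]));
}
```

The idea in the post above is that std could keep such a naive body for const evaluation while unsafely asserting that the optimized pointer-based implementation is equivalent and should be used at runtime.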

(There is some prior discussion of this in rust-lang/rust somewhere, but with a quick search I couldn't find it, iirc @oli-obk was suggesting a macro-like unsafe { if_const!(const_impl, runtime_impl) })


The closest thing I'm aware of is the discussion of "unconst" operations. I feel like I've also seen a const-if or if-const proposal somewhere, but I can't find that now either.

Is this entirely the case? Isn't it true, at least in theory, that a const function could be optimized to a greater degree (or more easily) than a non-const function, since a const function is guaranteed not to have side effects or rely on extraneous state?

Currently the compiler doesn't exploit that at all. You could join the related discussion.

Thanks for the link. I'll definitely read on it.

That is why I said, "at least in theory," meaning that even if the compiler currently didn't take advantage of it, there was the possibility of making the compiler exploit it for optimization purposes.

Almost everything is possible. :wink: But the fact remains that it currently doesn't happen, neither in Rust nor in C++. Any discussion about its consequences is entirely hypothetical and should be treated as such. It is entirely unclear if it would be a good idea -- const functions can still fail to evaluate (panic, loop forever), so it's not like we can just make the optimizer run them all blindly. If we have an analysis that determines which of them can be run, then we might as well use that analysis on all functions.

@ratmice that is an interesting case. Pointer equality is indeed a very tricky subject, and I am afraid the ship has long sailed on saying anything "simple" about it. There's a whole series of open questions related to it; I collected them in this issue.

The explanation for the behavior you noticed is that consts do not have a stable address, so the compiler is allowed to merge multiple constants into one as it sees fit. We don't need to mention LLVM for this explanation. I was not aware that it would even merge undef with 0; I think this is an artifact of how we compile undef to LLVM (AFAIK we fill it with 0s), but it is certainly consistent with the general principle of consts not having a stable address.


Thanks, this at least sets me at some ease: the behavior comes from how rustc compiles undef to LLVM, and the compiler is in control of the matter. It is not just some LLVM behavior that could change whimsically without considering the invariants of the trait Eq.

The rest of my concerns should be covered by note to self: don't store secrets in consts.

You can already see similar situations using Cell::new() in a const position.

use std::cell::Cell;

const C: Cell<bool> = Cell::new(false);

fn main() {
  assert_eq!(false, C.get());
}


A const item will actually produce a new object every time you mention it. In the Cell's case, that means mutation gets silently thrown away. In pointers' case, that means C1.as_ptr() produces a dangling raw pointer.
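A short runnable sketch of that "new object every time you mention it" behavior: writing through a const item mutates a fresh temporary copy, so the write is silently thrown away.

```rust
use std::cell::Cell;

const C: Cell<bool> = Cell::new(false);

fn main() {
    // Each mention of `C` materializes a brand-new Cell, so this
    // `set` operates on a temporary that is immediately discarded.
    C.set(true);
    // This mention is yet another fresh copy: still false.
    assert_eq!(C.get(), false);
}
```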


That's a more general problem of const vs static that should be solved via good documentation and compiler warnings; I don't think it's really related to const fn.
