How to handle constants in the compiler

A question from @oli-obk got me to thinking that it is time to have some discussion about how to best handle constants. I wanted to get this started by laying out some ideas.

The state today

Today’s constant infrastructure seems to me to be fraught with problems. Among them:

  1. We have two constant evaluators, the one from typeck and the one from trans.
  2. The trans evaluator is more complete, but is based on LLVM, which is sort of non-portable and also just opaque.
  3. Many places that demand constants, like array types, actually bake in a usize, under the assumption that constants can be reduced at type-checking time to an integer. This is not compatible with associated constants, where we might have to treat a constant a bit more abstractly. (A short example of today's rule follows this list.)
  4. const fn is, afaik, only supported in one evaluator.
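For the usize point above, a small runnable illustration of today's rule: an array length must reduce to a concrete integer while the types are being checked.

const N: usize = 2 + 2;

fn main() {
    let a: [u8; N] = [0; N];     // fine: N reduces to 4 at type-check time
    let b: [u8; 2 * 3] = [0; 6]; // fine: literal arithmetic reduces too
    assert_eq!(a.len() + b.len(), 10);
}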

At a high-level, I’d like to see us:

  1. Have a single constant evaluator that is independent of LLVM.
  2. Have a notion of constant equality that doesn’t require reducing to an actual value. This seems very, very similar to associated type equality and normalization – the primary difference is that we could choose to bake in some understanding of mathematical identities.
  3. Permit arbitrary constants in array lengths; if we want to know whether two types are equal, we would base that off of the equality relation described in the previous bullet.
  4. Support const fn uniformly and smoothly.

Two forks in the road

So as part of the MIR branch I’ve thought this over some. I primarily considered two big approaches:

  1. MIR-FTW: In this version, a constant value is represented as a MIR fn that would do the computation, along with a “tail expression”.
  2. AST-FTW: in this version, a constant value is represented as a kind of simplified AST. This probably winds up feeling something like a little calculus, since we can do various reductions to simplify the AST:
  • Actual math (of course): 2+2 could be replaced with 3. Er, 4.
  • Fn calls: foo(2, 3) could be replaced with the definition of foo.
  • Projections: i32::SIZE might be replaced with 4 (or other associated constants).
  • Identities: we might do something like changing X+0 to X, normalizing the overall form, etc. This is needed only to help the equality relation be a bit more comprehensive. Honestly I am reluctant to go very far in this direction, since it seems like a rathole, but perhaps handling a few cases would be helpful.

The AST-FTW approach feels a bit more natural to me, if a bit less DRY (after all, the rvalue type and constant AST have some amount of overlap). Therefore, that is what I wound up doing. The MIR branch today includes a type Constant<H> which defines the mini-AST (below). As you can see, it’s really not that big. If we wanted, we could probably consolidate this with Rvalue<H> by making one type RvalueTree<H,L> that is parameterized by a type L representing the arguments. Hence today’s Rvalue<H> would be RvalueTree<H,Lvalue> and Constant<H> would be RvalueTree<H,Constant<H>>. But I’m not sure if that’s important. (A small sketch of the reductions in action follows the definitions.)

pub enum ConstantKind<H:HIR> {
    Literal(Literal<H>),
    Aggregate(AggregateKind<H>, Vec<Constant<H>>),
    Call(Box<Constant<H>>, Vec<Constant<H>>),
    Cast(Box<Constant<H>>, H::Ty),
    Repeat(Box<Constant<H>>, Box<Constant<H>>),
    Ref(BorrowKind, Box<Constant<H>>),
    BinaryOp(BinOp, Box<Constant<H>>, Box<Constant<H>>),
    UnaryOp(UnOp, Box<Constant<H>>),
    Projection(Box<ConstantProjection<H>>)
}
                                    
pub type ConstantProjection<H> = Projection<H,Constant<H>>;
                                        
pub enum Literal<H:HIR> {
    Item { def_id: H::DefId, substs: H::Substs },
    Projection { projection: H::Projection },
    Int { bits: IntegralBits, value: i64 },
    Uint { bits: IntegralBits, value: u64 },
    Float { bits: FloatBits, value: f64 },
    Char { c: char },
    Bool { value: bool },
    Bytes { value: H::Bytes },
    String { value: H::InternedString },
}

pub enum IntegralBits {
    B8, B16, B32, B64, BSize
}

pub enum FloatBits {
    F32, F64
}
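To make the reductions listed above concrete, here is a minimal, self-contained sketch over a pared-down, non-generic miniature of the Constant AST (integers and a single binary operation only; Const and reduce are illustrative names, not actual compiler API):

#[derive(Clone, Debug, PartialEq)]
enum Const {
    Int(i64),                    // stands in for Literal::Int
    Add(Box<Const>, Box<Const>), // stands in for BinaryOp(BinOp::Add, ..)
    Item(&'static str),          // an unevaluated path like `Foo::SIZE`
}

fn reduce(c: Const) -> Const {
    match c {
        Const::Add(l, r) => match (reduce(*l), reduce(*r)) {
            // Actual math: 2 + 2 becomes 4 (checked, so overflow stays symbolic).
            (Const::Int(a), Const::Int(b)) => match a.checked_add(b) {
                Some(sum) => Const::Int(sum),
                None => Const::Add(Box::new(Const::Int(a)), Box::new(Const::Int(b))),
            },
            // Identity: X + 0 normalizes to X, making equality a bit more comprehensive.
            (x, Const::Int(0)) | (Const::Int(0), x) => x,
            // Anything else (e.g. a node mentioning Foo::SIZE) stays symbolic.
            (l, r) => Const::Add(Box::new(l), Box::new(r)),
        },
        other => other, // Int and Item are already in normal form
    }
}

fn main() {
    let four = Const::Add(Box::new(Const::Int(2)), Box::new(Const::Int(2)));
    assert_eq!(reduce(four), Const::Int(4));
    let sym = Const::Add(Box::new(Const::Item("Foo::SIZE")), Box::new(Const::Int(0)));
    assert_eq!(reduce(sym), Const::Item("Foo::SIZE"));
}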

When constants are converted

If we have the option of embedding references to constants into types and not eagerly evaluating them, then we can easily translate types in the collect phase without forcing constant evaluation to occur immediately. We then defer the question of whether two types are equal to the point where we are comparing them. I see this as very similar to the lazy normalization that I described in an earlier discuss thread (and which @jroesch has been working on making a reality). Basically, when we find two constants that must be equal, we can try to normalize them then – this may result in an actual integer or float, but it may also yield something like Foo::SIZE (which presumably can’t be evaluated until trans). We can clearly still conclude that Foo::SIZE is equal to Foo::SIZE, and we may be able to use integer where clauses (eventually) to conclude that X::Y and Foo::BAR are equal as well.
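A sketch of this “normalize, then compare” equality relation, reusing the miniature Const and reduce from the earlier sketch (again, illustrative names only):

fn const_eq(a: Const, b: Const) -> bool {
    // Normalize both sides as far as possible, then compare structurally.
    reduce(a) == reduce(b)
}

Under this relation, 2 + 2 equals 4, and Foo::SIZE + 0 equals Foo::SIZE, even though the latter never reduces to an actual integer at this stage.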

Where constants are used

I envision that just as every fn is lowered to a MIR representation in terms of basic blocks and so forth, every const would be lowered to a MIR representation in terms of Constant. When we do constant evaluation, that is basically applying the reductions described earlier. Constants that reference one another would initially be encoded just as an Item reference, and could be inlined as part of reduction (we’ll have to detect cycles, obviously).
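A sketch of that inlining step with a simple cycle check, again on the miniature Const type (defs stands in for the crate’s const items; all names illustrative):

use std::collections::HashMap;

fn inline_item(
    name: &str,
    defs: &HashMap<&'static str, Const>,
    in_progress: &mut Vec<String>, // evaluation stack, to catch e.g. A = B; B = A;
) -> Result<Const, String> {
    if in_progress.iter().any(|n| n == name) {
        return Err(format!("cycle detected while evaluating `{}`", name));
    }
    in_progress.push(name.to_string());
    let body = defs
        .get(name)
        .cloned()
        .ok_or_else(|| format!("no constant named `{}`", name))?;
    // A full implementation would recurse through the whole body, replacing
    // every Item reference it meets; this sketch handles a top-level Item.
    let result = match body {
        Const::Item(other) => inline_item(other, defs, in_progress)?,
        other => other,
    };
    in_progress.pop();
    Ok(result)
}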

Constant propagation on the MIR

I imagine we will implement constant propagation as a pass that walks over the MIR and tracks which lvalues have known values. (Naturally it will have to be careful about overflow.) This can make the code faster of course (or at least faster to compile), but it can also, in some cases, affect the type system. This is because some constants (like []) can be promoted to static memory and hence have their lifetime extended beyond the usual temporary lifetime bounds. Right now we only do this for [], but we’ve talked about extending it to other cases, like closures with no free variables. (Overflow introduces an interesting wrinkle, since something like 2+2 could be eligible, but INT_MAX+1 presumably not? But that’s a problem for the RFC to tackle.)
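A minimal sketch of such a pass, using a toy statement list in place of real MIR (all names illustrative). Note the checked arithmetic: 2 + 2 folds to 4, while an overflowing addition simply leaves the destination unknown.

use std::collections::HashMap;

enum Stmt {
    Assign(&'static str, i64),                     // x = 3
    Add(&'static str, &'static str, &'static str), // x = y + z
}

fn propagate(stmts: &[Stmt]) -> HashMap<&'static str, i64> {
    let mut known = HashMap::new();
    for stmt in stmts {
        match stmt {
            Stmt::Assign(dest, v) => {
                known.insert(*dest, *v);
            }
            Stmt::Add(dest, l, r) => match (known.get(l).copied(), known.get(r).copied()) {
                // Fold only when both operands are known and the sum does not overflow.
                (Some(a), Some(b)) if a.checked_add(b).is_some() => {
                    known.insert(*dest, a + b);
                }
                _ => {
                    known.remove(dest);
                }
            },
        }
    }
    known
}

fn main() {
    let known = propagate(&[
        Stmt::Assign("a", 2),
        Stmt::Assign("b", 2),
        Stmt::Add("c", "a", "b"), // folds to 4
    ]);
    assert_eq!(known["c"], 4);
}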

The fate of check_const

Today we have this pass check_const that walks the AST and identifies what can be constant and what cannot. I see this going away. It lives on, to some extent, as the constant propagation pass I just described, and of course also as a part of the MIR lowering process, which will check that constant expressions fall into the correct subset.


What is HIR doing in the AST constant evaluation? Also, why are there both projection literals and a projection variant in ConstantKind? I would prefer constant evaluation to be a part of ty, conceptually very similar to Ty/TypeVariants, except that naturally almost all constructors will be fictitious constants.

Delayed constant evaluation doesn’t have much to do with lazy normalization - types in impls are stored unnormalized just fine.

Also, something has to be done about the type-checking status of constants. MIR isn’t particularly interesting when dealing with constants - it is primarily a way of reasoning about control-flow, which constants don’t really have.

A Panic literal would be nice too. As I imagine it, it would lower to a panic, while causing a compile-time error when it is used in a typeck constant context (i.e. array length or enum discriminant).

I was very confused by your whole approach until I realized that it’s looking forward to associated/generic constants (so in general constants can’t actually be evaluated until after monomorphization). With Rust’s current model for constants, everything is much simpler: you only need to be able to represent values, not operations.

Type-checking needs access to a constant evaluator for a variety of reasons. The representation of a constant in type-checking and MIR will be essentially identical. Therefore, I would suggest that constants/constant-evaluation really shouldn’t be a part of MIR itself, but rather an independent utility library which can be shared.

There are many complications involved in generalized constants, but the complicated bits have to do with type-checking, not MIR.

Minor detail: I think you need a Deref ConstantKind.

So to summarize your two approaches:

  1. Use MIR to “describe” the constant. This MIR could conceivably be interpreted at compile time using a MIR interpreter, if one were to be written. This would enable all sorts of cool stuff, but be prone to halting problems unless special care is taken.
  2. Use an AST-like structure to evaluate constants – having recently written something similar for rust-clippy (which Manish proposed to merge into rustc_middle), I feel this is the more robust, if somewhat less capable, solution. I’m a bit puzzled about why you’d want to encode a Call(..) as a constant, however – is there a way for such a call to be “interpreted” at compile time, and would that even make sense?

Good question. I should have added that this "AST" I gave can be taken with a grain of salt; it can probably be minimized (and/or has some duplication, as @arielb1 pointed out – I'll have to go look and remember what's going on there). So yeah, we may want to eagerly evaluate calls -- but if we get support for ifs/branching in const fns, then we would want the option to defer calls, so as to prevent infinite loops. Not sure if we want it otherwise.

Panic may be useful, I'm not sure. Specifically, I do not think we want to change the results of code generation based on constant-evaluation optimizations (as discussed in rust-lang/rfcs#1229, “turn statically known erroneous code into a warning and continue normal code-generation”, by oli-obk), so I'm not sure when it would make sense to generate a Panic result (perhaps when overflow checking is enabled?)

In my constant implementation, handling ifs is fairly trivial: I just follow the then or else path (still in the AST) depending on the (constant) value of the condition, before trying to evaluate the chosen branch. As for infinite loops, I think allowing for loops to iterate freely over constant values, and allowing while loops a fixed number of iterations – reverting to runtime evaluation when the loop hasn’t finished after that many iterations – would be the way to go. Obviously we’d need a checked recursion limit, too.
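A self-contained sketch of that branching strategy, with a fuel counter standing in for both the iteration and recursion limits (all names here are illustrative):

#[derive(Clone, Copy, PartialEq, Debug)]
enum Value {
    Int(i64),
    Bool(bool),
}

enum Expr {
    Lit(Value),
    If { cond: Box<Expr>, then_e: Box<Expr>, else_e: Box<Expr> },
    // while-loops would add a case that decrements fuel per iteration
}

fn eval(e: &Expr, fuel: &mut u32) -> Option<Value> {
    if *fuel == 0 {
        return None; // limit exceeded: give up and revert to runtime evaluation
    }
    *fuel -= 1;
    match e {
        Expr::Lit(v) => Some(*v),
        Expr::If { cond, then_e, else_e } => match eval(cond, fuel)? {
            // Evaluate the condition first, then descend into only the
            // chosen branch; the other branch is never visited.
            Value::Bool(true) => eval(then_e, fuel),
            Value::Bool(false) => eval(else_e, fuel),
            Value::Int(_) => None, // ill-typed condition: bail out
        },
    }
}

fn main() {
    let e = Expr::If {
        cond: Box::new(Expr::Lit(Value::Bool(true))),
        then_e: Box::new(Expr::Lit(Value::Int(1))),
        else_e: Box::new(Expr::Lit(Value::Int(2))),
    };
    let mut fuel = 1_000;
    assert_eq!(eval(&e, &mut fuel), Some(Value::Int(1)));
}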

Regarding panic / !, as it is a possible not-quite-value of any type, we may want to generate it in constant code, too. Having this information at compile time seems rather useful for lints or other parts of the compiler.

This is basically what I am saying, I think.

I don't quite follow, can you elaborate? They seem quite related to me, in mechanism if nothing else. Both are basically matching against an impl and selecting something defined within: in one case it's a type, and in the other it's a value. But the operations are very similar.

Yes, I'd be (potentially) amenable to moving the definition of a constant around to other places, perhaps ty. I think from the POV of MIR it'd be fine to treat constants opaquely, part of the higher-level IR, basically.

Panic may be useful, I'm not sure. Specifically, I do not think we want to change the results of code generation based on constant-evaluation optimizations.

Our integer overflow strategy explicitly allows us to replace overflowing expressions with run-time panics - the reason #1229 is problematic is that we can fail to compile code that is not actually executed in practice, though it may be better not to actually insert panics in unchecked builds.

I don't quite follow, can you elaborate? They seem quite related to me, in mechanism if nothing else. Both are basically matching against an impl and selecting something defined within: in one case it's a type, and in the other it's a value. But the operations are very similar.

Maybe I am missing something, but I don't see any dependency between lazy normalization and type-level constants: when you want an actual concrete (head) value, the equivalent of structurally_resolved_type, you have to normalize to it no matter what. When you are just unifying, you can just eagerly normalize (possibly with the "replace yet-unresolvable fictitious type with variable and predicate" trick, but that is only needed when we allow mixing inference variables and constants).

I would like to take the opportunity to ask: how far is compile-time evaluation envisioned to go in Rust?

In Scala, I believe you can get pretty much everything evaluated at compile time (even with I/O), while in contrast C++03 was quite limited (mostly maths) and C++14 goes slightly further with constexpr, but not far enough for the “regular” std::algorithm to be declared constexpr (as the feature is too restricted). A notable limitation in C++ is that allocating memory is not constexpr, which immediately rules out a lot…

And of course, there is the difficulty that being a systems language, Rust has to allow memory twiddling, which poses some challenges for compile-time evaluation (especially coupled with cross-compilation).

I think there is an opportunity for Rust to jump ahead of C++ here, and allow more than just integrals in generic parameters as well as more functions to be evaluated at compile-time.

Am I the only one wishing for it?

Syntax extensions aka compiler plugins can execute arbitrary code during compile-time.

const fn CTFE is supposed to eventually have most of the features of stable Rust. It is only supposed to do pure actions - file IO, as well as converting memory addresses <-> numbers, are forbidden. I think we should make it dynamically-typed and allow calling arbitrary functions, causing a compile-time error if they misbehave, and even allow allocators in a garbage-collected fashion. @nikomatsakis is afraid this is likely to cause backwards-compatibility issues when a library changing its constness breaks user code.


I certainly understand @nikomatsakis’ concern, backward compatibility is tough enough to guarantee when you can “only” inspect function signatures! As a result I would tend to say that having to mark const functions explicitly and having the compiler check that they only call const functions (recursively) could be a huge help…

Although the issue with this, I guess, is with traits (such as Add): marking the trait methods const imposes that implementations be const, which breaks backward compatibility when introducing the feature. A work-around might be to allow const methods to implement non-const ones (so the trait definition guarantees nothing, but a particular impl might).

Regarding the “no memory address <-> number” conversion: I suppose this also means no stashing bits in “unused” parts of pointers, etc… couldn’t this potentially prevent a number of useful data structures from being used?

I was wondering if one could not leverage LLVM IR here: since translating to LLVM is the goal of compilation anyway, let’s translate some items early (just-in-time translation, to be reused at codegen time), and then build an emulator on top of LLVM (complete with memory simulation) which behaves exactly as specified in the LLVM data layout (endianness, pointer width, alignment, …) so that it works in cross-compilation cases (even with just numbers, usize and isize widths have to be emulated). Then translating pointers to numbers (and vice versa), stashing bits, memsetting structs to all zeroes, and fiddling with the innards of floating-point numbers all fall out naturally without any corner cases.

It would be necessary to bootstrap the process by providing a definition of a couple of key C functions (malloc, realloc, free, memcpy, and memset at least), which can be marked const to make type-checking const fn regular (i.e., only depending on other const fns).

Further, the emulator could be shared with other languages (such as C++), which would help with the inevitable maintenance burden as LLVM evolves.

On the other hand, if used to power the computations behind non-type generic parameters, it requires LLVM for type-checking, which might be undesirable; but then LLVM is required at some stage of compilation anyway…

@arielb1 said it, but I'll repeat it: I think we should strongly consider whether syntax extensions can fill this gap. That would allow you to do literally anything you want without any additional complications. Basically you could write arbitrary code, throw it in a helper crate, and then have your crate call it. If creating a distinct crate is too onerous, we can likely use some kind of staging techniques as well.

Specifically marking functions as const has the large downside that anyone forgetting to mark any one function as const (when it could be const) can break your use case. Ensuring proper const discipline in the standard library is one thing, but if a third party library is not careful, that can be a serious roadblock. Especially since, unlike with traits, you can’t make a newtype shim that fixes the problem by implementing the relevant traits.

An escape hatch along the lines of “I know this isn’t const but trust me you can evaluate it” has almost the same compatibility hazards as not enforcing anything at all. Just because it’s opt-in doesn’t mean people will use it wisely.

An LLVM interpreter is a pretty big project. There’s an interpreter in LLVM (can be invoked via lli -force-interpreter=true), but it only executes programs for its host architecture, it’s not an emulator, and it’s not particularly popular or complete. I have doubts whether other communities will have enough use for an “LLVM emulator” to help with maintenance (it would compete with fully-featured emulators like QEMU for starters). Moreover, most bugs in such an emulator that don’t outright crash the compiler will result in silently wrong constants. Have fun debugging that!

Also I second the reference to syntax extensions (and cargo build scripts, to a lesser extent). Any additional language support is simply for making common use cases easier to express. Something that involves dynamic memory allocation and pointer-mangling data structures might be “not worth it” to evaluate in the compiler proper.

The problem is that ASLR is applied at run-time, so the integer values of addresses are not known at compile time. We could possibly add some ability to use flags on raw pointers via intrinsics, if there is enough demand.

Yes, I am very nervous about this sort of split.

Rust already solved this for variables by requiring mut in a binding to make it non-const. But it would obviously be a breaking change to require an annotation for functions that are not const evaluable.

We could have a lint that does it, though: a const-evaluable function calling a non-const function.
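The kind of situation such a lint would catch, sketched with today’s const fn rules (the lint itself is hypothetical):

const fn double(x: u32) -> u32 { x * 2 }

fn triple(x: u32) -> u32 { x * 3 } // const-evaluable, but not marked const

const FOUR: u32 = double(2); // fine

// const SIX: u32 = triple(2);
// ^ error today: cannot call a non-const fn in a constant; a lint could
// instead point at `triple` and suggest marking it `const fn`.

fn main() {
    assert_eq!(FOUR, triple(2) - 2);
}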
