Hi. This is my first time using this forum. If this is not the appropriate place to ask this kind of question, I am sorry (please tell me where I should ask in that case).
This is kind of a follow-up to a topic I started on the Rust Users Forum - How to enable overflow checks in const evaluation. I have noticed an inconsistency in how Rust enables overflow checks during const evaluation. For example, the following code:
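(The original snippet is not quoted here; a minimal reconstruction showing the reported behavior, with the exact definitions of FOO and BAR being an assumption, could look like this:)

```rust
const fn add(a: u8, b: u8) -> u8 {
    a + b // checked only when -C overflow-checks is on
}

const FOO: u8 = u8::MAX + 1;     // initializer MIR: always overflow-checked
const BAR: u8 = add(u8::MAX, 1); // overflow happens inside const fn MIR
```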
fails to compile with two errors (stating that overflow happened during compilation) in debug mode. But in release mode only the evaluation of FOO fails; BAR is successfully computed and the overflow is happily accepted. As far as I understand, there is currently no way to enable all overflow checks in const evaluation without also enabling them at runtime.
I feel that this is wrong, and I would want at least an option (maybe even always forcing it) to enable those checks during const eval independently of enabling overflow checks at runtime. Arguments other than "it feels good" would be:
it reduces inconsistency
I believe there is already a consensus in the Rust community that overflow <=> a bug. This would let Rust catch more of those bugs at compile time (which is consistent with Rust's general approach of pushing invariant checking from runtime to compile time).
There is already a precedent of const evaluation having more rigorous requirements than "normal" code. For example, it has stricter safety rules.
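The "overflow <=> a bug" convention is also visible in the standard library, which treats plain arithmetic as checked (in debug builds) and makes every other overflow behavior an explicit opt-in. A quick illustration:

```rust
fn main() {
    // Plain `+` treats overflow as a bug: it panics in debug builds.
    // Any other behavior has to be requested explicitly:
    assert_eq!(u8::MAX.checked_add(1), None);          // overflow reported
    assert_eq!(u8::MAX.wrapping_add(1), 0);            // overflow requested
    assert_eq!(u8::MAX.saturating_add(1), u8::MAX);    // overflow clamped
    assert_eq!(u8::MAX.overflowing_add(1), (0, true)); // overflow flagged
}
```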
So I would like to ask: how could such a change to the language be proposed? Was it already discussed and dismissed? If so, could you please point me to those discussions? Do you think it would benefit the language? Is it even possible to implement (or would this be a breaking change)?
As an aside, this depends on your perspective. One of the intentions (though this may be relaxed now) is that const functions behave the same at runtime versus compile-time. Changing that would make it inconsistent.
(Not a refutation of your idea, which I think has merit. Just a clarification of what the intent was.)
OTOH, with the overflow check they would still behave the same at runtime as they did at compile time as long as the code compiles. If it doesn't compile there is no runtime, so no inconsistency.
The main problem we have on the impl side is that we generate MIR once for a function, borrowck it, and then split it into const eval mir (unoptimized) and codegen mir (optimized). I do not see a good way to improve this situation.
While we could just always generate the overflow checks and have a pass that removes them from codegen mir, that's likely a prohibitive compilation time regression.
Ideally I would like all of debug-assertions enabled for const, including my own custom assertions. I suppose the pain of this would be in still only codegenning once, but it seems like that could be fine:
generate mir without "knowing" the value of debug-assertions
const eval mir → debug-assertions = true → run assertions
optimize mir → debug-assertions = false → dead-code-eliminate the assertions
And yes that should probably generalize to any cfg, or people (I'm people!) will start using debug-assertions to help detect const eval
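For context, today cfg!(debug_assertions) resolves to the same value during const eval as at runtime, which is exactly what the scheme above would change (the names below are just for illustration):

```rust
// cfg!(debug_assertions) is currently fixed once per build profile, so a
// const fn observes the same value at const eval time and at runtime.
// Under the proposed scheme, const eval would instead always see `true`.
const fn assertions_enabled() -> bool {
    cfg!(debug_assertions)
}

const AT_CONST_EVAL: bool = assertions_enabled();

fn main() {
    // Holds in both debug and release builds today:
    assert_eq!(AT_CONST_EVAL, assertions_enabled());
}
```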
It certainly can't for any cfg, because code inside a disabled cfg doesn't even have to compile.
But really, lang has a wishlist item for fewer cfgs anyway -- if we could move more things to trait bounds, that'd be nice. (Imagine where Platform: Windows instead of cfg(windows), say.)
If it's compiled with --const-cfg "debug-assertions" then the cfg'd code does have to compile (or fail the compilation). Sorry, I'm not suggesting that const eval should enable every cfg willy nilly. I don't know if miri has any existing notion of reified cfgs though, or if they are evaluated at a much earlier stage.
So is that kosher enough for at least an experimental RFC? It certainly clears the utility bar for me, although I can imagine the distinction eventually causing someone a lot of frustration.
I think it wouldn't be that bad, because only a minority of code actually gets run in a const context. Also, you can do a quick check on the regular MIR to see if there are any problems (missing overflow checks etc.), and if there aren't, you skip generating MIR again.
You could even just annotate the MIR with metadata saying exactly where the overflow checks were elided, to generate an overflow-checked const eval MIR more quickly.
You always need to generate const MIR, because either you're using it in the local crate or it's part of your public API and you need to export the MIR in metadata. Lazily generating it from source is not a setup we can support, and probably don't want to support, because it's more expensive and complex.
Ok, so what about having some kind of flag on MIR instructions that makes the instruction a no-op when not run in a const context? Perhaps that's the approach you said would be prohibitively expensive, but it's still a single MIR.
Why can't codegen MIR be used for const eval? Assuming the optimizations are correct, both should produce the same result. It would also help with const eval performance, which is unbearably slow for anything non-trivial (to the point that I considered using build scripts to replace some involved const computations).
Overflow checks and debug asserts would then be included in debug builds but skipped in release builds. It may be surprising that a piece of code can fail to compile in a debug build while compiling fine in a release build, but I think it would not be difficult to explain, since Rust developers should already be aware of runtime differences between debug and release builds.
One subtle impact is that more MIR will affect MIR inlining thresholds (and thus have chaotic perf impacts). Tbqh, from the outside it feels like the MIR int math ops should be lowered during codegen based on the overflow-checks config, rather than baking the checks into the generated MIR, but there are many constraints that I'm unaware of.
Operations like const_eval_select need to be lowered differently in const MIR and runtime MIR, and we want to naturally do that before doing optimizations on runtime MIR.
We could also run optimizations separately on const MIR, but my expectation is that we'd lose more time running the optimizations than we'd gain from speeding up const eval. That seems like an experiment worth doing, though, if someone wants to give it a shot. We'd also detect less UB during const eval, which is legal but could be surprising.
That's the status quo already so I don't see how this achieves what the OP is asking for?
Currently we do have a somewhat odd special case in MIR building, which is that when building MIR for a const/static initializer, we always add overflow checks; for all other MIR we only do that when -Coverflow-checks is set. I can see how that's surprising. We could fix that inconsistency by no longer having this special case, but that's the opposite of what the OP asked for -- that would mean fewer overflow checks in const, not more.
Yes, that's what we'd have to do to implement the feature request: make MIR building always the same no matter whether overflow checks are enabled, and make that choice later in a lowering pass. Maybe a dedicated representation, like a "checked arithmetic terminator", could help alleviate the compile-time effects, but it's hard to say what the cost would be without trying it.
Maybe it could be an opt-in attribute for selected heavy const computations or it could be a crate-level option provided in Cargo.toml?
Ah, true. I thought const eval always checked for overflow, probably because it produces a compilation error during development with dev-profile builds.