Resolving Rust's forward progress guarantees

Please leave feedback here for the Inside Rust post discussing this issue.

We are particularly interested in approaches (perhaps including modifications to LLVM) that would let us fix the soundness hole at low-to-zero compile-time cost, without forcing users to reach for an intrinsic or similar mechanism in edge cases (which would essentially leave the soundness hole open).

With that in mind, people who are closely familiar with the inner workings of LLVM are likely to have the best insight here, though we are more broadly interested in concrete ideas for help as well.

Thank you!

8 Likes

Thanks for pushing forward on this issue! I'm always excited to see all the work people have been putting into fixing miscompilations that arise from the gap between LLVM and Rust.

My own opinion is that regressing compile times this badly to resolve this rare miscompilation would not be a good trade-off. Compile times are one of our biggest pain points for users; this issue does not register for anyone. Obviously we need to resolve it eventually to make rustc conform to the theoretical Rust spec and eliminate the miscompilation, but not at the cost of a 3-30% regression in compile time and a 3-7% regression in run time (based on rustc's own runtime).

11 Likes

Maybe we could have an intrinsic that inserts this hint. Something like

    // in std::hint
    fn make_forward_progress();

We could then simplify the heuristics to detect only the trivial cases (loop {}, maybe simple unconditional recursion), which should cover most cases of this bug without impacting compile times too much. Then those who are still affected (which should be very few people) can use the provided intrinsic to work around LLVM.
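For illustration, a minimal sketch of how such a hint might be used, assuming the name proposed above (make_forward_progress does not exist as an API today):

    // Hypothetical sketch: `make_forward_progress` is the hint proposed above,
    // not an existing function. Calling it inside an otherwise side-effect-free
    // loop would stop the backend from assuming the loop terminates (and
    // therefore from deleting it).
    fn halt() -> ! {
        loop {
            std::hint::make_forward_progress();
        }
    }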

8 Likes

What I'd be most interested in is feedback from folks who know LLVM well, to get an idea of whether there is some other way for us to avoid the mis-optimizations that would incur a smaller cost, in both compilation and execution.

5 Likes

I do implement panic handlers and other weird things, but compile times are more important to me. I like @RustyYato's proposal of providing a sideeffect or infinite_loop intrinsic that people can call in the niche use cases where it matters.

1 Like

I agree with everything @withoutboats said, except that I see somewhat more urgency in the situation (even though this has been a longstanding issue), though not enough to justify this particular change.

Does anyone have a link to the upstream discussion mentioned in the blog post? I wonder if we could adopt something in Rust's LLVM fork even if it's not ready for upstream.

1 Like

The relevant optimizations in LLVM should just be fixed.

5 Likes

As far as manual changes to affected programs go, we already have many options for such workarounds. Many of the simplest ones (e.g., inline assembly or bench_black_box) are nightly-only, but there are stable ones too; they're just generally less obvious, less universally applicable, or have a bigger performance impact (e.g., throwing in some volatile stores). Stabilizing this intrinsic and having it around forever seems unappealing when its sole purpose is to serve as a minor mitigation for a bug that will be fixed sooner or later. So I don't think this intrinsic is warranted: it's only of much use if it's stable, and it probably shouldn't be stable.
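For concreteness, one of those stable-but-less-obvious workarounds might look roughly like this (the function name is just for illustration):

    // A volatile write counts as a side effect, so a loop containing one cannot
    // be assumed to terminate (or be removed) under the forward-progress rules.
    fn spin_forever() -> ! {
        let mut sink: u8 = 0;
        loop {
            // SAFETY: `sink` is a live local, so writing through a pointer to it is valid.
            unsafe { core::ptr::write_volatile(&mut sink, 0) };
        }
    }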

11 Likes

Also see this GH issue tracking the problem:


That's not entirely true; I have heard from people using Rust on embedded that they ran into this. (loop {} is a surprisingly common implementation of a panic/abort/exit function on such platforms, as there is nothing else to do when you have no OS.)
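To make the affected pattern concrete, such a bare-metal panic handler (in a #![no_std] crate) typically looks roughly like this:

    use core::panic::PanicInfo;

    // With no OS there is nothing sensible to do, so the handler just spins.
    // Under the forward-progress assumption this empty loop is exactly the
    // kind of code LLVM may miscompile.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }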

Unfortunately, LLVM does not seem very eager to fix this bug on their side, despite it being a violation of the C standard, which (unlike C++) does not permit removing while (true) {}.

Also, the assumption of forward progress is deeply embedded throughout LLVM, so a fix is non-trivial -- "just fix it" does not do this justice.


Is there data on how bad the overhead of adding the "sideeffect" intrinsic is if we ignore recursion and just focus on loops? Then "only" back-edges in the CFG would need the intrinsic, and loop-free functions (which are the majority of functions) would be entirely unaffected -- but the loop {} idiom would work.

7 Likes

One option to solve this use case specifically would be to provide panic handlers that are implemented as a side-effecting loop. Possibly as simple as another option alongside the panic = abort flag, or possibly only as a function exposed in core::panic that users would need to wrap themselves.

In the long term this would be equivalent to writing loop {}, but still somewhat better because it signals the user's intent more clearly.

3 Likes

Looking at the patch that adds support for sideeffect to LLVM (https://reviews.llvm.org/D38336), it seems surprising that it would regress compilation time as described; in fact, I'd have guessed emitting it would make the compiler faster, since some optimizations are inhibited.

Is the slowdown just because of the additional IR? If so, LLVM could easily be patched to behave as if every basic block had it. Otherwise, we should figure out what the problem is.

1 Like

Often you still want to write the panic handler yourself in order to output the panic message somewhere for debugging, e.g. send it over the serial port or use the semihosting technique to print it to a connected GDB instance. So a separate panic = abort option that only loops would not be useful in such cases.

Personally, I agree with @RustyYato here that the best path forward is to start with special-casing loop {} and perhaps other cases that are simple to detect. This would fix the most common instances of this error and should not have a large compile-time impact. Mainly, it would fix the loop {} pattern commonly used in embedded.

7 Likes

I agree with @RustyYato's suggestion too, but I am generally concerned about "the other cases" (non-trivial loops that nevertheless do not make forward progress): should they remain potentially unsound :warning:?

  • enum Void {}
    let it: Void = non_trivial_but_infinite_loop();
    match it {}
    
    must remain sound (see the expanded sketch below).
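Spelled out as a complete, hypothetical example (the loop body and function names are made up; the loop stands in for any side-effect-free computation the compiler cannot prove terminates):

    enum Void {}

    // Stand-in for any side-effect-free loop that cannot be proven to terminate.
    fn non_trivial_but_infinite_loop() -> Void {
        let mut x: u64 = 1;
        loop {
            x = x.wrapping_mul(3).wrapping_add(1);
        }
    }

    fn caller() -> ! {
        let it: Void = non_trivial_but_infinite_loop();
        // `Void` has no values, so this point should be unreachable; it only
        // becomes reachable if the backend deletes the "useless" loop above
        // and control falls through.
        match it {}
    }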

I think that, because of LLVM and its compile-time impact, it is less bad for the ecosystem to make the "breaking change" of turning lack of forward progress into a compile-time error than to take that compile-time + runtime regression: most people here seem to agree that the latter is very bad, but since unsoundness is worse, the only thing we have left is the former.

More precisely, assuming Rust is able to "detect lack of forward progress" (if LLVM can, then so should Rust?):

  1. Emit a warning on infinite loops that do not make forward progress. The warning would state that this will become a hard error in the future and suggest using either:

    • the non-unsafe intrinsic ::core::hint::make_forward_progress(), or

    • an unsafe fn intrinsic, ::core::hint::disregard_forward_progress(), which disables the compile-time error or warning without emitting the forward-progress hint to the backend (hence potentially unsound, cf. @RalfJung's example at https://github.com/rust-lang/rust/issues/28728), for those who find Rust overly conservative w.r.t. forward progress in their code (i.e., the performance impact is unacceptable and they can guarantee that the backend being used does not trigger UB in their case).

  2. Later, make the warning a hard error, except for the aforementioned "trivial" cases (loop {} and trivial recursion), where Rust would auto-inject the make_forward_progress hint (unless the unsafe one was used), so that most existing code keeps compiling by automagically becoming sound. In those cases it would remain a warning, to make it less surprising that "refactoring the infinite loop into a more complex one" could make the code stop compiling.

2 Likes

I disagree with requiring forward progress, and I've never quite understood the standpoint of C/C++/LLVM on the matter. A very core property of other compiler passes is that they preserve the sequence of program side effects¹. However, removing "empty" loops plainly violates this by allowing side effects to occur that the original program would never have reached. Being able to, at any point, end the sequence of visible side effects is an incredibly useful power to have, and the opposite, forward progress, somewhat arbitrarily disallows some finite sequences. To be honest, I'm stumped as to how this ever ended up getting defined and established in the C++ standard and in the LLVM architecture?

¹ Justifying the transformation by declaring it UB is a cop-out, imho. Unlike many other undefined behaviours, it cannot be dynamically observed to occur, as the actual violation would, by definition, require infinite execution.

4 Likes

Neither LLVM nor Rust can detect lack of forward progress.

LLVM performs optimizations assuming forward progress. But as usual, exploiting UB for optimizations is much easier than detecting UB. The latter is, in fact, undecidable.

16 Likes

Breaking compatibility is also extremely bad, and it seems uncalled for here. We currently have the situation that we cannot fix the soundness bug without significant compile-time regressions; I don't see a reason to assume that this will remain so forever. Mitigating the bug as well as we can (e.g., through targeted, incomplete insertion of llvm.sideeffect) helps until a complete fix becomes practical, but changing the language definition seems excessive. Especially since, as discussed below, I think a sound analysis for this would be very difficult and have many false positives.

Moreover, I see no reason to expect that all the analysis necessary to implement this warning/error would have less compile time impact. Especially in debug mode, where the impact of -Zenable-sideeffect on perf.rust-lang.org is currently at most 8% and frequently less (vs up to 30% with optimizations).

There is a significant difference here: some parts of LLVM may sometimes exploit lack of forward progress if they happen to notice it (likely not even intentionally or consciously, e.g. considering a noreturn function to have no side effects is just an emergent property of the most obvious way to infer "has no side effects"). In contrast, to achieve soundness, rustc would have to conservatively identify every place that could potentially loop infinitely without side effects.

Being 100% precise about this is undecidable (as Ralf points out), but it is also extremely hard to usefully approximate. You have to start with the assumption that any piece of code may loop infinitely without side effects and find proof that it does not. Doing this in a function-local manner (like most static checks in current Rust) is completely hopeless since e.g. you do not have any clue which calls may potentially be part of a recursive cycle and which ones will perform side effects. But even a whole program analysis has a very hard time doing this soundly. Indirect calls and calls into FFI still can't be analyzed to rule out mutual recursion, and even without those it's often very difficult to prove (automatically) that while loops or recursion with a base case eventually terminate.
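As a small illustration of that last point (a hedged sketch; the names are made up), consider a loop whose termination is a famous open question:

    // A Collatz-style step, using wrapping arithmetic so the loop body stays
    // free of side effects (no overflow panic) even in debug builds.
    fn step(n: u64) -> u64 {
        if n % 2 == 0 { n / 2 } else { n.wrapping_mul(3).wrapping_add(1) }
    }

    // Nobody knows whether the Collatz iteration terminates for every input,
    // so a sound "lack of forward progress" analysis would have to flag this
    // loop conservatively.
    fn collatz_like(mut n: u64) -> u64 {
        while n != 1 {
            n = step(n);
        }
        n
    }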

Some languages are specifically designed to make automatic proofs of termination feasible, at least for the style of program typically written in them, but Rust is very much not such a language, and many Rust programs are not restricted to that style. So I expect staggering amounts of false positives from such an analysis, so many that it could easily break a significant fraction of existing Rust programs.


Finally: if I was wrong about all this and we could identify places that lack forward progress with few false positives and small compile time impact -- then we should first use that analysis to guide insertion of llvm.sideeffect() calls! The huge compile time impact was observed with a naive implementation that inserts the intrinsic in every function entry and in every loop, inserting them only where needed should help a lot.

9 Likes

Ok, thanks for the clarifications: I wanted to explore what an attempt at avoiding the performance impact would look like, but it turns out that not only is it not feasible, it would also incur a higher performance impact!

Still, if my initial dichotomy holds (neither answer has contradicted it), it means that there isn't really a choice here: the only remaining option is to "tank" the performance regression by having the llvm.sideeffect() insertions, and follow up with incremental improvements by getting smarter about identifying places where the insertion can be elided.

I don't really follow your logic here. sideeffect emission is not binary (mitigations like targeting loop {} specifically are useful), there are other possible ways to reduce compile time impact (changes to LLVM), and we do not have to start emitting sideeffect by default before making those improvements.

2 Likes

I just mean that, conceptually, we should try to reach for soundness asap, and iterate from there (and the only actual / concrete sound solution brought to the table right now is emitting sideeffect calls).

In practice, as long as Rust code remains sound, doing so can obviously be "squashed" into never emitting sideeffect at all, if someone comes up with an alternative solution in the meantime.

I guess this being so obvious is the reason I have been misunderstood (I might have put too much emphasis on the how rather than the what), but if you look at the first half of this thread, there are several posts saying "I'd rather have this niche miscompilation not be fixed yet than take a performance regression" (maybe because they predate Ralf's actual unsoundness example, so we didn't know how "bad" the miscompilation was).

But now that we do know it can lead to unsoundness within non-unsafe code, fixing it should be more important than avoiding the performance regression. Obviously, if we can do both, that's even better, but I just wanted to express how the priorities should, imho, be laid out. (In hindsight I should have started by explicitly saying all this; I apologize for the miscommunication.)

3 Likes

Eh. As ugly as it is, there are a lot of open soundness bugs, some of which have been open for a long time. The forward progress one has been known for 4.5 years (that's when the issue was filed); there's another big one of similar nature (LLVM exploiting C undefined behavior even though it shouldn't be undefined in Rust) that's been known since 2013, and has had a fix held up over similarly-sized performance regressions. I don't want to be defeatist: we may be far from the goal of 100% soundness, but the only way to get there is one bug at a time. But I do think the number of other bugs makes it less obviously correct to add a hacky workaround with a hefty cost, as opposed to finding a proper fix. On the other hand, it wouldn't be good if the desire to find a proper fix resulted in another 4.5 years of inaction... :slight_smile:

18 Likes