Partially enabling optimizations in debug mode


I jokingly suggested to a friend adding a fast { .. } block which somehow makes the code inside it “faster”. He actually thought something like that could be useful, with the semantics of “always apply optimizations, even in debug mode”.

I think some mechanism for this could be useful… like an attribute #[hot] that tells the compiler to always run all possible optimizations on the body of a function, as if in release mode. The use case my friend had in mind was a tight graphics loop in a game engine; apparently, the runtime checks in debug mode stacked up until the compiled executable was too slow to test effectively. So, my friend would replace

loop {
    unsafe {
        // my graphics magic here
    }
}

with something like

#[hot]
unsafe fn graphics_magic(..) { .. }

loop {
    unsafe { graphics_magic(..) }
}

I have no idea how optimizations work at the MIR level though, so something like this might be difficult to do, or maybe even undesirable!


There’s some discussion of #[optimize(speed)] in


I don’t know the details, but I can imagine that applying optimisations to only part of the code could get tricky. I do see some benefit in marking parts of the code as hot, or branches as likely, but more as a way to give the optimizer extra information (and I’d prefer an attribute over a keyword).

But isn’t the solution your friend needs simply to bump the opt-level of the debug profile from 0 to 1? That should still compile quite fast and not damage the ability to play with it in a debugger too much, while picking the low-hanging fruit of optimisation and speeding it up.
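For reference, a minimal sketch of that Cargo.toml change; the per-package override shown second is one way to keep your own code easy to debug while optimizing dependencies, and which choice fits best depends on the project:

```toml
# Cargo.toml

# Optimize everything a little in debug builds:
[profile.dev]
opt-level = 1

# Alternatively, keep your own code at opt-level 0 for easy debugging
# and fully optimize only the dependencies:
[profile.dev.package."*"]
opt-level = 3
```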

Another option (not sure if it is possible in this case) is marking some of the slow tests as ignored and running them only on demand on the CI server, or in some other way that doesn’t involve waiting for them.
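A sketch of that pattern; the function names here are made up, but the #[ignore] attribute and the `cargo test -- --ignored` invocation are standard:

```rust
// Stand-in for the real slow work (hypothetical helper).
fn expensive_computation() -> u64 {
    (0..1_000u64).sum()
}

// Skipped by a plain `cargo test`; run on demand (e.g. on CI) with
// `cargo test -- --ignored`.
#[test]
#[ignore = "slow; run on CI with `cargo test -- --ignored`"]
fn slow_graphics_benchmark() {
    assert_eq!(expensive_computation(), 499_500);
}

fn main() {
    // Quick sanity check that the helper itself works.
    assert_eq!(expensive_computation(), 499_500);
}
```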


On its face this is a good idea; however, after spending some time thinking about it, I find myself against it.

For the trivial case like this, where you know where the program slows down, it’s an attractive feature for sure. But as soon as the program grows in cyclomatic complexity and sprawl, it seems like a fast way for #[hot] to end up on about 50% of the functions, bringing only line noise and confusing input for the compiler into the project.

I would worry that it would result in more confusion among Rustaceans than in successful optimizations, since it’s not clear to the vast majority of programmers where a compiler performs optimizations. That’s why we have compilers in the first place: optimization is enough of an art and a science that compilers have pushed manual programming in binary or assembly to the wayside.


I encountered a use case for this some time ago, with a project that contained a hash function written in Rust. Debug builds were very slow, with lots of function calls to core::num::<impl u32>::wrapping_add. I can see this causing issues with testing.

In this case, more optimization would probably improve the debugging experience, because there is really no need to single-step into wrapping_add.
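To illustrate (a hypothetical reconstruction, not the actual project’s code), here is a djb2-style hash built entirely from wrapping arithmetic. In an unoptimized debug build, every wrapping_mul and wrapping_add in the loop is a real function call into core::num; at opt-level 1 and above they compile down to single machine instructions:

```rust
fn djb2(data: &[u8]) -> u32 {
    // In a debug build, each iteration makes two calls:
    // core::num::<impl u32>::wrapping_mul and ::wrapping_add.
    let mut h: u32 = 5381;
    for &b in data {
        h = h.wrapping_mul(33).wrapping_add(b as u32);
    }
    h
}

fn main() {
    println!("{}", djb2(b"hello"));
}
```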
