Allow different floating point rounding modes outside of inline assembly

I've run into a situation where I need to persistently change the floating-point rounding mode by modifying the MXCSR register on x86. The problem is that Rust declares this immediate undefined behavior, per the following paragraph:

Note that modifying the masking flags, rounding mode, or denormals-are-zero mode flags leads to
immediate Undefined Behavior: Rust assumes that these are always in their default state and 
will optimize accordingly. This even applies when the register is altered and later reset to its
original value without any floating-point operations appearing in the source code between those
operations (since floating-point operations appearing earlier or later can be reordered).

This forces me, whenever I wish to change the floating-point flags, to do so repeatedly (potentially millions of times per second) when it really could have been done once for the entire lifetime of my application. I find this a very restrictive rule for the language to impose. An alternative to removing the assumption that these flags always hold their default values would be to introduce built-in functions that perform operations with a specified rounding mode (and that are also fast), independent of the rounding mode in the system register, although I suspect that approach is hard to make portable or efficient. Any ideas or potential solutions for this issue would be greatly appreciated.
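To illustrate the cost: the only conservative pattern is to save, set, and restore MXCSR around every region that needs directed rounding. A minimal sketch using the real `std::arch::x86_64` intrinsics (`with_round_down` is a hypothetical helper name; and note that even this is still technically UB under the quoted rule if the closure contains ordinary Rust float operations):

```rust
use std::arch::x86_64::{_MM_GET_ROUNDING_MODE, _MM_ROUND_DOWN, _MM_SET_ROUNDING_MODE};

/// Hypothetical helper (x86-64 only): run `f` with the MXCSR rounding
/// field set to round-toward-negative-infinity, restoring the previous
/// mode before returning. Under the quoted rule this save/set/restore
/// dance has to wrap every call site instead of happening once at startup.
fn with_round_down<T>(f: impl FnOnce() -> T) -> T {
    unsafe {
        let saved = _MM_GET_ROUNDING_MODE(); // read the rounding bits of MXCSR
        _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN);
        let result = f();
        _MM_SET_ROUNDING_MODE(saved); // put the mode back before returning
        result
    }
}

fn main() {
    // The closure would hold the work that actually needs directed rounding.
    let n = with_round_down(|| 2 + 2);
    println!("{n}");
}
```

Doing this once per call is exactly the repeated overhead described above, compared to setting the mode a single time at startup.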


I’m not really a language developer, but the problem with providing methods to set the global flags is that they necessarily have a global effect, so any third-party code that does floating-point operations may also be relying on the default flag settings to perform correctly.

Probably, the best way forward would be to define an alternative floating-point primitive type that has the semantics you need, and then somehow make the optimizer aware of the flag-register changes this requires so that it has the opportunity to elide some or all of them.
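One way to sketch what such a type might look like today: a newtype whose arithmetic goes through inline assembly, since inline asm is currently the one escape hatch where the environment assumption doesn't apply. Everything here is hypothetical (`EnvF64` is an invented name, and routing every operation through asm defeats most optimization), but it shows the "optimizer-opaque float" idea:

```rust
use std::arch::asm;

/// Hypothetical wrapper: an f64 whose arithmetic respects whatever rounding
/// mode MXCSR holds at run time. Because inline asm is opaque to the
/// optimizer, the `addsd` below can be neither const-folded nor moved
/// across a point where the floating-point environment is changed.
#[derive(Clone, Copy, Debug, PartialEq)]
struct EnvF64(pub f64);

impl std::ops::Add for EnvF64 {
    type Output = EnvF64;
    fn add(mut self, rhs: EnvF64) -> EnvF64 {
        unsafe {
            // addsd rounds according to the MXCSR rounding field at run time.
            asm!("addsd {a}, {b}", a = inout(xmm_reg) self.0, b = in(xmm_reg) rhs.0);
        }
        self
    }
}

fn main() {
    // 1.5 + 2.25 is exact in binary, so the sum is 3.75 in every rounding mode.
    println!("{:?}", EnvF64(1.5) + EnvF64(2.25));
}
```

The point of the proposal would be to get these semantics without paying the asm-barrier cost on every operation, by teaching the optimizer about the environment instead.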


LLVM optimizes floating-point operations under the assumption that the default floating-point environment is in effect (e.g., it uses the default rounding mode when const-propagating). More importantly, the compiler is allowed to move floating-point operations to before the point where you changed the floating-point environment, since floating-point operations are considered pure.
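A small sketch of the const-propagation half of that (nothing here actually changes the environment, so the program itself is well-defined; the comments describe what the optimizer is entitled to assume):

```rust
fn main() {
    // LLVM may evaluate this division at compile time, using the default
    // round-to-nearest mode, and bake the resulting constant into the binary.
    let third = 1.0f64 / 3.0;

    // Even if MXCSR had been switched to, say, round-toward-zero at run time
    // before this line, the folded constant would not change: the compiler
    // assumed the default environment when it did the fold. That mismatch is
    // why a runtime mode change is declared UB rather than merely ignored.
    println!("{third:.17}");
}
```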

See also Is Changing the Floating-point environment for intrinsic/assembly code UB? · Issue #471 · rust-lang/unsafe-code-guidelines · GitHub