Pre-RFC: Dealing with broken floating point


LLVM bug 6050 “floating-point operations have side effects” is at least close. It’s about floating point exceptions rather than sNaNs, but I imagine the issues are closely related.

I’ve encountered that issue in another (non-Rust) project where code broke when compiled using clang.


So today I learned about cfg(target_feature="..."). I’m kinda surprised that neither I nor anyone else in this thread knew about it; it has existed since July 2015, although it’s unstable. This attribute already covers pretty much everything the hypothetical cfg(target_float="...") would have covered (other than libm stuff, which was more of a random side thought), and has the advantage of already existing. It also has the disadvantage of being so naive as to be basically useless, but whatever, that can be fixed.
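For concreteness, a minimal sketch of how cfg(target_feature = "...") selects code at compile time. "sse2" is a real x86 feature name; the function and its return strings are my own illustration, not anything from the attribute’s docs:

```rust
// Compile-time dispatch on a target feature (unstable at the time of
// this thread). Which branch is compiled depends on the build flags,
// e.g. -C target-feature=+sse2.
#[cfg(target_feature = "sse2")]
fn float_backend() -> &'static str {
    "hardware SSE2 floating point"
}

#[cfg(not(target_feature = "sse2"))]
fn float_backend() -> &'static str {
    "x87 or soft-float fallback"
}

fn main() {
    println!("float backend: {}", float_backend());
}
```

Note that exactly one of the two definitions exists in any given build, which is precisely why the attribute can stand in for a hypothetical cfg(target_float="...").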

The cfg(float_is_broken) alternative, on the other hand, seems less and less attractive every time I look at it. @wthrowe describes how LLVM will foil attempts at strict IEEE compatibility, the flag is a very coarse thing anyway, and now the problem it was intended to address already has another solution.

To me, that settles the topic I set out to address with this thread, though this thread has also generated other good ideas, like using openlibm!


Perhaps not directly related, but has anyone looked into a unum library for Rust?

Unum Computing: An Energy Efficient and Massively Parallel Approach to Valid Numerics


I have, after reading through the whole of “The End of Error” (I’d recommend it even if his plans have changed).

The problem with the unum 1.0 design is that to efficiently implement the scratchpad in software, you need a lot of type-level numerical computation, which Rust doesn’t have right now.
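To illustrate what “type-level numerical computation” means here, a Peano-style sketch in today’s Rust. This is illustrative machinery only, not the unum 1.0 scratchpad design itself:

```rust
// Type-level naturals and addition: the sort of compile-time numeric
// machinery a software unum scratchpad would lean on (illustrative).
use std::marker::PhantomData;

struct Zero;
struct Succ<N>(PhantomData<N>);

// Reflect a type-level number back into a runtime value.
trait Nat {
    const VALUE: usize;
}
impl Nat for Zero {
    const VALUE: usize = 0;
}
impl<N: Nat> Nat for Succ<N> {
    const VALUE: usize = N::VALUE + 1;
}

// Addition computed entirely in the type system.
trait Add<B> {
    type Sum;
}
impl<B> Add<B> for Zero {
    type Sum = B;
}
impl<N, B> Add<B> for Succ<N>
where
    N: Add<B>,
{
    type Sum = Succ<N::Sum>;
}

type Two = Succ<Succ<Zero>>;
type Three = Succ<Two>;
type Five = <Two as Add<Three>>::Sum;

fn main() {
    println!("2 + 3 = {}", <Five as Nat>::VALUE);
}
```

Doing real arithmetic on exponent and fraction sizes this way gets unwieldy fast, which is the point being made above.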

unum 2.0 is a significantly simplified design. However, my hunch is that they tried to do a hardware implementation, or at least considered it, and found several limitations or points of unnecessary complexity which they addressed.

While an implementation based on lookup tables may work well in hardware, it can be difficult to optimize in software. Still, they may be useful for comparing with a hardware (FPGA?) implementation, for validity and benchmarking.
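As a toy illustration of table-driven arithmetic: the 4-bit value set and the wrap-around “multiply” below are made up for the sketch, not the actual unum 2.0 tables:

```rust
// Precompute every product for a tiny 4-bit value set, then every
// "multiply" is a single table lookup. This is cheap in hardware, but
// in software the table's cache footprint competes with an FPU that
// multiplies natively.
const N: usize = 16;

fn build_mul_table() -> [[u8; N]; N] {
    let mut table = [[0u8; N]; N];
    for a in 0..N {
        for b in 0..N {
            table[a][b] = ((a * b) % N) as u8; // wrap to stay in 4 bits
        }
    }
    table
}

fn main() {
    let table = build_mul_table();
    println!("7 * 9 mod 16 = {}", table[7][9]);
}
```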

I haven’t had the time to put into an actual implementation (other than some old sketches I hadn’t pushed to GitHub, before realizing the type-level complexity involved), but I’ve discussed several points on IRC (#rust-offtopic).

I also got the unum crate registered (in retrospect, prematurely).

If anyone wants to approach further development, I’d be glad to help with whatever I can.


After looking at the presentation and thinking a few hours about it:

It isn’t anything new. It’s basically the laws of exponents, writing exp(x) for the exponent of x:

- exp(a · b) = exp(a) + exp(b)
- exp(a^n) = n · exp(a)

The advantage: multiplication/division and exponentiation are simple (constant/linear error). The disadvantage: addition and subtraction are horrible (exponential error).
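A quick numeric illustration of that asymmetry, using plain f64 logarithms just to show the shape of the cost (the helpers are mine, not part of any unum design):

```rust
// In a log-domain representation, multiplication is one addition of
// logs, while addition forces a round trip out of the log domain.
fn log_mul(a: f64, b: f64) -> f64 {
    (a.ln() + b.ln()).exp() // cheap: a single add in the log domain
}

fn log_add(a: f64, b: f64) -> f64 {
    // expensive: must convert back, add, and re-take the log
    (a.ln().exp() + b.ln().exp()).ln().exp()
}

fn main() {
    println!("3 * 5 ≈ {}", log_mul(3.0, 5.0));
    println!("3 + 5 ≈ {}", log_add(3.0, 5.0));
}
```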

Also the “Memory Wall”: you don’t read 64 bits from memory; your CPU reads entire cache lines. And the CPU contains a few more units than just multiply-add. At least mine does.

A problem that “breaks unum math”: anything with addition. Try the Fibonacci sequence forwards and backwards: you will not get back to 1. Not even close.
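That claim is easy to reproduce even in ordinary f32, where rounding kicks in once the values pass 2^24 (around fib(37)); the sketch below is my own illustration of the effect:

```rust
// Run the Fibonacci recurrence forward n steps in f32, then invert it
// (x[n-1] = x[n+1] - x[n]) for n steps. Exact arithmetic would return
// 1.0; each rounding error is amplified by roughly the golden ratio
// per backward step.
fn fib_round_trip(n: u32) -> f32 {
    let (mut a, mut b) = (1.0f32, 1.0f32);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    for _ in 0..n {
        let prev = b - a;
        b = a;
        a = prev;
    }
    a
}

fn main() {
    println!("20 steps: {}", fib_round_trip(20)); // still exact
    println!("40 steps: {}", fib_round_trip(40)); // nowhere near 1
}
```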

Two intersecting lines -> simple algebra -> exact solution.
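For contrast, the exact-algebra route for the two-lines case; the helper and the test coefficients are my own illustration:

```rust
// Intersect a1*x + b1*y = c1 with a2*x + b2*y = c2 by Cramer's rule.
fn intersect(a1: f64, b1: f64, c1: f64, a2: f64, b2: f64, c2: f64) -> Option<(f64, f64)> {
    let det = a1 * b2 - a2 * b1;
    if det == 0.0 {
        return None; // parallel or coincident lines
    }
    Some(((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det))
}

fn main() {
    // x + y = 3 and x - y = 1 meet at (2, 1).
    println!("{:?}", intersect(1.0, 1.0, 3.0, 1.0, -1.0, 1.0));
}
```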

Okay, enough of this vapourware. (And sorry for the massive amount of typos… 3am.)


One thing I don’t see addressed in this thread, and which could potentially cause some of the approaches suggested to founder: which version of IEEE 754? 1985 or 2008? What about the errata Wikipedia states were being worked on in 2015? This is not an idle question: many CPU architectures (such as x86_64) define their behavior by the 1985 standard, but at the very least RISC-V and AArch64 use the 2008 standard.

In addition, on the topic of unums, I suggest reading this paper arguing against them: