Pre-RFC: Dealing with broken floating point

The IEEE 754 floating point standard enjoys widespread acceptance and broad hardware support. It is so ubiquitous that Rust has (rightly) opted to define f32 and f64 as conforming to this standard. It’s a great standard, with much thought put into it and many useful guarantees. Unfortunately, the standard is more what you’d call “guidelines” than actual rules. Too many implementations take liberties with the tight precision specified by the standard, with the behavior in edge cases, with the support for subnormal numbers, with the set of supported operations, and so on.

The consequence is that even relatively simple numeric algorithms can give significantly different results on different platforms. Sometimes even basic arithmetic operations are incorrectly rounded. More commonly, subnormal numbers are treated as zero and subtleties like signed zeros and NaNs are “simplified”. Most of the time these things don’t matter, but when they do, it can be really painful.

Although the picture on modern desktop hardware is pretty okay, there are still many circumstances where one can encounter such problems in 2015. See the appendix for a non-exhaustive list of examples, but I figure most people are at least vaguely aware that this is a thing that happens (which is why it’s shoved in an appendix).

So what can we do about it? Not much. Such is the reality of writing cross-platform floating point code and Rust can only solve so many impossible problems at once. Besides, a lot of code mostly works despite these imperfections. However, we can give users the tools to work around problems if and when they face them.

The first such user would be libcore, by the way: the string to float conversion has a fast path that crucially depends on the accuracy of multiplication and division to fulfill its promise of correctly rounded results. (The result would occasionally be off by one bit, on a platform where all other operations are already occasionally inaccurate! Oh the humanity!)
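
To sketch that dependence (a simplification of the idea, not the actual libcore code, and assuming correctly rounded f64 multiplication and division):

```rust
// Simplified sketch of the parsing fast path (not the real libcore code).
// If the decimal mantissa and the power of ten are both exactly representable
// as f64, a single correctly rounded multiplication or division yields the
// correctly rounded result. On a target where * and / are not correctly
// rounded (e.g. x87 double rounding), this argument no longer holds.
fn fast_path(mantissa: u64, exp10: i32) -> Option<f64> {
    // Mantissas up to 2^53 and powers of ten up to 10^22 are exact as f64.
    if mantissa > (1u64 << 53) || exp10.abs() > 22 {
        return None; // fall back to the slow, always-correct algorithm
    }
    let m = mantissa as f64;
    let mut p = 1.0_f64;
    for _ in 0..exp10.abs() {
        p *= 10.0; // exact: every power of ten up to 1e22 is representable
    }
    Some(if exp10 >= 0 { m * p } else { m / p })
}
```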

The obvious solution goes through cfg, but how exactly?

Proposed solution: #[cfg(target_float="foo")]

This mirrors the existing target_* cfg values, especially target_env. I like that because it allows targeted workarounds for specific problems of specific platforms. Possible values might be: soft_float (which can mean something different depending on target arch and OS), x87, sse2, vfp, neon, etc. and possibly something about the libm used (perhaps in a second cfg value). It doesn’t have to encode every possible target platform (especially since we already have target_arch and target_os) but it should distinguish significantly different floating point implementations within one arch+OS combination.

This would offer a more canonical and more robust solution than everyone creating and passing their own custom --cfg flags or inspecting only the arch and the OS. For example, if libcore float parsing wanted to be strictly correct, it would currently have to disable the fast path on any 32-bit x86 target, even though all in-tree targets do have SSE2 support. The compiler can more easily tell what code LLVM will generate, and it hopefully knows which support libraries (e.g. soft float, libm) will be linked in.
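
For illustration, the guard could look something like this; the target_float name and the "x87" value are hypothetical, since none of this exists yet:

```rust
// Hypothetical cfg: neither `target_float` nor its values exist today.
// Pick an algorithm depending on the floating point implementation.
#[cfg(not(target_float = "x87"))]
fn use_fast_path() -> bool {
    true // mul/div are correctly rounded, the fast path is safe
}

#[cfg(target_float = "x87")]
fn use_fast_path() -> bool {
    false // double rounding possible, always take the slow correct path
}

fn main() {
    println!("fast path enabled: {}", use_fast_path());
}
```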

It could also be “abused” for disabling floating point support with a value of target_float="none". This is particularly interesting for libcore, which might be used in applications that don’t have and don’t want any float support at all (not even soft float). Issue #27702 is tangentially related. Right now one can compile and use libcore without libm and just not use the remaining float-related code (e.g. parsing and formatting), but it’s a lot of redundant code being included in the binary and not necessarily being optimized out.

Alternative: #[cfg(float_is_broken)]

This is a drastic statement and, strictly speaking, this flag would have to be set on almost any target. There’s not much one can do with this information, we’d have to restrict what we mean by “broken”. For example, the obvious choice on x86 is to enable it when x87 code will be generated and disable when you have SSE2 support. Still, it’s under-specified and if the threshold for “broken” is set “incorrectly” the attribute could become useless. I don’t like this alternative very much, but it’s a possibility and has been thrown around before (e.g. in IRC discussion between me and @arielb1 and @cmr).

Thoughts?

In particular, are there other alternatives? Do you strongly prefer one of the alternatives above, and if so, why?

Appendix

  • Old x86 hardware (roughly, pre-Pentium 4) doesn’t have SSE and SSE2 and thus requires using the x87 instruction set. Although this FPU can be configured to do otherwise, it will normally calculate and round to 80 bits of precision, which can occasionally result in wrong rounding compared to doing all intermediate steps with f64. Changing this is an ABI-breaking change. If we generate code using x87 instructions, we’re pretty much bound to 80 bit precision. I don’t think we can change this aspect of the ABI without adding overhead to C FFI calls.

  • Actual living, breathing people are taking great pains trying (and so far, failing) to run Rust on i386 hardware.

  • Even on current hardware, x87 is still used by default by some compilers and some Linux distributions. Clang and rustc will assume SSE2 unless explicitly instructed otherwise, but Debian for example supports pre-P4 hardware. This was also mentioned in Perfecting Rust Packaging. I don’t know if an eventual Debian rustc package would change code generation to use x87, but in any case it’s conceivable that some packager would do this.

  • Moving on from x86, support for subnormals is very slow (and needs software support hooks) on some (possibly many) ARM chips. Consequently, phones based on those chips treat subnormals as zero. I found nothing about Android specifically, but I didn’t look very hard and would be surprised if they disabled flush-to-zero. Cursory googling indicates that (some) MIPS hardware has similar troubles. I also know that early (ca. 2008) CUDA hardware did the same.

  • Transcendental functions are usually implemented purely or mostly in software, and consequently their accuracy varies by vendor/library. I don’t know off hand if any blatantly miss the accuracy demanded by the standard, but it’s another source of cross-platform differences that can cause serious problems.

  • In general, LLVM will not break too much code that relies on subtle IEEE semantics unless explicitly instructed to (-ffast-math), but it’s far from perfect. The GCC wiki lists some things that will not be respected in its default mode; I assume it’s similar or worse for LLVM.


Java has a strictfp keyword to mark classes that need strict IEEE 754 math (within those classes, slower but “correct” (for some value of correct) operations will be issued by the JVM). They’ve apparently put a lot of engineering resources into this, but I don’t know many who use it.

With that out of the way, I find boolean distinctions like “is_broken” quite useless. Unless we get more information about the breakage, we cannot do anything about it anyway. And as you outline, there are many ways for IEEE 754 implementations to break things. It probably needs a good deal of research to even identify most of them. So my suggestion: we should look into applications and see what guarantees they require. Then we can see if implementations provide those and what we can do to rectify the problems on those that don’t.

That would be great, but I think there is not enough volunteer time and interest to do this properly, once and for all, in the Rust project. On the other hand, documenting which floating point implementation will be used is relatively little work on our end, and doesn't make many commitments, but it gives interested third parties enough information to work around most problems. It's the "80% of the results with 20% of the effort" route. Or more like 40% of the results with 5% of the effort.

To be clear, I would be delighted if Rust could solve even a small percentage of these problems. I'm just not convinced that it is a realistic goal.

We should be able to address issues with transcendental functions by just providing our own correct libm implementation, which would be consistent across platforms. Not particularly high on the priority list, but there isn't any particular reason we're using the system libm other than laziness. (IIRC, IEEE 754 says most transcendental functions are supposed to be within 1 ULP, in deference to the table-maker's dilemma.)


target_float seems sort of useless; as a user, I don't really care what instructions floating-point arithmetic uses, but rather whether I can expect correct results. I mean, sure, neon implies denormals are flushed, but whether vfp does depends on the target ABI, so that isn't really useful.

We could provide CFG flags for specifically float_80_bit and float_flush_subnormals, but except in obscure cases I'm not sure what we would actually expect the user to do about them. We can't safely allow the user to actually manipulate the floating-point environment (to, for example, turn flushing subnormals on or off) until LLVM gains support for it.


You make this sound trivial!

SLOCCount finds 46,055 lines in glibc/sysdeps/ieee754/ alone, estimating 11.16 Person-Years of development effort. And I'd guess proper floating point math is even harder than average development.

“providing our own correct libm implementation” doesn’t mean it has to be written in Rust… we could potentially use https://github.com/JuliaLang/openlibm.


Which one can test by also consulting target_os and possibly other such cfg values, right?

Yeah, that particular option is not possible. But there are other things one can do with that knowledge:

  • Use different algorithms that are more robust to rounding errors, less likely to produce subnormals, etc. (which is what float parsing in libcore would do).
  • Adjust epsilons/tolerances because you noticed that your simulation sometimes explodes with EPS = 1e-10 but seems to go smoothly with EPS = 1e-9 (see the sketch after this list).
  • Turn off unit tests that are bound to fail (e.g. because they test behavior on subnormal inputs).
  • Other hacky workarounds.
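
To sketch the second and third bullets (the cfg names and values are again hypothetical):

```rust
// Hypothetical cfg names and values; nothing here exists in rustc today.
// Widen the tolerance where excess precision / double rounding is expected.
#[cfg(target_float = "x87")]
const EPS: f64 = 1e-9;
#[cfg(not(target_float = "x87"))]
const EPS: f64 = 1e-10;

fn main() {
    println!("using EPS = {}", EPS);
}

// Skip a test on targets known to flush subnormals to zero.
#[test]
#[cfg_attr(target_float = "neon", ignore)]
fn subnormals_are_preserved() {
    let tiny = std::f64::MIN_POSITIVE / 2.0; // a subnormal value
    assert!(tiny > 0.0);
}
```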

I'm not expecting people to independently come up with principled solutions to the entirety of "denormals flush to zero"; I just want to help them with applying their ad-hoc workarounds for their specific bugs. In any case, it's quite possible that feature-based flags like float_80_bit and float_flush_subnormals are more helpful than exposing every little detail of the target environment.


I second the use of openlibm. It seems to be the highest-quality libm alternative available under liberal licenses. In the past I have also considered the use of metalibm, which can be tweaked to directly generate Rust code, but its licensing state is unclear and I have yet to receive a reply to my request to clarify the matter.

Perhaps we can nick a solution to a part of those problems from the Haskell folks. Herbie is a compiler plugin that converts FP calculations to numerically stable (but slower) code. I’ve already set up an issue at clippy, perhaps we can at least lint for unstable calculations.

That is not to say such calculations are erroneous, but one should think about the error bounds one is willing to accept given certain ranges of inputs, and being shown where those calculations may fail can lead to valuable insight.
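
As a concrete example of the kind of rewrite involved (my illustration, not actual Herbie output): for large x, computing sqrt(x + 1) - sqrt(x) directly cancels almost every significant digit, while an algebraically equivalent form stays accurate.

```rust
// Naive form: catastrophic cancellation for large x.
fn diff_naive(x: f64) -> f64 {
    (x + 1.0).sqrt() - x.sqrt()
}

// Equivalent form after the rewrite: numerically stable.
fn diff_stable(x: f64) -> f64 {
    1.0 / ((x + 1.0).sqrt() + x.sqrt())
}

fn main() {
    let x = 1e16_f64;
    println!("naive:  {}", diff_naive(x));  // prints 0 -- every digit lost
    println!("stable: {}", diff_stable(x)); // prints ~5e-9, the right answer
}
```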

I was working on a project where reproducibility of results across platforms was important. We ended up enforcing use of SSE2 and excluding the very few users with older CPUs (we never targeted non-x86 architectures anyway). For that application at least, requiring SSE2 was sufficient and not too disruptive.


How about a library that tests at runtime for all the various known failures of FP implementations? Then any code which cares about that can check the conformance status of the CPU it’s running on, and fail early.
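
A minimal sketch of one such probe (illustration only, not an existing crate):

```rust
/// Returns true if this machine flushes subnormal results to zero.
/// A real conformance checker would probe many more cases
/// (rounding of basic operations, NaN handling, signed zeros, ...).
fn flushes_subnormals() -> bool {
    // black_box keeps the optimizer from doing this division at compile time,
    // so we observe what the hardware actually does at runtime.
    let tiny = std::hint::black_box(std::f64::MIN_POSITIVE);
    tiny / 2.0 == 0.0
}

fn main() {
    if flushes_subnormals() {
        eprintln!("warning: this platform flushes subnormals to zero");
    }
}
```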


There should be a way of knowing a CPU is not bit-for-bit IEEE 754 compliant, for when reproducible results are desired (in that case, the only solution is to refuse to compile, but that’s much better than subtle bit errors).

If a processor flushes denormals to zero, one must also be careful not to accidentally divide by that zero and crash the program.

Is Rust code compiled for SSE2 hardware actually bit-for-bit IEEE 754 compliant? I can't strongly assert one way or another, but I would not be surprised if there are some relatively obscure edge cases that are not covered, even ignoring things like rounding modes that aren't exposed to Rust programs anyway.

Dividing by 0.0 does not crash anything; it silently produces ±∞ (or NaN). In the default configuration, that is, but like rounding modes this cannot be changed from Rust.
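
For reference, the default non-trapping behavior you get from Rust:

```rust
fn main() {
    let zero = 0.0_f64;
    assert_eq!(1.0 / zero, std::f64::INFINITY);
    assert_eq!(-1.0 / zero, std::f64::NEG_INFINITY);
    assert!((zero / zero).is_nan()); // 0/0 is NaN; still no trap, no crash
}
```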

See also the longest GCC bug report (AFAIK).

Well, it might be nice to have a compiler option to do FP operations in software when targeting non-compliant architectures. It might also be nice to have a standard library function to check for known deviations from IEEE 754.

But like the GCC bug report says, this problem is never going to be fixed.

This is why we need to provide our own “libm” instead of using the system’s! There’s already an RFC for using openlibm with Rust: https://github.com/rust-lang/rfcs/issues/711.

From that GCC bug:

(5) A radical solution: Find a job/hobby where computers are not used at all.

LOL

@dhardy wrote:

But like the GCC bug report says, this problem is never going to be fixed.

The problem was fixed; see comment 127, and Joseph Myers' post. (Not that I can blame you for not reading that far!)

I think it's important to note that GCC bug 323 was actually two separate problems rolled up into one very ugly ball:

  • Until SSE2 (Pentium 4, 2001), the Intel 32-bit architecture had no way to do math with IEEE double precision in registers; to get a true double result, you had to spill to memory, which was slow. If you were expecting IEEE double results, you'd be disappointed. But the C standard permitted this excess precision for temporary results, so this would have been your mistake.

  • When and whether GCC truncated values to IEEE double depended on the optimizer's decisions: it simply wasn't predictable to the programmer at all. It occurred in places not permitted by the C standard. You couldn't assume that (x/y) == (x/y) was true.

The first problem is something it makes perfect sense for Rust to address: a lot of the fast floating-point support out there is not quite IEEE; if we can tell people when frequently-problematic quirks are present so they can work around them, that'd be great.

The second problem is just flat-out unworkable behavior for a compiler.


What’s the status of this?

I am unsure which option, if any, is good enough to move forward. Also I am still hoping someone can settle this question:

I vaguely recall learning that LLVM always ignores (and thus breaks) signalling NaNs in its optimizations. I don't have a source though, and I don't know if the behavior that's broken is technically part of IEEE 754. Furthermore, even if it's part of the standard, I don't know if anyone cares at all. sNaNs are really obscure and you can't even produce them without either deliberately constructing one through bit fiddling or by setting some FPU flags via inline assembly.
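
For context, "deliberately constructing one through bit fiddling" looks like this (illustration only; whether operations on it then behave per IEEE 754 is exactly the open question):

```rust
fn main() {
    // f64 signalling NaN: exponent all ones, quiet bit (top mantissa bit)
    // clear, at least one other mantissa bit set.
    let snan = f64::from_bits(0x7FF0_0000_0000_0001);
    assert!(snan.is_nan());
    // Per IEEE 754, using an sNaN should raise the invalid-operation flag and
    // produce a quiet NaN; whether LLVM-compiled code preserves that is the
    // question above.
    let result = snan + 1.0;
    assert!(result.is_nan());
}
```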

However, one should file an issue for the openlibm thing. It sounds pretty uncontroversial since it's almost no extra work and reduces cross-platform differences (I think libstd has some hacks caused by MSVC's libm being weird, so those could disappear as well).