Fixed Point Arithmetic support


#1

Fixed point arithmetic is a numeric type for representing real numbers that has a fixed number of digits before and after the radix point (i.e. the decimal point). It is typically implemented using integer math operations. It has a few advantages over using floating point:

  1. Hardware Support

There are still chips out there without a hardware FPU. However, I am pretty sure every practical microprocessor has hardware support for integer math.

  2. Potential Cross-Platform Consistency

Read this for advice and opinions on making floating point behave consistently across platforms. Rust even has an RFC on the subject. The gist is: here be VERY large and VERY hungry dragons who will devour you VERY quickly.

My gut tells me that integer arithmetic operations are much more consistent across platforms (if this is not the case, please tell me). If, on some architecture, 0b010 + 0b010 = 0b101, everyone would say, “AHHH!! We’re living in an Orwellian nightmare!”

  3. The Far Lands

Because of the way floating point handles precision, calculations involving numbers further from the origin may not have the precision of numbers closer to the origin. Game engines tend to solve this problem by moving the origin around, but that is an ugly hack. If a result is within the supported range of a fixed point number, it is just as precise anywhere in that range.
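To make the mechanics concrete, here is a minimal Q16.16 sketch (16 integer bits, 16 fractional bits); all names and the format choice are illustrative, not from any existing crate. Note that addition is plain integer addition, while multiplication widens to i64 before shifting back down:

```rust
// Q16.16 fixed point: the raw i32 holds (real value * 2^16).
const FRAC_BITS: u32 = 16;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Fx32(i32);

impl Fx32 {
    fn from_f64(x: f64) -> Self {
        Fx32((x * (1i64 << FRAC_BITS) as f64) as i32)
    }
    fn to_f64(self) -> f64 {
        self.0 as f64 / (1i64 << FRAC_BITS) as f64
    }
    // Addition is ordinary integer addition on the raw values.
    fn add(self, other: Fx32) -> Fx32 {
        Fx32(self.0.wrapping_add(other.0))
    }
    // Multiplication widens to i64, then shifts the extra 2^16 factor out.
    fn mul(self, other: Fx32) -> Fx32 {
        Fx32(((self.0 as i64 * other.0 as i64) >> FRAC_BITS) as i32)
    }
}

fn main() {
    let a = Fx32::from_f64(1.5);
    let b = Fx32::from_f64(2.25);
    assert_eq!(a.add(b).to_f64(), 3.75);  // 1.5 + 2.25, exact in Q16.16
    assert_eq!(a.mul(b).to_f64(), 3.375); // 1.5 * 2.25, exact in Q16.16
}
```

Because the step size is a constant 2^-16 everywhere, any representable result has the same absolute precision regardless of how far it is from zero.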

So, why should this be in the language? Why can’t it be in a crate? AFAIK a crate cannot do this:

let pi = 3.1415_fx32;

let e: fx32 = 2.71828;
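For comparison, the closest a crate can get today is a macro or constructor function; the conversion happens through an ordinary expression, not a true literal. The `fx32!` macro and Q16.16 encoding here are hypothetical:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fx32(i32); // raw value = real value * 2^16

// A macro approximates literal syntax, but the suffix forms
// `3.1415_fx32` / `let e: fx32 = 2.71828;` remain impossible in a crate.
macro_rules! fx32 {
    ($x:expr) => {
        Fx32(($x * 65536.0) as i32)
    };
}

fn main() {
    let pi = fx32!(3.1415); // versus the proposed 3.1415_fx32
    let half = fx32!(0.5);
    assert_eq!(half.0, 32768); // 0.5 * 2^16
    assert!(pi.0 > 0);
}
```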


#2

Lack of support for literals has also frustrated my attempt to make a fast/imprecise/non-NaN floating point type (I wanted it for my pixel manipulations which don’t need to be as precise or reliable as academic nuclear simulators).

+1 for ability to interoperate with numeric literals.

Could it be a trait similar to operator overloading?


#3

Another use-case for custom literals would be totally-ordered floats which would implement Eq, Ord and Hash.
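Such a wrapper is straightforward once NaN is excluded, since the partial order on f64 becomes total. A minimal sketch (crates like noisy_float and ordered-float do this properly; the `NotNan` name here is illustrative):

```rust
use std::cmp::Ordering;
use std::hash::{Hash, Hasher};

// A float wrapper whose constructor rejects NaN.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
struct NotNan(f64);

impl NotNan {
    fn new(x: f64) -> Option<NotNan> {
        if x.is_nan() { None } else { Some(NotNan(x)) }
    }
}

// With NaN ruled out, the f64 partial order is total.
impl Eq for NotNan {}

impl Ord for NotNan {
    fn cmp(&self, other: &Self) -> Ordering {
        self.partial_cmp(other).unwrap()
    }
}

impl Hash for NotNan {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Normalize -0.0 to +0.0 so values that compare equal hash equally.
        (self.0 + 0.0).to_bits().hash(state);
    }
}

fn main() {
    assert!(NotNan::new(f64::NAN).is_none());
    let mut v = vec![NotNan::new(2.0).unwrap(), NotNan::new(1.0).unwrap()];
    v.sort(); // requires Ord, impossible with bare f64
    assert_eq!(v[0].0, 1.0);
}
```

The pain point is the same as with fixed point: every constant has to go through `NotNan::new(…).unwrap()` instead of a literal.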

Could it be a trait similar to operator overloading?

Something like this would probably work:

Standard library:

// T should be one of f*, i*, or u* primitive types
trait Literal<T> {
    // `value` rather than `in`, since `in` is a reserved keyword
    fn literal(value: T) -> Self;
}

Other library:

struct fx32(u32);

impl Literal<f64> for fx32 {
    fn literal(value: f64) -> Self {
        ...
    }
}

User code:

use other_crate::fx32;

0.5_fx32
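Filling in the elided conversion, the sketch above compiles and runs today if the desugaring the compiler would perform (`0.5_fx32` ⇒ `Literal::literal(0.5)`) is written by hand; the Q16.16 encoding is an arbitrary illustrative choice:

```rust
trait Literal<T> {
    fn literal(value: T) -> Self;
}

#[allow(non_camel_case_types)]
struct fx32(u32);

impl Literal<f64> for fx32 {
    // Interpret the f64 as Q16.16 fixed point (illustrative choice).
    fn literal(value: f64) -> Self {
        fx32((value * 65536.0) as u32)
    }
}

fn main() {
    // Today this call must be explicit; under the proposal,
    // `0.5_fx32` would expand to exactly this.
    let half: fx32 = Literal::literal(0.5);
    assert_eq!(half.0, 32768); // 0.5 * 2^16
}
```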

#4

I understand why literals are desirable, but may I suggest a library implementation as a first step regardless? It would allow people to gain experience with it (and the performance impact) without being blocked on the (slow) process of changing the language.


#5

There are at least three fixed-point arithmetic libraries on crates.io already (fix, fpa, fp). We don’t lack library implementations.


#6

Oh, thanks. I really should have checked for myself.

Looking at download counts and reverse dependencies, none of them seem at all popular. Over their lifetime (5/7/9 months respectively), these three libraries have received less than 800 downloads put together. This really makes me question[1] whether fixed point arithmetic is really widely used/desired enough to be put into the language (or to motivate language changes such as overloadable literals). Admittedly, the aforementioned numbers are only part of the picture – being in the language would certainly greatly boost usage numbers, there are probably other projects hand-rolling their own fixed-point arithmetic, etc. – but I don’t believe that changes the conclusion much.

[1] Or, to be honest, it confirms my suspicion that fixed-point arithmetic is a pretty niche thing.


#7

The other way to look at it would be that very few people use them, because Rust has usability problems that strongly discourage use of such libraries.

I wrote a fast-float library which I’m not even using. Not because I don’t want to, but because it’s so annoying to use it.


#8

Agree with @kornel. A very common (subjectively) complaint about Rust is that f32/f64 is not Ord. The usual answer for this is to implement an f32/f64 wrapper which does not admit NaNs. There’s even a crate for this, https://crates.io/crates/noisy_float, which seems like it should be used a lot, but which is hardly used at all :frowning:

Another interesting point is that custom literal suffixes seem like a mostly front-end feature, which should not add much to the language's complexity or backward-compatibility burden.


#9

Yes, we all agree that counting the users of the crates undercounts how many people would use it if it was in the language. But this is not unambiguously a point in favor of putting it in the language. For one, there’s really no way to tell how large this effect is (i.e., how widely used it would be if it was in the language). It also means there’s very little hands-on experience with such a type, which would be really useful for exploring the design space before baking anything into the language.

Furthermore, note that ordered-float (which encompasses multiple solutions to the problem @matklad mentions) has 133k downloads over ~ three years. That’s an order of magnitude more downloads per unit time than all three fixed point crates put together, and a significant amount of real world experience in absolute terms. This crate has the same problem with literals, but somehow it achieved more usage than the libraries mentioned here. So I don’t think the small usage numbers of the other crates can only be attributed to the pain of using them without literals.


Being syntax-related doesn’t make language design easier, see Wadler’s law :upside_down_face:

Joking aside, this is far from a trivial feature. There’s quite a large design space even just for the implementation strategy (off the top of my head: procedural macros, const fns, run-time call to a non-const function, piggy-back on existing types as @kryptan suggested above). Then there’s questions such as:

  • how to namespace/import?
  • explicit suffixes (as in C++) and/or inference?
  • if there are explicit suffixes, what is the lexical syntax for them?
  • which kinds of literals can be overloaded (string literals? char literals? C++ has those)?
  • assuming there is inference, how does it interact with the fallback to i32/f64?
  • if string/char literals (which aren’t polymorphic so far) can be overloaded, will this cause type inference regressions?
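For reference, the existing fallback that any inference scheme would have to coexist with is easy to observe: an otherwise-unconstrained integer literal defaults to i32, and a float literal to f64.

```rust
fn main() {
    let x = 1;   // no other constraint: falls back to i32
    let y = 1.0; // no other constraint: falls back to f64
    assert_eq!(std::mem::size_of_val(&x), 4); // i32 is 4 bytes
    assert_eq!(std::mem::size_of_val(&y), 8); // f64 is 8 bytes
}
```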

And there is a forwards compatibility issue with future built-in literal suffixes – e.g., if 1u256 could be a custom/library-defined literal, that would restrict our ability to introduce a built-in u256 type in future Rust versions.