There are several alternative ways to define the mantissa bits (with/without the implicit leading 1 and sign bit), and the only important thing is that the choice is consistent between f32 and f64, which it is.

The constant value obviously cannot be changed, for compatibility reasons. And there probably isn't much point in defining other flavors, since you can obtain them by adding or subtracting 1, and the number of mantissa bits is only useful in very niche cases.

Also, the Rust values are the ones that are listed most prominently on Wikipedia, so not an unreasonable choice.

However, I think the documentation can be improved to make it clear that it's a value that includes either the implicit one bit or the sign bit.

Like, it isn't wrong. 53 is certainly the number of significant digits in the floating point value. But this is such an unconventional way to define this constant.

Imagine you had a format that was always 1 × 2^{n}. Would you call it 1 bit of precision or 0 bits of precision? I think 1 is better because there is some precision there.

But it's not called PRECISION, it's called MANTISSA_DIGITS!

Alright, so. Any set of constants that includes something called MANTISSA_DIGITS would almost certainly also include an EXPONENT_DIGITS. Clearly, the word "exponent" in this name must be referring to the representation of the exponent in the binary encoding (i.e. the biased integer), so why would the word "mantissa" not also refer to the representation of the mantissa?

@ExpHP read over his finished post. It looked nice, but something was missing.

Ah! Yes! We should link EXPONENT_DIGITS to the corresponding const in rust!

.......wait. That's weird. There isn't one.

Well surely my theory must be correct about other languages. Let's try C++!

It depends which field from IEEE 754-2008 Table 3.5 you think matters. For binary formats, IEEE 754 uses a "sentinel" exponent value to indicate that the first digit of the significand is 0, otherwise it's always 1. There's a second sentinel used to indicate that this is either an infinity or a NaN, depending on the remaining bits of the number.

In this case, f64::MANTISSA_DIGITS corresponds to p (precision in bits) in the table, rather than t (trailing significand field width in bits). That is, it's the number of bits in the mantissa in the abstract, as opposed to the number of mantissa bits that actually get stored in the binary format; the difference comes from the encoding trick with the exponent that gives you one "hidden" bit of mantissa that's implied rather than stored. But since that trick is known to simply let you encode one bit of the abstract mantissa in the exponent field, getting t when you have p is trivial.

I've never been so disturbed by a philosophy of numbers question.

It seems that the answer has to be 0, because n has to be the exponent, and you learn nothing by looking at the value except the exponent; equivalently, the concrete representation would only have bits for n. But on the other hand, every value is representable to within one bit by such a format (assuming infinite n is permitted), so it also seems it has to be 1!

I think the issue is that in decimal scientific notation the mantissa can be in [1,10), specifically excluding [0,1). With p digits of precision in base b, this gives you (b - 1) × b^{p-1} possible mantissas, which most of the time can be treated as roughly b^p, but as that count approaches 1 you have to start distinguishing bits of information in the mantissa from digits of precision.

For example, with one digit of precision in base 10 you have only 9 possible values in the mantissa with a little over three bits of information, in base 9 you have 8 possible values with three bits of information, or (base - 1) possible values with log_2(base - 1) bits of information in general. At the limit of base 2, there's only one value, which is 0 bits of information.
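That single-digit case can be sketched in a few lines of Rust (the function name is made up for illustration):

```rust
// One digit of precision in base b: the leading (and only) digit of the
// mantissa must be nonzero, so there are b - 1 possible values, i.e.
// log2(b - 1) bits of information.
fn bits_of_information(base: u32) -> f64 {
    ((base - 1) as f64).log2()
}

fn main() {
    assert!((bits_of_information(10) - 3.1699).abs() < 1e-3); // 9 values
    assert!((bits_of_information(9) - 3.0).abs() < 1e-12);    // 8 values
    assert_eq!(bits_of_information(2), 0.0);                  // 1 value
}
```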

By this, the conventional meaning of "digits of precision" seems like it should include the leading 1.

The interpretation that this is the size of a bitfield would be more reasonable if the constant were called MANTISSA_BITS rather than MANTISSA_DIGITS. With "digits" it's talking about the abstract properties of a number rather than its representation in memory. With "bits" it would be more ambiguous.

You could imagine a function f64::to_mantissa_exponent(self) -> Option<(i64, i16)> that includes the implicit bit in the mantissa. Option because of infinities and NaN.

BTW I think it would actually be a useful method to add (and its inverse). The exponent in the return value should be such that (a, b) represents a * 2^{b}.
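A sketch of what that method could look like (hypothetical, not an existing API; the bit-layout constants are the standard f64 ones, and this version ignores the sign of zero):

```rust
/// Hypothetical to_mantissa_exponent: returns Some((m, e)) such that
/// the input is exactly m * 2^e, with the implicit leading bit folded
/// into m for normal numbers; None for infinities and NaN.
fn to_mantissa_exponent(x: f64) -> Option<(i64, i16)> {
    if !x.is_finite() {
        return None;
    }
    let bits = x.to_bits();
    let sign: i64 = if bits >> 63 == 1 { -1 } else { 1 };
    let biased_exp = ((bits >> 52) & 0x7ff) as i16;
    let fraction = (bits & ((1u64 << 52) - 1)) as i64;
    if biased_exp == 0 {
        // Zero or subnormal: no implicit leading 1; fixed exponent.
        Some((sign * fraction, -1074))
    } else {
        // Normal: restore the hidden bit, unbias the exponent, and
        // account for the 52 fraction bits below the binary point.
        Some((sign * (fraction | (1 << 52)), biased_exp - 1023 - 52))
    }
}

fn main() {
    // 1.5 == 3 * 2^-1 == (3 << 51) * 2^-52
    assert_eq!(to_mantissa_exponent(1.5), Some((3 << 51, -52)));
    assert_eq!(to_mantissa_exponent(f64::NAN), None);
    // The inverse is just multiplying back:
    let (m, e) = to_mantissa_exponent(1.5).unwrap();
    assert_eq!(m as f64 * 2f64.powi(e as i32), 1.5);
}
```

Note that this returns the pair with the exponent already adjusted so that m * 2^e reproduces the value exactly, matching the convention suggested above.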