I totally agree that Rust should support only formats backed by an official standard or at least solid common practice; this rules out the f8 format, for which I was unable to find anything established.
f24 at least has a common and reproducible layout (8 exponent bits, 1+15 mantissa bits).
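To make that layout concrete, here is a minimal sketch (names like `f24_bits_to_f32` are my own, purely hypothetical): assuming 1 sign bit, 8 exponent bits and 15 stored mantissa bits, the exponent field matches f32's, so widening to f32 is just a shift and is exact for normals, subnormals, infinities and NaNs.

```rust
/// Widen a raw 24-bit float (1 sign + 8 exponent + 15 mantissa bits,
/// stored in the low 24 bits of a u32) to f32. Exponent width and bias
/// match f32, so shifting the fields into place loses no information.
fn f24_bits_to_f32(bits: u32) -> f32 {
    f32::from_bits((bits & 0x00FF_FFFF) << 8)
}

fn main() {
    // 1.0 in this assumed layout: sign 0, biased exponent 127, mantissa 0.
    let one = 0b0_01111111_000000000000000_u32;
    assert_eq!(f24_bits_to_f32(one), 1.0);
}
```

Narrowing in the other direction would of course need proper rounding, not just truncation.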
Could you explain this a little more? As far as I know, it's common practice and the primary reason for the 80-bit format to exist on x87 FPUs (see also IEEE floating point - Minimizing the effect of accuracy problems). I don't see why a calculation carried out at higher precision should yield worse results than one done directly at the lower precision.
As long as the error (typically measured in ULPs) is within the range specified by IEEE 754, I don't see any source of confusion. Or are you worried about reproducibility? Then, how is that problem different from a soft-float implementation for f32/f64, which is required on some platforms?
(I'm currently unaware of how Rust handles strict math and rounding.)
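To illustrate the point about wider intermediates, here is a small self-contained example (illustrative only, no x87 involved): summing three f32 values directly loses an increment to rounding, while doing the intermediate sums in f64 and rounding once at the end gives the exact answer.

```rust
fn main() {
    let (a, b, c) = (16_777_216.0_f32, 1.0_f32, 1.0_f32); // 2^24, 1, 1

    // Direct f32 evaluation: 2^24 + 1 rounds back to 2^24 (ties to even),
    // so both increments are lost.
    let direct = a + b + c; // == 16_777_216.0

    // Same computation through an f64 intermediate, rounded once at the end.
    let widened = (a as f64 + b as f64 + c as f64) as f32; // == 16_777_218.0

    println!("direct = {}, widened = {}", direct, widened);
}
```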
[quote="rkruppe, post:2, topic:2367"]However, re-implementing all the float operations in software is not just slow, it is extremely complicated once you get to transcendental functions. Basic arithmetic is not trivial either.
[/quote]
Basic arithmetic isn't that hard. Transcendental functions are mostly unary and as such could be emulated with a simple 128 KiB lookup table (for f16) as the easiest, and possibly fastest, fallback.
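A rough sketch of that lookup-table idea, using the third-party `half` crate for the f16 type (an assumption on my part, since there is no built-in f16 here; `build_table` is a hypothetical helper). One entry per possible bit pattern gives 2^16 × 2 bytes = 128 KiB per function.

```rust
use half::f16;

/// Precompute f(x) for every possible f16 bit pattern (65536 entries).
fn build_table(f: impl Fn(f32) -> f32) -> Vec<u16> {
    (0..=u16::MAX)
        .map(|bits| {
            let x = f16::from_bits(bits).to_f32();
            f16::from_f32(f(x)).to_bits()
        })
        .collect()
}

fn main() {
    let sin_table = build_table(f32::sin);

    // Lookup instead of computation: index by the raw bit pattern.
    let x = f16::from_f32(1.0);
    let y = f16::from_bits(sin_table[x.to_bits() as usize]);
    println!("sin(1.0) ~ {}", y.to_f32()); // ~0.8413 at f16 precision
}
```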
As far as I know, OpenCL currently supports only conversions from/to f16, which might change in the near future. If those were the only operations adopted by Rust, I agree that integrating f16 as a built-in type wouldn't make sense.
It's a bit off-topic, but at least for AVR microcontrollers a GCC backend exists, and I personally think Rust would be a great candidate to target those omnipresent devices. As much as I like assembly languages, assembly isn't always the right choice, even for microcontrollers.