I was just flicking through and noticed this post. I’ve always thought adding a built-in f16 would be a positive move, IMO; as mentioned above, the type is very important for graphics and now AI, and hardware support is getting more widespread. Rust could get ahead of the curve.
There also seems to be a bigger gap in feel between built-in and custom types in Rust vs C++, so adding it would be nice.
f8 doesn’t make sense for pixels. You’d lose a whole bit to a mostly-useless sign bit. OTOH it’s easy to use 8-bit lookup tables, so 8-bit values can use any custom fine-tuned gamma curve instead of whatever approximation floats happen to give.
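To illustrate the lookup-table point: since a u8 pixel only has 256 possible values, you can precompute the entire gamma curve once and then apply it per pixel with a single array index, no float math at all. A minimal sketch (the function name and the 2.2 exponent are just illustrative, not from any particular library):

```rust
/// Precompute a 256-entry table applying an arbitrary gamma curve,
/// so per-pixel application is just `lut[pixel as usize]`.
fn build_gamma_lut(gamma: f32) -> [u8; 256] {
    let mut lut = [0u8; 256];
    for (i, entry) in lut.iter_mut().enumerate() {
        // Normalize to 0.0..=1.0, apply the curve, scale back to 0..=255.
        let v = (i as f32 / 255.0).powf(gamma);
        *entry = (v * 255.0 + 0.5) as u8;
    }
    lut
}

fn main() {
    let lut = build_gamma_lut(2.2);
    // Endpoints map to themselves; mid-gray darkens under gamma 2.2.
    assert_eq!(lut[0], 0);
    assert_eq!(lut[255], 255);
    assert!(lut[128] < 128);
    println!("lut[128] = {}", lut[128]);
}
```

The curve here is a plain power function, but the table could just as well encode a hand-tuned or piecewise curve, which is exactly what a fixed float format can’t give you.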
It’s a lot easier to accept something if it has widespread native support, so we don’t have to emulate anything. It sounds like this might be the case for f16. The next step for anyone who wants f16 in the language is to produce a good write up of what platforms do and don’t support f16.
I don't know how far Rust can go with autovectorization (and with embedding GPU compute kernels in the main program source), but I'd hope its constraints mean "further than C++". Intel is heading in the direction of adding instructions that make autovectorization easier (e.g. VGATHER), and conversions to and from f16 are supported so you can deal with such data in memory. As for actual f16 computation, that's probably more recent; Nvidia does it for compute shaders, motivated by AI performance.
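For a sense of what those conversion instructions are doing, here is a software f16-bits-to-f32 conversion written out by hand (a sketch for illustration; on x86 with the F16C extension the whole thing is roughly one VCVTPH2PS instruction, which is part of why memory-format support came so much earlier than native f16 arithmetic):

```rust
/// Convert an IEEE 754 binary16 bit pattern to f32 in software.
/// binary16: 1 sign bit, 5 exponent bits (bias 15), 10 fraction bits.
fn f16_bits_to_f32(bits: u16) -> f32 {
    let sign = if bits & 0x8000 != 0 { -1.0f32 } else { 1.0 };
    let exp = (bits >> 10) & 0x1f;
    let frac = (bits & 0x03ff) as u32;
    sign * match exp {
        // Zero and subnormals: the fraction is scaled by 2^-24 directly.
        0 => frac as f32 * 2f32.powi(-24),
        // All-ones exponent: infinity (frac == 0) or NaN.
        0x1f if frac == 0 => f32::INFINITY,
        0x1f => f32::NAN,
        // Normal numbers: rebias exponent (15 -> 127) and widen the
        // fraction from 10 bits to 23 bits.
        e => f32::from_bits(((e as u32 + 112) << 23) | (frac << 13)),
    }
}

fn main() {
    assert_eq!(f16_bits_to_f32(0x3C00), 1.0); // 1.0 in binary16
    assert_eq!(f16_bits_to_f32(0xC000), -2.0); // -2.0 in binary16
    assert!(f16_bits_to_f32(0x7C00).is_infinite());
    assert!(f16_bits_to_f32(0x7C01).is_nan());
    println!("0x3555 -> {}", f16_bits_to_f32(0x3555));
}
```

Every f16 is exactly representable in f32, so the widening direction is lossless; it's the narrowing direction (f32 to f16) that needs rounding, and that's where hardware support matters even more.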