Is it finally time for f16 support?

Intel CPUs these days have the F16C feature flag, which enables native float16 support. Clang already supports float16 via LLVM; specifically, the following compiles and runs on several recent Intel CPUs:

#include <stdio.h>

void foo(_Float16 *a, _Float16 *b, _Float16 *restrict c) {
  for (int i = 0; i < 8; ++i)
    c[i] = a[i] + b[i];
}

int main(void) {
   _Float16 a[8] = {0, 1, 2, 3, 4, 5, 6, 7};

   _Float16 c[8] = {0, 0, 0, 0, 0, 0, 0, 0};

   foo(a, a, c);
   printf("%f %f %f\n", (double)c[0], (double)c[1], (double)c[2]);
   return 0;
}

Obviously, the for loop in function foo gets vectorized as follows:

        vcvtph2ps       (%rsi), %ymm0     # load b, widening to f32
        vcvtph2ps       (%rdi), %ymm1     # load a, widening to f32
        vaddps  %ymm0, %ymm1, %ymm0       # actually add, in f32
        vcvtps2ph       $4, %ymm0, (%rdx) # narrow and store into c

Would it not be wise for Rust to support float16 now that LLVM actually supports it properly? It's not like there is a need for some crazy involved new syntax or anything like that.

Intel's announcement from a year ago on the subject

PS: This subject has been raised before, and in all cases it boiled down to a lack of hardware support.


See RFC 3453 (tracking issue #116909) for what's already happening.


Oops, should have spent even more time googling. Apologies!

Obviously? I mean, I guess it's too much to ask that these processors could actually compute in f16 right now, without converting to and from f32, but I'm not sure if it's obvious that they can't.

(Edit: apparently only the Sapphire Rapids line of Xeon processors supports AVX-512 FP16, which adds a set of native f16 arithmetic instructions.)