That isn't the issue. The issue is that it cannot represent a whole bunch of numbers between the minimum and the maximum.
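To make that concrete, a small Python sketch (CPython shown, but any IEEE 754 double behaves the same way):

    # Many ordinary decimal fractions have no exact 64-bit binary representation.
    print(0.1 + 0.2)                 # 0.30000000000000004, not 0.3
    print(0.1 + 0.2 == 0.3)          # False

    # Above 2**53, not even all integers are representable any more.
    print(2.0**53 + 1 == 2.0**53)    # True: the +1 is silently lost

    # What "0.1" actually stores:
    from decimal import Decimal
    print(Decimal(0.1))              # 0.1000000000000000055511151231257827021181583404541015625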
Either we find a way to replace the likes of IEEE 754 with something fundamentally better (i.e. something that suffers neither from these representational issues nor from the extremely weird and, at least from first principles, undesirable arithmetic rules it uses; so essentially full-blown hardware-accelerated decimal arithmetic, if that's possible), or it'll be a perpetual game of "moving the goalposts" until economics pretty much permanently dictates it isn't worth it anymore (I don't believe we're at that point yet).
It's 64 bits = 8 bytes. That's nothing in 2021, except in specialized situations (e.g. deep learning, where you'd rather have lots of something like an f16 instead, for efficiency reasons).
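To put a rough number on that trade-off (a NumPy sketch; the array contents are arbitrary, the point is the 4x size difference):

    import numpy as np

    # One million values stored at different precisions.
    as_f64 = np.ones(1_000_000, dtype=np.float64)
    as_f16 = np.ones(1_000_000, dtype=np.float16)

    print(as_f64.nbytes)   # 8000000 bytes: 8 bytes per value
    print(as_f16.nbytes)   # 2000000 bytes: 2 bytes per value, 4x smaller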
Which is what I'm arguing for above (except hardware-accelerated), but that assumes it's possible in the first place (which reminds me: if it is possible, why hasn't it been done yet?). Note, though, that likely none of those libraries are fully hardware accelerated, so they're not competitive in terms of performance, and that alone might make them unusable for certain use cases.
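To be clear about what the software version looks like, Python's decimal module already implements the arithmetic model I mean (this illustrates the semantics, not the performance):

    from decimal import Decimal, getcontext

    # Exact decimal arithmetic, done in software.
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3, exactly
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

    # Precision is a configurable context, not a fixed hardware format.
    getcontext().prec = 50
    print(Decimal(1) / Decimal(7))   # 0.14285714285714285714285714285714285714285714285714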
It also fundamentally broke the decimal calculation model by introducing something in its place (floats) that just can't do the job properly, requiring all kinds of hacks from the programmer to figure out how to get a decent approximation. For that reason alone, if it's at all theoretically possible, it should be replaced.
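By "hacks" I mean things like the following (Python again, but the same workarounds exist in every language):

    import math

    # Hack 1: never compare floats with ==, compare "close enough" with a
    # tolerance you have to pick yourself.
    a = 0.1 + 0.2
    print(a == 0.3)                    # False
    print(math.isclose(a, 0.3))        # True

    # Hack 2: keep money in integer cents and convert only at the edges,
    # because summing float "dollars" drifts.
    prices = [0.10] * 10
    print(sum(prices))                 # 0.9999999999999999, not 1.0
    cents = [10] * 10
    print(sum(cents) / 100)            # 1.0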
This argument has been made before here, and it's not exactly convincing. You see, it's a chicken-and-egg problem. If demand picks up, so will supply. If chip builders started supplying it right now, there'd be demand for it in no time. It just requires somebody to take the first step and get some recognition for it.
That would be an intermediate stage, not the end of the road. Doing all that in software will likely tank performance, reinforcing the need for hardware acceleration.
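To gauge how big the software penalty already is, here's a quick timeit sketch (absolute numbers vary by machine and Python build; the point is the gap, not the figures):

    import timeit
    from decimal import Decimal

    # The same summation with hardware floats vs. software decimals.
    float_vals = [0.1] * 10_000
    dec_vals = [Decimal("0.1")] * 10_000

    t_float = timeit.timeit(lambda: sum(float_vals), number=1_000)
    t_dec = timeit.timeit(lambda: sum(dec_vals, Decimal(0)), number=1_000)

    print(f"float:   {t_float:.3f}s")
    print(f"Decimal: {t_dec:.3f}s  (~{t_dec / t_float:.0f}x slower here)")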