Problems with floats

I think there are currently two problems with floats:

  • they are required in libcore, so targets that either can't support them (e.g. because the backend does not support them at all and errors on use) or don't want to use them (e.g. a kernel module) end up with a pretty poor experience

  • some of the methods of these types are implemented in libcore (e.g. f32::add) while others are implemented in libstd (e.g. f32::sin). Not only is this 100% magic, but it also means that #![no_std] crates cannot use math functions like f32::sin even when an implementation is actually linked into the binary, because these methods are not available (we have a pure-Rust libm in rust-lang/libm).

Does anyone else think that this is a problem worth solving? And if so, has anyone already invested time thinking about solutions?

Since we can't break any code, we could start by making f32/f64 optional in libcore, e.g. behind a #[cfg(target_has_float = "f32")] cfg, which would be enabled for all targets that actually support floats (most targets). This would allow a kernel module to, e.g., create a slightly different target specification that disables floats, while for embedded targets that can't support them, these types just wouldn't be there at all.
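To make the idea concrete, here is a small sketch of what such gating could look like. Note that `target_has_float` is hypothetical and does not exist today; a cfg key the compiler doesn't set simply evaluates to false, so on a stock toolchain the fallback branch below is what compiles.

```rust
// Hypothetical cfg: `target_has_float` is a sketch of the proposal,
// not an existing compiler flag. On current toolchains it is unset,
// so only the `not(...)` branch exists.
#[cfg(target_has_float = "f32")]
pub fn average(a: f32, b: f32) -> f32 {
    (a + b) / 2.0
}

#[cfg(not(target_has_float = "f32"))]
pub fn average(a: i32, b: i32) -> i32 {
    // Float-free fallback for targets without FP support.
    (a + b) / 2
}

fn main() {
    println!("{}", average(1, 3));
}
```

The same mechanism is how libcore items could disappear entirely on float-less targets, rather than erroring in the backend.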

Then I'd suggest moving the math functions to a library on top of libcore, e.g. libfloat, which would live in parallel to liballoc in the hierarchy. This crate would be in charge of linking libm and implementing the float methods (still 100% magically, because the types are defined in libcore...). This would allow #![no_std] users to pull in libfloat just like they pull in liballoc today.

Ideally, libfloat would, at some point, be configurable, allowing users not only to link the system's math library, but to override it with another math library if they want to (e.g. if you wanted to use sleef as your math library, you could rebuild your target with some libfloat cargo feature enabled). I've wanted to use a different math library a couple of times already, and depending on what your system's libm looks like, that might range from a simple LD_PRELOAD to having to replace the whole libc.

If we ever add portable SIMD support to libcore, we'd need to add support for libmvec or similar as well, to support e.g. f32x4::sin and friends. I don't know what the best way to do that would be. I think that, ideally, libcore just would not have any floats at all, and if you wanted to use f32 you'd need to use libfloat, but I guess it's too late for that. For packed vectors, we could just provide the float vectors in a libfloatvec crate, so that they are opt-in from the start.


I do think this is a problem worth solving. I care about targets where you can't use floating-point unless you initialize it yourself (such as in firmware).

Short-term, splitting out "libfloat" might make sense. Long-term, I hope we can handle this and all other divisions via feature flags in a unified std library.

> Long-term, I hope we can handle this and all other divisions via feature flags in a unified std library.

I don't see any issues with this. A unified standard library is going to have to deal with a lot of optional types, and with conditionally linking other libraries depending on target or features. Floats would be just another thing to handle there correctly.

I feel that we should handle floats the same way we handled i128: they should always be available, but fall back to soft-float for targets that don't support floating-point operations. There should also be some mechanism to force a target to use soft-float for environments such as kernels (currently this can be done with a custom target json).
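For reference, the custom-target-json route mentioned above looks roughly like this today. This is an abbreviated, illustrative fragment (a real target specification needs more fields, such as `data-layout` and `target-pointer-width`); the `features` string is what forces soft-float and bans SSE register use:

```json
{
  "llvm-target": "x86_64-unknown-none",
  "arch": "x86_64",
  "os": "none",
  "features": "-mmx,-sse,+soft-float"
}
```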

Regarding float methods in libstd, I think these should all be moved to libcore (tracking issue) and libm will just become an implicit dependency of libcore the same way compiler-builtins is.


There are targets that sometimes support floating-point operations, and sometimes don't, depending on whether you've initialized them (and whether you can use them without saving/restoring their state). For instance, you can use floating point in the Linux kernel, if and only if you run inside a kernel_fpu_begin / kernel_fpu_end block.
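One way to scope such "sometimes available" floating-point is an RAII guard. The sketch below stubs out `kernel_fpu_begin`/`kernel_fpu_end` as no-ops so it runs anywhere; in a real kernel module these would be the actual C functions, and the interesting part is that float use is tied to the guard's lifetime:

```rust
// Stubs standing in for the Linux kernel's real C functions.
fn kernel_fpu_begin() { /* save FPU/SIMD state, disable preemption */ }
fn kernel_fpu_end() { /* restore FPU/SIMD state */ }

/// Guard whose lifetime brackets the region where FP use is legal.
struct FpuGuard;

impl FpuGuard {
    fn new() -> Self {
        kernel_fpu_begin();
        FpuGuard
    }
}

impl Drop for FpuGuard {
    fn drop(&mut self) {
        kernel_fpu_end();
    }
}

fn main() {
    let _guard = FpuGuard::new();
    // Floating-point is only legal while `_guard` is alive.
    let x = 2.0_f64 * 3.5;
    println!("{}", x);
    // `kernel_fpu_end` runs automatically when `_guard` is dropped.
}
```

What a guard like this cannot do, of course, is stop the compiler from emitting FP instructions outside the guarded region, which is exactly the auto-vectorization problem discussed below.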

I think it makes sense to have a way to make sure you can't use floating-point at all on a given target, and then optionally introduce it into certain areas without risking it showing up in other areas.

And we do need to make sure we don't end up with gratuitous usage of floating-point in the standard library, except when implementing floating-point operations.

There is already a clippy lint which warns against any use of floating-point code, which is specifically designed for kernels & embedded systems.

However, supporting targets that only sometimes support floating-point is harder than it seems. In the case of the Linux kernel, the only reason this even works is that the kernel is compiled with -O2 instead of -O3, which inhibits compiler auto-vectorization. If this were not done, the compiler could generate SSE instructions for plain integer code, which would corrupt the SSE register state.


I was pretty unhappy with libm in my embedded projects, especially for trig functions: they're slow and add considerably to the binary size. All of my needs had the diametrically opposite concerns: I was happy to sacrifice precision for better performance and smaller code size. In writing an accelerometer library (in addition to an embedded DSP) I ended up writing an entire minimal-code-size, approximation-based vector geometry and statistics library, which I eventually extracted out into its own crate.
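As an illustration of that trade-off (this is not the crate mentioned above, just a minimal sketch), Bhaskara I's classical sine approximation needs no libm, no tables, and only a handful of multiplications, at the cost of roughly 2e-3 worst-case error:

```rust
/// Bhaskara I's sine approximation on [-PI, PI]:
///   sin(x) ~= 16*x*(PI - x) / (5*PI^2 - 4*x*(PI - x))   for x in [0, PI]
/// Max absolute error is about 1.8e-3 -- plenty for sensor work,
/// and far smaller and faster than a full libm sin.
fn fast_sin(x: f32) -> f32 {
    const PI: f32 = core::f32::consts::PI;
    // Exploit odd symmetry so the formula only sees [0, PI].
    let (x, sign) = if x < 0.0 { (-x, -1.0) } else { (x, 1.0) };
    let p = x * (PI - x);
    sign * 16.0 * p / (5.0 * PI * PI - 4.0 * p)
}

fn main() {
    println!("{}", fast_sin(0.5)); // close to sin(0.5) = 0.4794...
}
```

A configurable libfloat, as proposed earlier in the thread, is exactly what would let an implementation in this spirit replace the default libm wholesale.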

At OxidizeConf I heard some similar complaints about libm being too slow for building embedded gaming, except the solution I heard was people using libm to dynamically construct approximation tables at runtime for performance! While I can see some embedded use cases benefitting from the precision libm offers, I think many other times it's the wrong fit and an approximation-based solution is more appropriate.


I'd expect such targets to disable the floating-point target features (e.g. via +soft-float, -fp, ...) to make the code-generation backend error if a floating-point register would be used. The problem is that when you do that and some code using floats still ends up in the backend, you get an LLVM assertion at best, or a segfault at worst.

Those clippy lints warn you when your own code uses floats, but you also want such a warning for all your dependencies, down to libcore and its dependencies. And if you get a warning in libcore, then what? You'd need to fork libcore to fix the issue and use your fork instead. It would be simpler if you could just set a flag in your target specification to disable libcore floats and have that guaranteed to work.


> I was pretty unhappy with libm in my embedded projects, especially for trig functions: they're slow and add considerably to the binary size. All of my needs had the diametrical opposite concerns: I was happy to sacrifice precision for better performance and smaller code size.

I think we do have the same goals. My goal isn't to force rust-lang/libm on people. My goal is to not require floats as part of libcore, to let people opt into them using a libfloat library, and to make that libfloat library customizable such that you can just write RUSTFLAGS="-C libm=path/to/lower_precision/libm" and choose a different libm that works for your needs. There is no one-size-fits-all here: some projects require high precision, others require small code size, others can use hardware FP, others want a libm for soft-floats, others want to use their hardware vendor's proprietary libm instead of their platform's (e.g. Intel SVML), and others don't want a libm at all. We allow this kind of customization for, e.g., the global memory allocator. We should also allow it for libm.
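The global-allocator hook mentioned above is a real, stable precedent for this kind of swap: one attribute replaces the allocator for the whole program. A hypothetical `-C libm=...` flag could follow the same shape. This example uses the existing `#[global_allocator]` API:

```rust
use std::alloc::System;

// One attribute swaps the global allocator for the entire program;
// the proposal is that selecting a libm implementation could work
// just as declaratively.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // This Vec allocates through `GLOBAL`.
    let v = vec![1, 2, 3];
    println!("{}", v.iter().sum::<i32>());
}
```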


I only mentioned this because Linux does want the compiler to generate SSE code, but only between kernel_fpu_begin and kernel_fpu_end. In this case you have to enable the SSE features, disable auto-vectorization, and use SSE intrinsics only in the allowed places.

However, this is really a special edge case. 99% of the time you will want to either completely allow or completely ban floating-point code in your project. If such cases fall back to soft-float, then things are even simpler than your suggestion, since everything is guaranteed to work whether you lint against floats or not.


That's certainly one way to do it. Wouldn't it also be possible to disable the feature globally, and use #[target_feature] to enable SSE, hard floats, etc. on particular functions only? If libcore is compiled without floats then you still can't use them inside those functions, even though the compiler can generate SSE instructions there, so the current proposal would probably be bad for that case.

That would be acceptable, as long as it's also possible to make sure the floating-point functions exist but can't be called from most parts of the code.

Effectively, floating-point could be treated as a "possibly enabled" processor feature, like AVX. This could then use the "target feature" mechanism, and the floating-point functions in core could get annotated with appropriate required target features.
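The existing target-feature machinery this refers to already works this way for AVX: a function is compiled with the feature enabled, and the caller checks availability at runtime before dispatching to it. The sketch below uses that real, stable mechanism (with AVX2 standing in for a hypothetical "float" feature):

```rust
// Compiled with AVX2 enabled; callers must verify the feature first,
// hence the `unsafe`.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(v: &[f32]) -> f32 {
    v.iter().sum()
}

#[cfg(target_arch = "x86_64")]
fn sum(v: &[f32]) -> f32 {
    if is_x86_feature_detected!("avx2") {
        // Safe: we just checked the feature is available.
        unsafe { sum_avx2(v) }
    } else {
        v.iter().sum()
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn sum(v: &[f32]) -> f32 {
    v.iter().sum()
}

fn main() {
    println!("{}", sum(&[1.0, 2.0, 3.0]));
}
```

Annotating core's float methods with a required "float" target feature would give exactly this shape: callable only from code that has explicitly opted in.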


cc @rkruppe