Add something like `num_traits` to STD, or move it into std entirely

For generic number implementations in std::num, as @CodesInChaos pointed out in this comment for the case of complex numbers, num_traits would make implementations significantly more flexible and (in my opinion) much easier. For example, a (hypothetical) complex sine function could be written just like this:

impl<T: Float> Complex<T> {
    fn sin(self) -> Complex<T> {
        Complex::new(self.re.sin() * self.im.cosh(), self.re.cos() * self.im.sinh())
    }
}

instead of expanding over multiple implementations. Also, if this were made public, it would allow people to write their own implementation of Float and use std types with their definition of Float (for their own purposes).

A similar type already exists (core::simd::num::SimdFloat), which could be considered prior art.

The thing you need to address if you want it in core is how to handle upgrades. In a separate crate you can always rev the major version to add new things to the trait; core never can.


I'd be more than happy to have a sealed trait just to make generic implementations easier.


I remembered that the maintainers of num-traits have published num-primitive, a sealed version of num-traits.

For that, I wonder how far you could get with something like https://github.com/rust-lang/rfcs/pull/3686#discussion_r1755464148 instead.

Being able to have both

impl<N> TheTrait for u<N> { … }
impl<N> TheTrait for i<N> { … }

is way better than doing that via a trait, because it's no longer a top-level blanket.

As soon as you have

impl<T: std::sealed::Whatever> TheTrait for T { … }

now it's more annoying to implement it for other things too.

That I completely agree with, for integers and the like. But there isn't a plan for generic floating-point numbers that I know of, and that is my main concern. I guess What numeric traits make sense for std? is the place for further discussion.

Now that we have 4 floating-point types in the works, we could make f<N> work if we wanted -- probably with a trait bound of some sort, since I doubt we'd enable things like f<7> or f<123> the way that would be fine for integers.

I think using panicking default implementations for added methods is fine.

I think we'd need something more complex than that, since there are multiple types that can reasonably have size 16 (IEEE 754 half f16 and brain float bf16) and way more for size 8. There's also x87's f80 and PPC's 128-bit double-double type (not to be confused with IEEE 754 f128). That's too much complexity for a generic float type along the lines of the generic integer types u<N> and i<N>, so I think a generic f<N> is a bad idea.


I generally prefer to use standard library traits rather than num_traits; in particular, for "numbers that support the values 0 and 1" I have taken to using From<bool>, which is standard and works for most types you'd want it to work for.

Unfortunately, this doesn't work so well with float types (though you might not want to be generic over those anyway: float calculations generally need to be rounded, the details of the rounding frequently matter, and you can't take them into account in a generic context).

The problem with adding extra traits is that existing number types in crates won't support them, so it tends to fragment the ecosystem somewhat.