So I am new to Rust, and that's going to show pretty quickly here. But I've run into what I feel to be an important area for improvement in the language for scientific/numerical computing.
Consider a developer who wants to support both low- and high-precision floats in a library. They will use num_traits and constrain by `Float`. This is great and works really well:
```rust
impl<S: Float> CustomCollection<S> {
    fn something(&self) -> Thing<S> { ... }
}
```
Great.
Now, let's say the dev has to do something obvious, like divide two items by the constant 2.0. Unfortunately, a bare float literal can't just be used at a generic type `S`; the conversion has to be spelled out somehow, even though the value is a constant. So what might that look like?
```rust
impl CustomCollection<f64> {
    fn avgsomething(&self) -> Thing<f64> {
        (&self.a + &self.b) / 2.0_f64
    }
}

impl CustomCollection<f32> {
    fn avgsomething(&self) -> Thing<f32> {
        (&self.a + &self.b) / 2.0_f32
    }
}
```
Let's assume half of the functions for this type can be written generically while the other half require this per-type treatment. As the library spreads, many users now want, I dunno, f16 support for example. So the dev is copy-pasting code until they decide to use codegen/macros... Kind of inconvenient, and maybe a symptom that there could be some sugar to help out?
What if instead we had a syntax that looked like this:
```rust
impl<S: Float> CustomCollection<S> {
    fn something(&self) -> Thing<S> { ... }

    fn avgsomething(&self) -> Thing<S is f32> {
        (&self.a + &self.b) / 2.0_f32
    }

    fn avgsomething(&self) -> Thing<S is f64> {
        (&self.a + &self.b) / 2.0_f64
    }
}
```
It would make the workflow of handling generics that interact with constants less laborious, and all it really does is dispatch a new impl per concrete type.
Surely there are a dozen alternatives here, but these kinds of hiccups make Rust code for numerical applications (and likely some others) a little more challenging to maintain than it maybe has to be?