I agree that the numerics side of Rust leaves much to be desired. My personal pet peeve is the design of the arithmetic operator traits (Add, Mul, MulAssign, etc.), which works horribly with generic code, since you need to implement basically all combinations of val OP val, &val OP val, val OP &val and &val OP &val to get reasonable expression-writing ergonomics (and similarly for the OpAssign traits). Implementing all of that for concrete numeric types is painful but at least automatable with macros. What you can't do is encapsulate those requirements in a trait bound: you must repeat a significant number of them by hand, because there is currently no way to put a constraint on Self of the form
for<'a> &'a Self: Add<Self, Output=Self>
You can technically write a trait with a where-bound like that, but it doesn't work as a modularization tool, because the bound has to be explicitly repeated on every type and function that uses a generic with this restriction. Also, the type checker is prone to barfing out some really bad errors when bounds of the form for<'a> &'a T: Foo are present, more because of the type checker's limitations than anything inherently wrong with those bounds.
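Here's a minimal sketch of the repetition problem (the trait and function names are made up for illustration):

use std::ops::Add;

// The where-clause can be attached to a helper trait...
trait RefNum: Sized
where
    for<'a> &'a Self: Add<Self, Output = Self>,
{
}

// ...but it is not implied at use sites: every generic function still has to
// restate the higher-ranked bound right next to `T: RefNum`.
fn sum_with<T>(items: &[T], init: T) -> T
where
    T: RefNum,
    for<'a> &'a T: Add<T, Output = T>,
{
    items.iter().fold(init, |acc, x| x + acc)
}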
This is also why I think that trying to introduce more kitchen-sink traits to deal with the ergonomics issues of numerics is a dead end. There is just no way to really express the semantics of numeric code in current Rust. At best you can restrict numeric traits to Sized + Copy + Freeze types with some #[fundamental] tactically sprinkled in, but it still won't work the way Eigen's templates work. Those assumptions immediately exclude a large number of important potential applications, like heap-allocated BigInts, and you still can't write a simple arithmetic expression without hitting the sharp corners of by-value vs by-ref operations. On top of that you get more complex code, likely worse compile times, and likely worse optimization.
That said, I'm a bit confused by your conclusion:
I get that using Eigen is simpler, but C? There is literally nothing C can do that Rust can't do better. C also doesn't have generics or overloading, and it can barely handle static array sizes, since arrays decay to pointers. You can always write Rust the way you write C, macros and all, and you'll still be better off.
In fact, for scientific prototyping I wonder how many of your problems are self-inflicted. Why do you even need to be generic over integers and floats? It's a real issue for foundational libraries like nalgebra and ndarray, but end-user applications should just choose a couple of specific numeric types and stick with them. Done, all of your generic problems gone in one fell swoop.
A reasonable counterargument is "but what if I choose f64 but later decide that f32 is better for speed and is sufficiently precise", or vice versa, "what if I go with f32 but later realise I need the precision of f64". You still don't need to be generic in all your code, though. You can solve that problem by introducing a single typedef
type Float = f64;
and using exclusively that alias in all your signatures and bounds. If you need to swap your floating-point type, just change that typedef and you're done. You can get confusing errors if some duck-typed API turns out to differ slightly between f32 and f64, but that's still as good as what you'd get with C++ templates, and I doubt there would be many such differences.
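For illustration, a hypothetical snippet written against the alias (the function is made up); note that nothing except the typedef itself mentions f64:

type Float = f64;

// Only the alias appears in the signature.
fn lerp(a: Float, b: Float, t: Float) -> Float {
    a + (b - a) * t
}

// Changing the typedef to `type Float = f32;` retargets everything, as long as
// the code sticks to operations both types support.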
Similarly, for your integers just use
type SInt = i32;
type UInt = u32;
It's not like you are likely to need u8 or u16 out of the blue anyway, and if you need to swap between i32 and i64, this works well enough.
Again, that's not a modular approach: you can't change those typedefs in your dependencies (unless you vendor them), but I believe it's good enough to solve the practical end-user problems while the proper language features are slowly worked out.
Regarding From-conversions, while nothing technically says they need to be lossless (and I'm sure they aren't in plenty of important cases), it's not an unreasonable assumption for the end user to make. If we add a lossy f32 -> u32 or u64 -> f32 conversion, I'm sure there will be confused users hitting bugs because they silently lost precision. That's not good.
My position is that From-conversions are for situations where there is a single obvious right way to do it. If there are multiple ways to do a conversion, or if it has some caveats (because it may be fallible, or lossy, or just have a nasty edge case), or if the conversion is unexpectedly expensive, or there is even a slight possibility that the conversion would work differently in the future (require extra parameters, or extra runtime preconditions, or return a different type), then the operation shouldn't be a From impl. Add it as a method, or as a separate trait. Thank me later.
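As a sketch of the "separate trait" option (ApproxFrom is a name I'm making up here, not an existing API), the lossiness stays visible at the call site instead of hiding behind From::from:

trait ApproxFrom<T> {
    fn approx_from(value: T) -> Self;
}

impl ApproxFrom<u64> for f32 {
    fn approx_from(value: u64) -> Self {
        value as f32 // may round: f32 has only 24 bits of mantissa
    }
}

fn main() {
    // The caveat is spelled out right where the conversion happens.
    let x = f32::approx_from(10_000_000_017u64);
    println!("{x}");
}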
The core use of traits is generic code, and generic code, by definition, just can't account for subtle type-specific constraints. The end user will pass in a T: From<U> without knowing the subtle edge cases of that impl, because there just isn't a proper place to document those subtleties, nor a reason to expect them to exist in the first place. The generic function will hit a nasty bug or a performance degradation, because it has no way to guard against those edge cases at all. And even if you know that your From impl has some edge cases, the issue can be triggered deep in the generic call stack, making it nearly impossible for the end user to anticipate or debug such issues.
Thus I think impl From<usize> for f32 is a bad idea, even though nothing technically prevents that impl. Lossy conversion traits may be fine, but that won't do much to solve the issue of ergonomics, and the "lossy" conversion may be lossy for different reasons, which may be better modeled with different traits anyway (e.g. for integers, truncation i32 -> i8 and sign-cast i32 -> u32 are both potentially problematic, but for different reasons).
Note that if you're fine with the semantics of as-casts and just want some genericity, you can easily define a CastAs<T> trait wrapping that operation. Implementing it requires some boilerplate, but num-traits has already done that work for you with its NumCast trait. It seems to have the semantics you want, allowing lossy int-to-float casts but disallowing narrowing or sign-discarding casts between integers. I recall seeing the simpler as-cast trait in the wild as well (maybe in some older version of num-traits).
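A rough sketch of what such a wrapper could look like (CastAs and the macro are illustrative, not an existing API; in real code I'd reach for num_traits::NumCast instead):

trait CastAs<T> {
    fn cast_as(self) -> T;
}

// The boilerplate impls, generated with a small macro.
macro_rules! impl_cast_as {
    ($($from:ty => $to:ty),* $(,)?) => {
        $(impl CastAs<$to> for $from {
            fn cast_as(self) -> $to { self as $to }
        })*
    };
}

impl_cast_as!(usize => f32, usize => f64, u64 => f32, i64 => f64);

// Generic code can now request the cast through a bound.
fn as_fraction<T: CastAs<f64>>(part: T, whole: T) -> f64 {
    part.cast_as() / whole.cast_as()
}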
Regarding generic literals, personally I don't get the appeal. It's not hard to use explicit T::from conversions, and you'd need to put extra trait bounds on the generic function anyway, either T: From<Float> or something like T: ParseFloat. I get that having to write and read T::from(1.0) is worse than just 1.0, but this can mostly be solved with a local definition:
use std::ops::{Add, Mul};

fn foo<T: From<Float> + Add<Output = T> + Mul<Output = T>>(bar: T) -> T {
    let f = T::from; // local shorthand for the conversion
    let x = f(1.0) * bar + f(0.35);
    // etc.
    x
}
That's just marginally longer than a plain literal, so personally I don't find it a burden to use. You can cut out even more of those conversions if you put a bound like T: Mul<Float, Output=T> on your function, which lets you write bar * 1.0 instead of bar * f(1.0). A symmetric bound Float: Mul<T, Output=T> lets you write 1.0 * bar, although that one can be problematic to use in more generic code due to trait coherence issues.
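A sketch of that mixed-type bound (again with a made-up function):

use std::ops::Mul;

type Float = f64;

// `bar * 2.0` type-checks because the literal is just a Float; no conversion
// into T is needed on the right-hand side.
fn double<T: Mul<Float, Output = T>>(bar: T) -> T {
    bar * 2.0
}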
And again, all of those issues are gone if you use a specific type instead of generic functions.
Overall, I believe that numeric code isn't as bad to write as you imply, more so if you don't try to go fully generic. On the other hand, one can't expect end users to know all of these niche tricks, and in generic library-level code the complexity becomes hard to work around. I think this really needs language-level support; I just don't believe it can be patched over with a couple of new traits, since the fundamental ergonomics and performance issues would remain.