I'd like to propose adding a standard complex number type, e.g. Complex<f64> and Complex<f32>, to the std library.
I have recently been developing a computational physics program that needs both linear algebra libraries and complex numbers, and across these libraries the handling of complex numbers becomes very annoying. For example, I was trying to use nalgebra (a linear algebra library, though its sparse matrix support is not mature yet) together with sprs (a sparse matrix library). Since sprs only implements its methods for num::Complex, while nalgebra automatically converts results into nalgebra::Complex after arithmetic, every time I want the two libraries to cooperate I have to convert from nalgebra::Complex to num::Complex and back, even though the two types are fundamentally the same and differ only in namespace.
In C we have complex.h and in C++ we have <complex>, but in Rust we have to pull complex numbers from external libraries, which seems very strange to me as a computational physics scientist. I believe that with a standard complex number type, Rust would benefit a lot in scientific computing.
Playing the Devil's advocate here: why complex numbers specifically? Aren't there also many other numerical entities you could argue the same for: rationals, fixed-point numbers, quaternions, vectors, matrices, etc.? Why do complex numbers deserve special treatment over those?
If you plan on submitting an RFC this would be a good question to answer in it.
Answering for "why not the quaternions, vectors, and matrices":
because there is no universal layout for those. Users will want their matrices row- or column-major, and the elements of their vectors and quaternions in specific orders.
Why yes for complex numbers? Because it would allow for syntactic sugar. Formulas can get extremely verbose when you are forced to add a bunch of extra sigils for constructors, while language support could extend the current numerical literal suffixes to enable syntax like let complex = 6 * 7i.
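To make the verbosity point concrete, here is a small comparison using num-complex as a stand-in; the suffix form is hypothetical syntax and only appears in a comment:

use num_complex::Complex;

fn main() {
    // Today: every imaginary coefficient needs an explicit constructor call.
    let z = Complex::new(0.0, 7.0) * 6.0 + Complex::new(1.5, -2.0);
    // Hypothetical literal-suffix form: let z = 7i * 6 + (1.5 - 2i);
    println!("{z}");
}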
It seems that nalgebra already depends on num-complex and re-exports num_complex::Complex, so the types should in fact be literally the same. Maybe you just need to ensure that nalgebra and sprs are using the same version of num-complex in your dependency tree.
(I'd imagine this is the typical layout people roll on their own as well. Pun intended.)
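A quick way to confirm this in your own tree (assuming nalgebra does re-export num_complex::Complex and the dependency tree resolves to a single num-complex version): the following compiles with no conversion at all, because the two paths name the same type.

fn same_type(z: nalgebra::Complex<f64>) -> num_complex::Complex<f64> {
    z // no conversion needed: identical type
}

fn main() {
    let z = same_type(nalgebra::Complex::new(1.0, 2.0));
    assert_eq!(z, num_complex::Complex::new(1.0, 2.0));
}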
I think I've seen the argument for a small standard library: that it's better for the ecosystem to provide the alternatives for cases with nuance and variation, and I agree with that in general. For complex numbers, though, it seems to me that the variation provided by the ecosystem is mostly in the functionality built around a universally used type.
And I'd like to share some of my ideas: rationals and fixed-point numbers are just non-integer numbers, and floating point can represent them well. For the other types, as HjVT says, there can be many implementation variations.
Back to complex numbers: this is a very primitive type. It appears in the Schrödinger equation, the fundamental equation that all of quantum mechanics is based on, and it also arises naturally when doing frequency-domain analysis. In numerical linear algebra, all industry-standard libraries, from BLAS and LAPACK to Intel MKL, support four types: single precision, double precision, complex single precision and complex double precision.
And if we look at current crates, we can see that every implementation of a complex number is the same: a real part and an imaginary part. But with such a primitive type spread across different crates, the interactions between those crates become problematic, due either to namespace differences or to version differences in the re-exported num-complex.
Maybe complex numbers are useless in popular areas like web, graphics or systems programming. But for people like me doing research in physics and math, complex numbers are as important as f32 and f64, and we have suffered enough from Fortran, C and C++. We just want a fast, modern language so that we can put more attention on our scientific research.
num-complex puts #[repr(C)] on it, so it should have an FFI-compatible layout for T = f32 and f64 at least. However, this is not necessarily ABI-compatible when it comes to calling conventions, if the target does anything special for complex numbers, different from a normal struct.
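For the layout half of that claim, a couple of compile-time assertions are enough (a sketch, assuming num_complex::Complex; this says nothing about by-value calling conventions):

use num_complex::Complex;

// #[repr(C)] Complex<f64> is { re: f64, im: f64 }, i.e. two consecutive f64s,
// which matches the storage layout C uses for double _Complex on common targets.
const _: () = assert!(core::mem::size_of::<Complex<f64>>() == 2 * core::mem::size_of::<f64>());
const _: () = assert!(core::mem::align_of::<Complex<f64>>() == core::mem::align_of::<f64>());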
Not that I'm aware of, but I'd be very skeptical of something like that. There's no telling what extensions we will want for the language in the future. I personally find it plausible that i (along with u) will be used for integers without explicit precision. That's what I'm using for deranged macros.
I think, before we'd consider adding a custom suffix, we'd first want to add complex numbers using a standard constructor (e.g. Complex(5, 7)), and see how widely used they are. Considering how easy it would be to write use core::complex::I; and 5 + 7*I, or for that matter use Complex as C; and C(5, 7), we'd need to see a lot of widespread usage to need to shorten that to 5+7i.
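For comparison, the constant-I style can already be approximated with num-complex today; something similar is what a hypothetical core::complex::I would give you (the constant here is local, not an existing API):

use num_complex::Complex;

// A local stand-in for a hypothetical std-provided imaginary unit.
const I: Complex<f64> = Complex { re: 0.0, im: 1.0 };

fn main() {
    // Mixed real/complex arithmetic comes from num-complex's operator impls.
    let z = 5.0 + 7.0 * I;
    assert_eq!(z, Complex::new(5.0, 7.0));
}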
Adding a Complex<T> type to the standard library, for use as a vocabulary type, seems entirely reasonable. The main challenge is that such an implementation would need to provide all the trait implementations such as Add, Sub, Mul, Div, etc., since no other crate can write those trait implementations. But if someone is willing to do that, then I don't see any reason why we wouldn't accept an ACP for this.
The initial ACP and PR should just define the type and the trait impls. Subsequent ACPs and PRs can provide other built-in functions. Let's keep this as simple as possible for the first pass.
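As a rough illustration of what such a first pass might contain, here is a minimal sketch of a Complex<T> with a couple of the operator impls (names and trait bounds are illustrative only, not a concrete API proposal):

use core::ops::{Add, Mul, Sub};

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Complex<T> {
    pub re: T,
    pub im: T,
}

impl<T: Add<Output = T>> Add for Complex<T> {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Complex { re: self.re + rhs.re, im: self.im + rhs.im }
    }
}

impl<T: Copy + Add<Output = T> + Sub<Output = T> + Mul<Output = T>> Mul for Complex<T> {
    type Output = Self;
    fn mul(self, rhs: Self) -> Self {
        // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        Complex {
            re: self.re * rhs.re - self.im * rhs.im,
            im: self.re * rhs.im + self.im * rhs.re,
        }
    }
}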
Instead of i, j is also a reasonable and often used choice for the suffix. It's used more in engineering contexts, while i is usual in other mathematical writing.
I checked what the situation is with a few other programming languages:
The idea of literals with custom suffixes has come up many times, e.g. for non-zero literals (20nzusize), duration literals (4d + 2h + 7s), and now complex number literals: 7i. I wonder if there should be a way for users to define their own custom literals with custom suffixes, which could allow literals like 100usize, 10nzusize and 4.0e7f32 to be implemented entirely in user code.
In fact, this is already possible with a custom macro! A few weeks ago I made a crate, culit (for "custom literal"), exactly for this purpose. It provides an attribute macro, #[culit], that folds the AST and transforms any literal with a custom suffix, like 10nzusize, into a macro call: crate::custom_literal::nzusize!(10). Basic usage looks like this:
use culit::culit;
use std::num::NonZeroUsize;

#[culit]
fn main() {
    assert_eq!(100nzusize, NonZeroUsize::new(100).unwrap());
    // COMPILE ERROR!
    // let illegal = 0nzusize;
}

mod custom_literal {
    pub mod integer {
        macro_rules! nzusize {
            // handle `0` specially
            (0) => {
                compile_error!("`0` is not a valid `NonZeroUsize`")
            };
            ($value:literal) => {
                const { NonZeroUsize::new($value).unwrap() }
            };
        }
        pub(crate) use nzusize;
    }
}
I think it would be interesting to explore whether user-defined custom literals could be supported at the library level, without the need for macros; it would make Rust more expressive, especially in contexts where a lot of calculation is done.
You can do "go to definition" on the custom literal 100nzusize and it will bring you to the macro_rules! nzusize definition.
There's a design space in Rust that's almost as good as user-defined literal suffixes and is available in the current language: methods. 3.14.i() isn't terrible compared to 3.14i, and as a bonus it can be used with non-literal expressions as well (let z = a + b.i()).
The same goes for NonZero and other types with preconditions, except that it would be nice to have an "always const-evaluate this const method if possible" mechanism, so that something like 0.nz() is guaranteed to fail at compile time without an explicit const context.
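A minimal sketch of the method approach for the imaginary-unit case, using num-complex and a hypothetical extension trait name:

use num_complex::Complex;

// Hypothetical extension trait so any f64 expression gets an .i() method.
trait IntoImaginary {
    fn i(self) -> Complex<f64>;
}

impl IntoImaginary for f64 {
    fn i(self) -> Complex<f64> {
        Complex::new(0.0, self)
    }
}

fn main() {
    let (a, b) = (3.0, 4.0);
    let z = a + b.i(); // works with non-literal expressions too
    assert_eq!(z, Complex::new(3.0, 4.0));
}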
To add to this, one will also often want/need to specify the precision (single or double) of the complex number. However, adding this to the i suffix would then clash with integer suffixes. A purely imaginary number would then have to be written as 0c32 + 7i.[1]
j as suffix would avoid the name clash and any ambiguity.
[1] Aside: it's also unclear whether the precision should be specified for each component, or as the size of the whole complex number, as suggested by @SciMind2460.