Currently, when writing a literal like `1.1f64`, the compiler automatically rounds it to the nearest representable value, which may not be exactly what users want. For example, with `let a = 1.1f64` and `let b = 11f64`, we would like `b / a == 10f64` and `b % a` near 0, but `b % a` is actually near `1.1`; even going through `div_euclid`/`rem_euclid` yields a remainder near `1.1`.
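This can be reproduced on stable Rust today (the values in the comments are what round-to-nearest `f64` arithmetic produces):

```rust
fn main() {
    // 1.1f64 is really 1.100000000000000088817841970012523233890533447265625,
    // i.e. slightly *larger* than the decimal 1.1 the user wrote.
    let a = 1.1f64;
    let b = 11f64;

    // The quotient happens to round back to exactly 10.0 ...
    assert_eq!(b / a, 10.0);

    // ... but because a > 1.1, the true quotient is just below 10, so the
    // remainder is near 1.1 instead of near 0:
    println!("{}", b % a);           // ~1.0999999999999992
    println!("{}", b.rem_euclid(a)); // same value for positive operands
    assert!(b % a > 1.09);
}
```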
To let users control the rounding error, it might be better to introduce suffixes such as `up`/`down`/`u`/`d`/`exact`/...
`up`/`down` round in the same direction as calling `next_up()`/`next_down()`, but with one difference: the suffix only moves the value when the default round-to-nearest result lands on the wrong side of the written decimal. So:

`1.1f32up == 1.1f32` (since `1.1f32` already rounds to a value a little larger than 1.1) `== 1.1f32u` (for abbr.) `== 1.1u` (omitting the `f32` type might be acceptable)

`(1.1f32).next_down() == 1.1f32down == 1.1d == ...`
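A small sketch of the relationships above; `next_down_pos` is a hypothetical bit-level stand-in for `f32::next_down` (which was only recently stabilized), valid for positive finite floats only:

```rust
/// One ulp below a positive, finite, nonzero f32 (assumption: no sign,
/// NaN, or infinity handling, unlike the real next_down).
fn next_down_pos(x: f32) -> f32 {
    f32::from_bits(x.to_bits() - 1)
}

fn main() {
    // 1.1f32 already rounds *up*: its value is 1.10000002384185791015625,
    // which is above the decimal 1.1, so 1.1f32up would equal plain 1.1f32 ...
    assert!(f64::from(1.1f32) > 1.1f64);

    // ... while 1.1f32down would be one ulp lower, below the decimal 1.1:
    let down = next_down_pos(1.1f32);
    assert!(f64::from(down) < 1.1f64);
    println!("{} vs {}", 1.1f32, down);
}
```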
You may notice that `1.1f32` has exactly the same value as its fully written-out decimal expansion, `1.10000001.....f32`. No function can treat `1.1f32` and `1.1000...f32` as different inputs, because they are already the same value by the time any function sees them. This is why I introduce new suffixes here.
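Concretely, two differently spelled literals collapse to the same bit pattern before any runtime code can inspect them:

```rust
fn main() {
    // Both decimal spellings round to the same f32, so no function,
    // macro, or trait can distinguish them at runtime:
    assert_eq!(1.1f32, 1.10000001f32);
    assert_eq!(1.1f32.to_bits(), 1.10000001f32.to_bits());
    println!("{:#010x}", 1.1f32.to_bits());
}
```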
Another interesting thing is the exact form. I originally wanted `exact`/`exactup`/`exactdown` suffixes, so the compiler could check whether the user has provided enough digits, but the grammar becomes very confusing:

`1.25e-1e-2f32`

(here an `e` could be read either as an exponent marker or as the start of a suffix).
Maybe we should not omit the `f32`/`f64` indicator here. Or perhaps we could write `ud` directly to mean exact, with `ue`/`de` for up_enough/down_enough, which assert that enough digits are provided: even modifying the last digit by +1 or -1 does not change the rounding result.
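One possible interpretation of that "enough digits" check can be sketched with today's `str::parse`; `has_enough_digits` is a hypothetical helper, not part of any proposal, and it only handles the simple case where perturbing the last digit needs no carry or borrow:

```rust
/// Returns true if perturbing the last decimal digit by +/-1 still
/// rounds (to nearest) to the same f64 as the original string.
fn has_enough_digits(digits: &str) -> bool {
    let base: f64 = digits.parse().unwrap();
    let mut bytes = digits.as_bytes().to_vec();
    let last = bytes.len() - 1;
    let d = bytes[last] - b'0';
    // A real implementation would carry/borrow across digits; this
    // sketch just rejects the awkward 0 and 9 cases.
    if d == 0 || d == 9 {
        return false;
    }
    bytes[last] = b'0' + d - 1;
    let minus: f64 = String::from_utf8(bytes.clone()).unwrap().parse().unwrap();
    bytes[last] = b'0' + d + 1;
    let plus: f64 = String::from_utf8(bytes).unwrap().parse().unwrap();
    minus == base && plus == base
}

fn main() {
    // 17 significant digits pin down the f64 nearest to pi ...
    assert!(has_enough_digits("3.1415926535897932"));
    // ... but 16 digits do not: ...792 and ...794 round differently.
    assert!(!has_enough_digits("3.141592653589793"));
    println!("ok");
}
```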
For a real-world example, we might accept:

```rust
// e for exact
const PI: f64 = 3.141592653589793115997963468544185161590576171875f64e;
const PI: f64 = 3.1415926535897932f64ed;
const PI: f64 = 3.141592653589793f64eu;
// e for enough
const PI: f64 = 3.141592653589793115997963468544185161590576171875ud;
const PI: f64 = 3.1415926535897932de;
const PI: f64 = 3.141592653589793u;
```
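As a sanity check with today's parser, all three decimal spellings used above denote the same `f64`; the long one is the exact decimal expansion of the double nearest to pi, so the proposed exact/enough checks would accept them:

```rust
fn main() {
    // Exact decimal expansion of the f64 nearest to pi:
    let exact: f64 = "3.141592653589793115997963468544185161590576171875"
        .parse()
        .unwrap();
    // Shorter spellings that still round (to nearest) to the same double:
    let seventeen: f64 = "3.1415926535897932".parse().unwrap();
    let sixteen: f64 = "3.141592653589793".parse().unwrap();

    assert_eq!(exact, std::f64::consts::PI);
    assert_eq!(seventeen, std::f64::consts::PI);
    assert_eq!(sixteen, std::f64::consts::PI);
    println!("all spellings agree");
}
```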